
ANN ARBOR, Mich. — Artificial intelligence may be the strongest soldier in the war against fake news. An algorithm designed by University of Michigan researchers to weed out fake news works better than humans assigned to the same task, a new study shows.

The algorithm identifies fake news on information aggregator and social media sites like Google News and Facebook by homing in on telltale linguistic cues in the stories. After a series of tests, the researchers concluded that it is at least as good as humans at this task, and in many cases better.

The linguistic approach used by the algorithm could be the key to quickly distinguishing between real and fake information online, even before a story can be corroborated or cross-referenced. Researchers programmed the algorithm to identify patterns of word choice, grammar, punctuation and complexity typically seen in fake news stories.
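
The study's code is not reproduced here, but a minimal sketch helps illustrate the general idea: hand-crafted stylistic cues such as punctuation density, word length and sentence length, fed to an off-the-shelf classifier. The feature set, example texts and scikit-learn model below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a linguistic-cue fake news classifier.
# NOT the Michigan team's system; features and data are placeholders.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def linguistic_features(text: str) -> list:
    """Extract crude stylistic cues: punctuation, word length, complexity."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        text.count("!") / n_sents,                  # exclamation density
        text.count("?") / n_sents,                  # question density
        sum(len(w) for w in words) / n_words,       # mean word length
        n_words / n_sents,                          # mean sentence length
        sum(w.isupper() for w in words) / n_words,  # ALL-CAPS word ratio
    ]

# Placeholder training data; the actual study used hundreds of articles.
texts = [
    "Officials confirmed the budget figures in a routine quarterly report.",
    "SHOCKING!!! You won't BELIEVE what they are hiding from you!",
    "The committee reviewed the evidence and published its findings.",
    "UNREAL!! Secret cure BANNED by doctors, share before it's deleted!!!",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

X = np.array([linguistic_features(t) for t in texts])
model = LogisticRegression().fit(X, labels)

sample = "INSANE!!! This one trick is being HIDDEN from you!!"
prob = model.predict_proba(np.array([linguistic_features(sample)]))[0, 1]
print("fake probability:", prob)
```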

In the researchers' tests, the algorithm correctly identified fake news 76% of the time, compared with a 70% success rate for humans performing the same task.

For the study, researchers paid a team of participants to rewrite 240 real news stories, mimicking the style of each original while turning it into a fake news piece. Both the real versions and the fakes were then analyzed by the algorithm, which correctly identified 76% of the rewritten stories.
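
As a rough illustration of that paired setup (again, not the authors' code), scoring a detector on real/fake pairs reduces to counting correct calls on both sides of each pair; the placeholder detector and data below are hypothetical.

```python
# Hypothetical evaluation over paired real/fake stories.
def detect_fake(text: str) -> bool:
    """Placeholder detector; a real system would use the trained model above."""
    return text.count("!") > 2  # stand-in heuristic, not the study's model

def evaluate(pairs: list) -> float:
    """Each pair is (real_text, fake_text); count correct calls on both."""
    correct = 0
    for real_text, fake_text in pairs:
        correct += (not detect_fake(real_text)) + detect_fake(fake_text)
    return correct / (2 * len(pairs))

pairs = [
    ("The council approved the measure after a public hearing.",
     "EXPOSED!!! The council is HIDING the real vote from you!!!"),
]
print(f"accuracy: {evaluate(pairs):.0%}")
```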

Fake news stories can spread extremely quickly on social media and other platforms, so even when they are debunked just as fast, they have often already convinced a subset of readers that they are true. The authors believe their program could help websites that can't keep up with the constant flow of suspicious content.

“You can imagine any number of applications for this on the front or back end of a news or social media site,” says lead researcher and creator of the algorithm, Rada Mihalcea, a computer science and engineering professor at the university, in a school release. “It could provide users with an estimate of the trustworthiness of individual stories or a whole news site. Or it could be a first line of defense on the back end of a news site, flagging suspicious stories for further review. A 76 percent success rate leaves a fairly large margin of error, but it can still provide valuable insight when it's used alongside humans.”

The study will be presented at the 27th International Conference on Computational Linguistics in Santa Fe, New Mexico, on August 24th.

