AI Fact Checks Could Increase Belief in Fake News

Indiana University

Although many tech companies and start-ups have touted the potential of automated fact-checking services powered by artificial intelligence to stem the rising tide of online misinformation, a new study led by researchers at Indiana University has found that AI fact-checking can, in some cases, actually increase belief in false headlines whose veracity the AI was unsure about, and decrease belief in true headlines it mislabeled as false.

The work also found that participants given the option to view headlines fact-checked by an AI powered by a large language model were significantly more likely to share both true and false news, but more likely to believe only false headlines, not true ones.

The study, "Fact-checking information from large language models can decrease headline discernment," was published Dec. 4 in the Proceedings of the National Academy of Sciences. The first author is Matthew DeVerna, a Ph.D. student at the Indiana University Luddy School of Informatics, Computing and Engineering in Bloomington. The senior author is Filippo Menczer, IU Luddy Distinguished Professor and director of IU's Observatory on Social Media.

"There is a lot of excitement about leveraging AI to scale up applications like fact-checking, as human fact-checkers cannot keep up with the volume of false or misleading claims spreading on social media, including content generated by AI," DeVerna said. "However, our study highlights that when people interact with AI, unintended consequences can arise, highlighting how important it is to carefully consider how these tools are deployed."

In the study, IU scientists specifically investigated the impact of fact-checking information generated by a popular large language model on belief in, and sharing intent of, political news headlines in a pre-registered randomized controlled experiment.
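For readers unfamiliar with what "fact-checking information generated by a large language model" looks like in practice, the minimal sketch below shows how a headline might be submitted to a chat-style LLM API for a verdict. The endpoint, model name, prompt wording, and label scheme are illustrative assumptions, not the actual setup used in the study.

```python
# Hypothetical sketch of LLM-based headline fact-checking.
# Model name, prompt, and labels are assumptions for illustration;
# this is not the study authors' pipeline.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def fact_check_headline(headline: str) -> str:
    """Ask a chat model to label a headline True, False, or Unsure."""
    payload = {
        "model": "gpt-4o-mini",  # placeholder model choice
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a fact-checker. Label the following headline "
                    "True, False, or Unsure, then briefly explain why."
                ),
            },
            {"role": "user", "content": f"Headline: {headline}"},
        ],
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    # Return the model's fact-check text, e.g. "False: no evidence that..."
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(fact_check_headline("Scientists confirm the moon is hollow"))
```

As the study's findings suggest, the "Unsure" and mislabeled cases produced by a pipeline like this are precisely where human belief can be nudged in the wrong direction.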

Although the model accurately identified 90% of false headlines, the researchers found that this did not significantly improve participants' ability to distinguish between true and false headlines, on average.

In contrast, the researchers found that human-generated fact checks did enhance users' discernment of true headlines.

"Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences," said Menczer. "More research is needed to improve the accuracy of AI fact-checking as well as understand the interactions between humans and AI better."

Among the groups working to improve AI fact-checking is IU's Observatory on Social Media, where a project led by DeVerna aims to create a web browser extension that checks claims in articles linked from social media posts and provides more accurate, bridge-building responses. The Observatory is also the lead institution on a $7.5 million grant to better understand the role of human-AI interaction in the spread of false information online.

Additional contributors to the paper were Kai-Cheng Yang of Northeastern University and Harry Yaojun Yan of the Stanford Social Media Lab. This research was supported in part by the Knight Foundation and the Volkswagen Foundation.