Political observers have been troubled by the rise of online misinformation, a concern that has grown as Election Day approaches. However, while the spread of fake news may pose threats, a new study finds that its influence is not universal. Rather, users with extreme political views are more likely than others both to encounter and to believe false news.
"Misinformation is a serious issue on social media, but its impact is not uniform," says Christopher K. Tokita, the lead author of the study, conducted by New York University's Center for Social Media and Politics (CSMaP).
The findings, which appear in the journal PNAS Nexus, also indicate that current methods to combat the spread of misinformation are likely insufficient on their own, and that the most effective way to address it is to implement interventions quickly and to target them toward the users most likely to be vulnerable to these falsehoods.
"Because these extreme users also tend to see misinformation early on, current social media interventions often struggle to curb its impact—they are typically too slow to prevent exposure among those most receptive to it," adds Zeve Sanderson, executive director of CSMaP.
Existing methods for assessing exposure to and the impact of online misinformation rely on measuring views or shares. However, these metrics fail to capture misinformation's true impact, which depends not just on how widely it spreads but also on whether users actually believe the false information.
To address this shortcoming, Tokita, Sanderson, and their colleagues developed a novel approach using Twitter (now "X") data to estimate not just how many users were exposed to a specific news story, but also how many were likely to believe it.
"What is particularly innovative about our approach in this research is that the method combines social media data tracking the spread of both true news and misinformation on Twitter with surveys that assessed whether Americans believed the content of these articles," explains Joshua A. Tucker, a co-director of CSMaP and an NYU professor of politics, one of the paper's authors. "This allows us to track both the susceptibility to believing false information and the spread of that information across the same articles in the same study."
The researchers captured 139 news articles published between November 2019 and February 2020 (102 rated as true and 37 rated as false or misleading by professional fact-checkers) and calculated the spread of those articles across Twitter from the time of their initial publication.
This sample of popular articles was drawn from five types of news streams: mainstream left-leaning publications, mainstream right-leaning publications, low-quality left-leaning publications, low-quality right-leaning publications, and low-quality publications without an apparent ideological lean. To establish the veracity of the articles, each one was sent to a team of professional fact-checkers within 48 hours of publication; the fact-checkers rated each article as "true" or "false/misleading."
To estimate exposure to and belief in these articles, the researchers combined two types of data. First, they used Twitter data to identify which users on Twitter were potentially exposed to each of the articles; they also estimated each potentially exposed user's ideological placement on a liberal-conservative scale by using an established method that infers a user's ideology from the prominent news and political accounts they follow.
Second, to determine the likelihood that these exposed users would believe an article to be true, they deployed real-time surveys as each article spread online. These surveys asked Americans who are habitual internet users to classify the article as true or false and to provide demographic information, including their ideology. From this survey data, the authors calculated the proportion of individuals within each ideological category who believed the article to be true. Combining these estimates with the exposure data, they could calculate, for each article, the number of Twitter users who were both exposed to it and receptive to believing it.
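The per-article calculation described above can be sketched as follows. This is a minimal illustration with hypothetical ideology bins and made-up numbers, not the authors' actual data, bins, or code:

```python
# Illustrative sketch: combine exposure counts (from Twitter data) with
# survey-based belief rates to estimate, for one article, how many exposed
# users were also receptive to believing it. All numbers are hypothetical.

# Twitter users potentially exposed to the article, binned by ideology
exposed_by_ideology = {
    "extreme_left": 12000,
    "moderate_left": 45000,
    "moderate_right": 38000,
    "extreme_right": 15000,
}

# Share of survey respondents in each ideology bin who judged the article true
belief_rate_by_ideology = {
    "extreme_left": 0.35,
    "moderate_left": 0.15,
    "moderate_right": 0.20,
    "extreme_right": 0.40,
}

def exposed_and_receptive(exposed, belief_rate):
    """Estimated users per ideology bin who were both exposed to the
    article and likely to believe it (exposure count x belief rate)."""
    return {bin_: exposed[bin_] * belief_rate[bin_] for bin_ in exposed}

receptive = exposed_and_receptive(exposed_by_ideology, belief_rate_by_ideology)
total_receptive = sum(receptive.values())
```

With these hypothetical inputs, the extreme bins contribute disproportionately to the receptive audience because their belief rates are higher, mirroring the study's core finding.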
Overall, the findings showed that while false news reached users across the political spectrum, those with more extreme ideologies (both conservative and liberal) were far more likely to both see and believe it. Crucially, these misinformation-receptive users tended to encounter false articles early in their spread across Twitter.
The research design also allowed the study's authors to simulate the impact of different types of interventions designed to stop the spread of misinformation. One takeaway from these simulations was that the earlier interventions were applied, the more likely they were to be effective. Another was that "visibility" interventions—whereby a platform makes flagged misinformation posts less likely to appear in users' feeds—appeared more likely to reduce the reach of misinformation to susceptible users than did interventions aimed at making users less likely to share misinformation.
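The timing effect from these simulations can be sketched in miniature. The function below is an illustrative toy model, not the authors' simulation code: it scales an article's hourly exposure counts after a "visibility" intervention takes effect, and the hypothetical numbers show why an earlier intervention prevents more exposure:

```python
# Toy model of a "visibility" intervention: once the intervention starts,
# the flagged article is less likely to appear in feeds, so each later
# hour's exposure count is scaled down. Numbers are illustrative.

def simulate_visibility_intervention(hourly_exposures, start_hour, visibility_factor):
    """Scale exposures from `start_hour` onward by `visibility_factor` (0 to 1)."""
    return [
        count if hour < start_hour else count * visibility_factor
        for hour, count in enumerate(hourly_exposures)
    ]

# Hypothetical hourly exposures for one false article; spread peaks early
exposures = [100, 400, 900, 700, 300, 100]

early = simulate_visibility_intervention(exposures, start_hour=1, visibility_factor=0.25)
late = simulate_visibility_intervention(exposures, start_hour=4, visibility_factor=0.25)
# A late intervention misses the peak hours, so far more users are exposed
```

Because most exposure happens in the first hours of spread, and receptive users cluster there, a late intervention leaves nearly the entire peak untouched.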
"Our research indicates that understanding who is likely to be receptive to misinformation, not just who is exposed to it, is key to developing better strategies to fight misinformation online," advises Tokita, now a data scientist in the tech industry.
The study's other authors included Kevin Aslett, a CSMaP postdoctoral researcher and University of Central Florida professor at the time of the study who now works as a researcher in the tech industry; William P. Godel, an NYU doctoral student at the time of the study and now a researcher in the tech industry; and CSMaP researchers Jonathan Nagler and Richard Bonneau.
The research was supported by a graduate research fellowship from the National Science Foundation (DGE1656466).