Factual yet misleading vaccine content was 46 times more effective at driving vaccine hesitancy than flagged misinformation, reports a new study exploring the real-world impact of misinformation exposure. A second study, aiming to better understand the characteristics of "supersharers" – a small group of individuals increasingly found to account for much of the misinformation in circulation – reports that just over 2,000 supersharers on X (formerly Twitter) spread 80% of the fake news during the 2020 US presidential election. Drawing on a sample of more than 660,000 registered voters on X, that study found the supersharers were mainly middle-aged Republican women in conservative states.
Misinformation, particularly when spread widely across social media, is considered a substantial threat to science, public health, and democratic processes worldwide. Despite this, the real-world impact of exposure to misinformation remains largely unknown, and the characteristics and scale of influence of those who spread the most misinformation are hard to pin down. In two separate studies, the authors address these knowledge gaps through quantitative analyses. Their findings offer insights for designing more effective strategies to curtail the spread of misinformation.
In one study, Jennifer Allen and colleagues evaluate the impact of factually accurate but deceptive vaccine-related links shared on Facebook during the rollout of the first COVID-19 vaccine in 2021. Low vaccine uptake in the US has been widely attributed to social media misinformation. Although the impact of conspicuous vaccine misinformation was curbed once it was flagged and debunked as false by Facebook's third-party fact-checkers, more ambiguous content – factual but potentially misleading vaccine-skeptical content from credible sources – often went unflagged. An example of this true-but-misleading content is a story published in the Chicago Tribune: "A 'healthy' doctor died two weeks after getting a COVID-19 vaccine; the CDC is investigating why." Although there was no evidence that the vaccine had anything to do with the death, the headline's framing falsely implied causation. The story was viewed by nearly 55 million people on the platform. Using a combination of lab experiments, crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination uptake of roughly 233 million US Facebook users, Allen et al. found that this unflagged vaccine-skeptical content was 46 times more consequential in driving vaccine hesitancy than content flagged as misinformation.
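For readers who want to see the arithmetic behind this kind of comparison, a minimal sketch in Python – using made-up numbers rather than the study's estimates – illustrates why low-potency content with enormous reach can outweigh high-potency content that few people see.

    # Illustrative only: hypothetical per-view effects and view counts,
    # not the study's figures. The aggregate impact of a class of content
    # scales as (persuasive effect per view) x (total views).

    def total_impact(effect_per_view: float, views: int) -> float:
        """Expected number of people dissuaded = per-view effect x views."""
        return effect_per_view * views

    # Hypothetical inputs: flagged misinformation is more persuasive per view
    # but reaches far fewer people than unflagged vaccine-skeptical content.
    flagged = total_impact(effect_per_view=0.002, views=10_000_000)
    unflagged = total_impact(effect_per_view=0.0005, views=2_000_000_000)

    print(f"flagged:   {flagged:,.0f} people dissuaded (hypothetical)")
    print(f"unflagged: {unflagged:,.0f} people dissuaded (hypothetical)")
    print(f"ratio:     {unflagged / flagged:.0f}x")

With these invented inputs the unflagged content is 50 times more consequential overall despite being one quarter as persuasive per view – the same reach-versus-potency logic that underlies the study's 46-fold estimate.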
In the second study, Sahar Baribi-Bartov and colleagues investigate who was responsible for spreading misinformation about voting on X during the 2020 US presidential election. According to the authors, little is known about the spread of fake news by individual users. Baribi-Bartov et al. evaluated supersharers on X and found that, in a sample of 664,391 US registered voters, only 2,107 accounted for 80% of the fake news shared on the platform during the election. In a demographic breakdown of these individuals, the authors found a significant over-representation of Republican middle-aged white women residing in three largely conservative states – Arizona, Florida, and Texas. These individuals were more often from neighborhoods with lower educational attainment but relatively higher incomes. Moreover, the authors discovered that the supersharers' massive volume of content promotion was generated through manual and persistent retweeting. A key finding is that these supersharers – despite making up a small percentage of users – received more engagement than regular users and were highly connected and influential, reaching roughly 5.2% of registered voters on the platform.
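To make the "80% of fake news" framing concrete, a minimal sketch in Python – using toy data, not the authors' voter-matched sample or classification pipeline – finds the smallest group of users whose shares account for 80% of all fake-news shares, which is the sense in which a handful of accounts can dominate the total volume.

    # A minimal sketch (not the authors' method): given a per-user tally of
    # fake-news shares, greedily take the heaviest sharers until they cover
    # 80% of all shares.
    from collections import Counter

    def supersharers(shares_per_user: Counter, coverage: float = 0.80) -> list[str]:
        """Return the smallest set of users whose shares cover `coverage` of the total."""
        total = sum(shares_per_user.values())
        covered, chosen = 0, []
        for user, n in shares_per_user.most_common():  # heaviest sharers first
            if covered >= coverage * total:
                break
            chosen.append(user)
            covered += n
        return chosen

    # Toy data: a few heavy sharers and many light ones.
    counts = Counter({"user_a": 500, "user_b": 300, "user_c": 150,
                      **{f"user_{i}": 1 for i in range(100)}})
    top = supersharers(counts)
    print(f"{len(top)} of {len(counts)} users account for 80% of shares: {top}")

In this toy example, 3 of 103 users cover 80% of the shares; the study reports an analogous concentration at a vastly larger scale (2,107 of 664,391 sampled voters).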
In a related Perspective, Sander van der Linden and Yara Kyrychenko discuss both studies and their limitations in greater detail.
For reporters interested in trends, a January 2019 study in Science evaluated the proliferation of fake news on Twitter (now X) during the 2016 election cycle.