A new paper, "Differences in misinformation sharing can lead to politically asymmetric sanctions," published today in Nature, suggests that the higher rate of social media policy enforcement (such as account suspensions) against conservative users can be explained by the greater amount of misinformation those users share, and so does not constitute evidence of inherent bias in social media companies' policies or in the definition of what counts as misinformation.
The paper was written by researchers from MIT Sloan School of Management, the University of Oxford, Cornell University, and Yale University; its co-authors are Mohsen Mosleh, Qi Yang, Tauhid Zaman, Gordon Pennycook, and David G. Rand.
The spread of misinformation has become an increasing concern, especially as the 2024 presidential election in the United States approaches. Many Americans who disagree on political issues agree that the sharing of false information is a substantial problem, and sixty-five percent of Americans say technology companies should take action to restrict its spread. There is far less agreement, however, on whether tech companies are actually moderating their platforms fairly.
"Accusations of political bias are often based largely on anecdotes or noteworthy cases, such as the suspension from Twitter and Facebook of former President Trump," said MIT Sloan professor Rand. "This study allows us to systematically evaluate the data and better understand the differential rates of policy enforcement."
The asymmetry between sanctions on conservatives and sanctions on liberals should not, on its own, be attributed to partisan bias on the part of social media companies or of those determining what counts as misinformation, Rand and his co-authors noted.
The research began by looking at Twitter's suspension of users following the 2020 U.S. presidential election. Researchers identified 100,000 Twitter users from October 2020 who shared hashtags related to the election, and randomly sampled 9,000 — half of whom shared at least one #VoteBidenHarris2020 hashtag and half of whom shared at least one #Trump2020 hashtag. Researchers analyzed each user's data from the month before the election to quantify their tendency to share news from low-quality domains (as well as other potentially relevant characteristics), and then checked nine months later to determine which users were suspended by Twitter.
Accounts that had shared #Trump2020 before the election were 4.4 times as likely to have been subsequently suspended as those that shared #VoteBidenHarris2020: only 4.5% of the users who shared Biden hashtags had been suspended as of July 2021, compared with 19.6% of the users who shared Trump hashtags.
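The 4.4-times figure follows directly from the two suspension percentages reported above. A minimal sketch of that arithmetic, using only the summary numbers quoted in this article (not the study's underlying data):

```python
# Relative suspension rate of Trump-hashtag vs. Biden-hashtag accounts,
# computed from the percentages reported in the article.
trump_suspended_pct = 19.6  # % of #Trump2020 sharers suspended by July 2021
biden_suspended_pct = 4.5   # % of #VoteBidenHarris2020 sharers suspended

relative_rate = trump_suspended_pct / biden_suspended_pct
print(f"{relative_rate:.1f}x")  # → 4.4x
```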
"We found that there were political differences in behavior, in addition to the political differences in enforcement," said Rand. "The fact that the social media accounts of conservatives are suspended more than those of liberals is therefore not evidence of bias on the part of tech companies, and shouldn't be used to pressure tech companies to abandon policies meant to reduce the sharing of misinformation."
To better understand this difference, the researchers used two methods to assess the reliability of the news sources shared by these politically active Twitter users. They took a set of 60 news domains (the 20 highest-volume sites in each of three categories: mainstream, hyper-partisan, and fake news) and collected trustworthiness ratings for each domain from eight professional fact-checkers. To address concerns about potential bias on the part of journalists and fact-checkers, they also collected ratings from politically balanced groups of laypeople. Both approaches indicated that people who used Trump hashtags shared four times more links to low-quality news outlets than those who used Biden hashtags.
"Prior work identifying political differences in misinformation sharing has been criticized for relying on the judgment of professional fact-checkers. But we show that conservative Twitter users shared much lower quality news, even when relying on ratings from politically balanced groups of laypeople," said co-author Dr Mohsen Mosleh, Associate Professor, Oxford Internet Institute, part of the University of Oxford. "This can't be written off as the result of political bias in the ratings, and means that preferential suspension of conservative users is not necessarily the result of political bias on the part of social media companies."
The study also found that similar associations between conservatism and low-quality news sharing (based on both expert and politically balanced layperson ratings) were present in seven other datasets from Twitter, Facebook, and survey experiments, spanning 2016 to 2023 and covering 16 different countries. For example, the researchers found cross-cultural evidence of conservatives sharing more unambiguously false claims about COVID-19 than liberals, and conservative political elites in the U.K. and Germany likewise shared links to lower-quality news sources than their liberal counterparts.
"The social media users analyzed in this research are not representative of Americans more broadly, so these findings do not necessarily mean that conservatives in general are more likely to spread misinformation than liberals. Also, we're just looking at this particular period in time," said Rand. "Our basic point would be the same if it was found that liberal users shared more misinformation and were getting suspended more. Such a pattern of suspension would not be enough to show bias on the part of the companies, because of the differences in users' behavior."
Even under politically neutral anti-misinformation policies, the researchers expect that there would be political asymmetries in enforcement. While the analyses do not rule out the possibility of any bias on the part of platforms, the inequality of sanctions is not diagnostic of bias one way or the other. Policy-makers need to be aware that even if social media companies are working in an unbiased way to manage misinformation on their platforms, there will still be some level of differential treatment across groups.