Slopaganda Threatens to Upend Tight Elections

Macquarie University/The Lighthouse
Is generative AI better at producing persuasive disinformation than humans? If so, how might this play out in election campaigns? In a new paper, Professor Mark Alfano explains the interplay between propaganda and GenAI.

'Slopaganda' is AI-generated content disseminated to manipulate beliefs for political ends.

The term blends propaganda and generative AI "slop" – a word coined only last year to refer to low-quality, unwanted AI-generated content.

Slopaganda jumps the queue ahead of political rhetoric, propaganda and misinformation – all strategies that, by design, influence groups' decision-making by shaping the way information is absorbed.

Knowledge pollution: Philosophy Professor Mark Alfano says democracies around the world are suffering from AI-generated deception - known as slopaganda - targeting citizens, immigrants and refugees.

Deceptions, lies, and simply staying silent are all ways in which agents can influence people's decision-making. Incorrect or maladaptive information can influence elections, shape institutional policy, undermine public health efforts, and even start wars.

Slopaganda makes it possible to supply an endless stream of plausible, threatening messages targeted to individuals. Ultimately, these messages lodge in memory, where they can influence people over their entire lifetimes, or at least long enough to matter for some decision-making.

Slopaganda also takes advantage of confirmation bias: people actively seeking confirmatory evidence are more likely to accept information as true when it conforms to their prior beliefs. With micro-targeted slopaganda, it is possible, however imperfectly, to estimate an individual's prior beliefs – on anything from immigration and refugees to conspiracy theories and antisemitism – and then serve them content that reinforces pernicious political biases without offering alternative perspectives.

At least since philosopher Francis Bacon's time, the slogan "knowledge is power" has been used to capture this relationship between information and decision-making at a group level.

But rhetoric and propaganda have become hackneyed and commonplace, and may have lost their effectiveness with more sophisticated consumers of information. Slopaganda, however, is new and unexplored, and thus more difficult to recognise for what it is. It differs from other forms of group influence in what it makes possible: it can target sub-audiences through market segmentation and individualisation, for example, whereas a newspaper appears the same to every reader and a radio broadcast carries a single, universal message.

Slopaganda differs from traditional propaganda in scale, scope and speed. GenAI produces large volumes of fabricated content quickly and at low cost, making such campaigns far more accessible. It also produces highly personalised and diverse content, which can drive engagement on a much larger scale. Today, slopaganda is behind many "local" news articles, which readers are likely to trust more than national or international coverage.

Fake it till you make it

Propaganda has long been tied to technology. With its invention during the Renaissance, for example, the printing press became a tool of mass influence. Traditional propaganda relied on media such as the yellow press, pamphlets, radio broadcasts and television.

Electrification in the early 20th century brought new technologies. Joseph Goebbels, minister of propaganda for the German Third Reich under Adolf Hitler, commissioned the manufacture of the Volksempfänger, or "people's receiver", to broadcast Nazi propaganda directly into people's homes. Partly as a result, propaganda is now almost universally associated with totalitarian states.

With the rise of the World Wide Web and social media platforms came computational propaganda and, with it, social bots and algorithmic amplification. Today, GenAI creates tailored messages and narratives according to user characteristics, and there is emerging evidence that this can be so effective that it is hard to counter with accurate messaging.

This asymmetry makes debunking misinformation and disinformation notoriously difficult, as has been noted in discussions of "Brandolini's Law". The "law" was formulated in 2013 by Italian software expert Alberto Brandolini in a facetious tweet: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it."

Elon Musk and Donald Trump have been centre stage in spreading slopaganda.

During the 2024 US presidential campaign, Donald Trump posted GenAI slop that suggested Taylor Swift had endorsed him. He also falsely accused his rival of posting GenAI images.

Elon Musk posted an AI deepfake imitating the voice of US Democratic presidential nominee Kamala Harris, in which she appeared to say deeply embarrassing and troubling things.

In Australia in 2023, the mayor of a regional town was defamed by ChatGPT when a user prompted it to describe crimes the mayor had supposedly committed. He was not a convicted criminal, but ChatGPT accused him of being one. This was the act of a single user, but it showed that such content could easily be produced and published at scale, at speed and with micro-targeting.

The hierarchy of persuasion

Slopaganda may serve multiple purposes, including facilitating wrongdoing by public officials. Three levels of social organisation are typically involved in malfeasance and atrocity at scale.

At the lowest level are the foot soldiers of a social movement: the people who do things, such as the brownshirts of the Nazi party. At this level, information-shaping strategies such as slopaganda offer a rationale for action, especially when it involves violence.

Next is the middle tier of bureaucrats: the people who authorise things. Slopaganda, like propaganda, gives reasons, however flimsy, for authorising the actions of the foot soldiers. The bureaucrats don't need to be persuaded; they merely need to be able to point to something that looks plausible enough to justify their decisions.

At the top of the pyramid are those who formulate, direct and spread slopaganda: the ideologues who orchestrate. Like the bureaucrats, they may or may not believe what is being propagated (or "slopagated"), but they take advantage of communication tools to get their message to an audience willing to hear it.

Democracies around the world suffer from these narcissistic efforts to pollute knowledge, as do citizens, denizens, immigrants, and refugees.

One of the most promising interventions to counter slopaganda may be a global wealth tax, which would simultaneously reduce the power of oligarchs and fund counter-measures grounded in research, industry collaboration, content moderation and community standards.

Slopaganda: The interaction between propaganda and generative AI, by Michał Klincewicz, Mark Alfano and Amir Ebrahimi Fard, was published in Filosofiska Notiser.

Mark Alfano is Professor of Philosophy at Macquarie University in the School of Humanities and a member of the Ethics and Agency Research Centre.
