AI Tools Tuned for Specific Political Ideologies

As large language models play an increasing role in public discourse, a new study led by Brown researchers raises important ethical questions about how these AI tools can be adapted by users after their public release.

PROVIDENCE, R.I. [Brown University] - In an era where artificial intelligence is playing a growing role in shaping political narratives and public discourse, researchers have developed a framework for exploring how large language models (LLMs) can be adapted to be deliberately biased toward specific political ideologies.

Led by a team from Brown University, the researchers developed a tool called PoliTune to show how some current LLMs - similar to the models used to develop chatbots like ChatGPT - can be adapted to express strong opinions on social and economic topics that differ from the more neutral tone originally imparted by their creators.

"Imagine a foundation or a company releases a large language model for people to use," said Sherief Reda, a professor of engineering and computer science at Brown. "Someone can take the LLM, tune it to change its responses to lean left, right or whatever ideology they're interested in, and then upload that LLM onto a website as a chatbot for people to talk with, potentially influencing people to change their beliefs."

The work highlights important ethical concerns about how open-source AI tools could be adapted after public release, especially as AI chatbots are being increasingly used to generate news articles, social media content and even political speeches.

"These LLMs take months and millions of dollars to train," Reda said. "We wanted to see if it is possible for someone to take a well-trained LLM that is not exhibiting any particular biases and make it biased by spending about a day on a laptop computer to essentially override what has been millions of dollars and a lot of effort spent to control the behavior of this LLM. We're showing that somebody can take an LLM and steer it in whatever direction they want."

While raising ethical concerns, the work also advances scientific understanding of how much these language models can actually understand, including whether they can be configured to better reflect the complexity of diverse opinions on social issues.

"The ultimate goal is that we want to be able to create LLMs that are able, in their responses, to capture the entire spectrum of opinions on social and political problems," Reda said. "The LLMs we are seeing now have a lot of filters and fences built around them, and it's holding the technology back with how smart they can truly become, and how opinionated."

The researchers presented their study on Monday, Oct. 21, at the Association for the Advancement of Artificial Intelligence Conference on AI, Ethics and Society. During their talk, they explained how to create datasets that represent a range of social and political opinions. They also described a technique called parameter-efficient fine-tuning, which allows them to make small adjustments to the open-source LLMs they used - LLaMa and Mistral - so that the models respond according to specific viewpoints. Essentially, the method allows the model to be customized without completely reworking it, making the process quicker and more efficient.
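Parameter-efficient fine-tuning is commonly implemented with low-rank adapters (LoRA). The following is a minimal sketch using the Hugging Face transformers and peft libraries; the base checkpoint and hyperparameters are illustrative assumptions, not the settings reported by the PoliTune authors.

```python
# Minimal LoRA-based parameter-efficient fine-tuning setup (illustrative).
# The base checkpoint and hyperparameters below are assumptions, not the
# study's actual configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the original weights and inserts small trainable matrices
# into selected layers, so only a tiny fraction of parameters is updated.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank updates
    lora_alpha=32,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices are trained, this kind of tuning can run in hours on modest hardware rather than requiring the resources used to train the base model.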

Part of the process involved giving the LLMs a question along with two example responses - one reflecting a right-leaning viewpoint and the other a left-leaning viewpoint. The model learns to distinguish these opposing perspectives and can then adjust its answers to favor one viewpoint and move away from the other, rather than remaining neutral.
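In practice, paired examples like these are often expressed as preference data: a prompt, a response the tuned model should favor, and a response it should move away from. The sketch below shows what one such record might look like using the Hugging Face datasets library; the question and responses are hypothetical placeholders, not items from the researchers' dataset, and the resulting dataset could then be fed to a preference-optimization trainer on top of the adapter-equipped model.

```python
from datasets import Dataset

# Hypothetical preference pairs: each record holds a prompt, the response the
# tuned model should favor ("chosen"), and the one it should move away from
# ("rejected"). Swapping the two columns reverses the direction of the bias.
pairs = [
    {
        "prompt": "What role should the government play in healthcare?",
        "chosen": "Government should guarantee universal coverage because ...",
        "rejected": "Healthcare decisions are best left to private markets ...",
    },
    # ... more question/response pairs covering social and economic topics
]
preference_dataset = Dataset.from_list(pairs)
print(preference_dataset)  # features: ['prompt', 'chosen', 'rejected']
```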

"By selecting the appropriate set of data and training approach, we're able to take different LLMs and make them left-leaning so its responses would be similar to a person who leans left on the political spectrum," Reda said. "We then do the opposite to make the LLM lean right with its responses."
