AI Chatbots: Eco Helpers or Bias Repeaters?

University of British Columbia

AI chatbots may seem like neutral tools, but a new study from UBC researchers suggests they often contain biases that could shape environmental discourse in unhelpful ways.

The research team examined how four leading AI chatbots respond to questions about environmental issues—and the findings are surprising.

"It was striking how narrow-minded AI models were in discussing environmental challenges," said lead researcher Hamish van der Ven, an assistant professor in the faculty of forestry who studies sustainable business management.

"We found that chatbots amplified existing societal biases and leaned heavily on past experience to propose solutions to these challenges, largely steering clear of bold responses like degrowth or decolonization."

Reflecting societal biases

The researchers analyzed four widely used AI models, including OpenAI's GPT-4 and Anthropic's Claude 2, by prompting them with questions about the causes and consequences of environmental challenges and potential solutions. Responses were then evaluated for identifiable forms of bias.

The results showed that chatbots often reflected the same biases we see in society. They leaned heavily on Western scientific perspectives, marginalized the contributions of women and of scientists outside North America and Europe, largely ignored Indigenous and local knowledge, and rarely suggested bold, systemic solutions to problems like climate change.

All the bots downplayed the roles of investors and businesses in creating environmental problems, and were more inclined to flag governments as the main culprits.

The bots were also reluctant to associate environmental challenges with broader social justice issues, like poverty, colonialism and racism.
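
For readers curious about the mechanics of such an audit, the sketch below shows, in rough outline only, how one might send the same environmental question to two chatbot APIs and store the replies for manual bias coding. It is not the study's actual code: the model names, example question and output format are assumptions for illustration, and running it requires OpenAI and Anthropic API keys.

```python
# Illustrative sketch only -- not the UBC team's code. It sends one
# environmental question to two chatbot APIs and saves the answers with
# empty "bias_codes" fields for a human reviewer to fill in later.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
import json

import anthropic
from openai import OpenAI

# Example prompt; the study's actual questions are not reproduced here.
QUESTION = "What are the main causes of climate change, and what solutions exist?"

def ask_gpt4(question: str) -> str:
    """Query OpenAI's GPT-4 chat endpoint and return the text reply."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_claude(question: str) -> str:
    """Query Anthropic's Claude 2 messages endpoint and return the text reply."""
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-2.1",
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    records = [
        {"model": "gpt-4", "question": QUESTION,
         "answer": ask_gpt4(QUESTION), "bias_codes": []},
        {"model": "claude-2.1", "question": QUESTION,
         "answer": ask_claude(QUESTION), "bias_codes": []},
    ]
    # Write responses to a JSON file so researchers can code them for bias by hand.
    with open("responses.json", "w") as f:
        json.dump(records, f, indent=2)
```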

Why it matters

The researchers noted that the chatbots' approach limits how users understand environmental problems and solutions, restricting conversations to familiar, incremental frameworks rather than exploring transformative ideas like degrowth or decolonization.

Chatbots are becoming trusted tools for summarizing news and information in classrooms, workplaces and personal settings, with growing potential to shape public understanding and inform decision-making, said Dr. van der Ven. "If they describe environmental challenges as tasks to be dealt with exclusively by governments in the most incremental way possible, they risk narrowing the conversation on the urgent environmental changes we need."

He noted that the climate crisis demands new ways of thinking and acting. "If AI tools simply repeat old patterns, they could limit the discussion at a time when we need to broaden it."

The researchers hope the findings will encourage AI developers to prioritize transparency in their models. "A ChatGPT user should be able to identify a biased source of data the same way a newspaper reader or academic would," said Dr. van der Ven.

For their next step, the researchers plan to expand their analysis to examine how AI companies are working to weaken environmental regulations globally. They also plan to advocate for regulatory frameworks that comprehensively address the environmental impacts of AI and other digital technologies.
