Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.
Authors
- T.J. Thomson
Senior Lecturer in Visual Communication & Digital Media, RMIT University
- Michelle Riedlinger
Associate Professor in Digital Media, Queensland University of Technology
- Phoebe Matich
Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology
- Ryan J. Thomas
Associate Professor, Washington State University
A new report published today finds that news audiences and journalists alike are concerned about how news organisations are - and could be - using generative AI tools such as chatbots and image, audio and video generators.
The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).
Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.
This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.
Who or what makes your news - and how - matters for a host of reasons.
Some outlets tend to use more or fewer sources, for example. Or they use certain kinds of sources - such as politicians or experts - more than others.
Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet's staff themselves aren't representative of their audience.
Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.
Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.
Overall, the news audiences we spoke to felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. Such tasks include using AI to transcribe an interview or to suggest ideas on how to cover a topic.
But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.
The problem - and opportunity
Generative AI can be used in just about every part of journalism.
For example, a photographer could cover an event. Then, a generative AI tool could select what it "thinks" are the best images, edit the images to optimise them, and add keywords to each.
These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to misidentifications in the photo captions? What if the criteria humans think make "good" images are different to what a computer might "think"? These criteria may also change over time or in different contexts.
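To make that workflow concrete, here is a minimal sketch of what an automated keywording step could look like. It is an illustration only: the model choice, file name and confidence threshold are our assumptions, not any newsroom's actual pipeline. The point it demonstrates is that a confidence threshold and a human-review step are needed to catch exactly the kinds of misidentifications described above.

```python
# Illustrative sketch only: the model, file name and threshold below are
# assumptions for demonstration, not a real newsroom's workflow.
from transformers import pipeline

# An off-the-shelf, open-source image classifier (hypothetical choice).
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

def keyword_photo(image_path, min_confidence=0.5):
    """Suggest keywords for one photo, separating confident labels
    from low-confidence ones that a human should check first."""
    keywords, needs_review = [], []
    for result in classifier(image_path):
        if result["score"] >= min_confidence:
            keywords.append(result["label"])
        else:
            needs_review.append((result["label"], round(result["score"], 2)))
    return keywords, needs_review

# Hypothetical usage: auto-apply only high-confidence labels; everything
# else goes to an editor before it can end up in a caption.
confident, uncertain = keyword_photo("event_photo_001.jpg")
print("Auto-applied keywords:", confident)
print("Held for human review:", uncertain)
```

Even a safeguard like this only shifts the problem: the threshold itself encodes a judgment about what counts as a "correct" label, and as noted above, those judgments can change over time and across contexts.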
Even something as simple as lightening or darkening an image can cause a furore when politics are involved.
AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.
Generative AI is also frequently used to write headlines or summarise articles. These sound like helpful applications for time-poor readers, but some news outlets are using AI to rip off others' content.
AI-generated news alerts have also got the facts wrong. For example, Apple recently suspended its automatically generated news notification feature after it falsely claimed US murder suspect Luigi Mangione had killed himself, attributing the claim to the BBC.
What do people think about journalists using AI?
Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.
For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the "portrait" mode on smartphones.
Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who'd previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.
The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.
For example, when an AI-generated image purporting to show Princes William and Harry embracing at King Charles's coronation circulated widely, news outlets reported on the false image.
Our news audience participants also saw notices that AI had been used to write, edit or translate news articles, sometimes accompanied by AI-generated images. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.
Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.
Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.
On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to "enliven" an otherwise static image in the hopes of attracting viewer interest and engagement.

Your role as an audience member
If you're unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can't find one, consider asking the outlet to develop and publish a policy.
Consider supporting media outlets that use AI to complement and support - rather than replace - human labour.
Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.
T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.
Michelle Riedlinger receives funding from the Social Sciences and Humanities Research Council of Canada's Global Journalism Innovation Lab. She is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.
Phoebe Matich receives funding from the Australian Research Council. She is a post-doctoral research fellow within the ARC Centre of Excellence for Automated Decision Making and Society.
Ryan J. Thomas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.