Stricter AI Rules Urged to Safeguard Journalism

News organisations should urgently adopt formal newsroom policies on how, when and why generative artificial intelligence is appropriate for journalistic use, says a QUT researcher and co-author of a paper examining the impact of generative AI on visual journalism.

Rather than seeing multimodal AI as a new threat, the researchers argue its arrival represents an opportunity to hash out expectations for news imagery and journalistic labour and to increase organisational accountability.

Phoebe Matich, a Postdoctoral Research Fellow with the QUT School of Communication and the ADM+S Centre, says the paper also recommends news organisations establish clear policies and commitments around the replacement of human workers with AI. Visual journalists, photographers and other 'lens-based' workers, meanwhile, should continue to organise to protect their livelihoods and the integrity of their craft from management-imposed uses of AI.

Old Threats, New Name? Generative AI and Visual Journalism has been published in Journalism Practice and addresses longstanding concerns about the impact of technological change on both the production and consumption ends of journalism, as well as whether the threats of misinformation, diminished objectivity and job losses are unique to the move to AI.

"A major concern about AI-generated visuals is that they may be used to spread mis- or disinformation, because humans are inclined to consider photorealistic imagery as real," said Ms Matich, who co-authored the paper with Dr TJ Thomson from RMIT University and Associate Professor Ryan J. Thomas from the Edward R. Murrow College of Communication at Washington State University.

"However, we found the misinformation potential of AI-generated visuals is not unique to this technology but a product of how our brains' visual short-cutting can distract from critical interpretation of imagery, and, importantly, 'visual news' authority also depends on an uncritical reception of it by consumers.

"Emergent forms of visual news highlight unresolved questions for journalists and their affiliate organisations about the prioritisation of 'real stories' of real people, either as subjects or storytellers.

"So really, this technology renews older questions about the role of organisational context and human labour in visual news."


The researchers argue that, considering the consistent downsizing in visual news over the last two decades, discussions about AI replacing journalists inadvertently imply that only some areas ought to remain the province of human labour.

"AI-related concerns must be looked at in the context of the political economy of news ownership, production, and deregulation," Ms Matich said.

"Ultimately, we found existing fears about generative AI are misdirected because they mask economic imperatives that come at human (and democratic) costs, and are disproportionately experienced by individual journalists and smaller organisations with limited bargaining power over technological change.

"Cameras may have revolutionised journalism's capacity to depict the world, but news visuals have been shaped equally by technological and organisational evolution, along with journalists' ideas of what journalism can or should be and how this can be accomplished.

"Tensions underpinning news realism and interpretation highlight how the use of AI technology depends on visual news organisations. Explicit policy formation or refinement would go some way toward insulating visual journalists and visual news from further disruption at the (poorly rendered) hands of AI's organisational advocates."

"AI policies should also evolve alongside both society and technology, and be co-designed with news audiences, to be informed by their expectations and perceptions. They could cover areas including accountability for accuracy and quality; algorithmic bias; audience trust and literacies; journalism's public service mission; respect for authorship and creativity; and the human impacts of datafication and AI reliance."

The researchers also called on legislators and regulators to support the production and circulation of high-quality, trustworthy, and accurate news and information by better regulating the spread of misinformation.

"This should include advertising, sponsored, and journalistic contexts as well as algorithmic content recommendations and summaries," Ms Matich said.

Read the full paper online - Old Threats, New Name? Generative AI and Visual Journalism

Main image: Phoebe Matich. Photo: Anthony Weate
