AI Boosts Research Understanding, Trust in Science

Michigan State University

EAST LANSING, Mich. – Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you're like most Americans, new scientific information can prove challenging to understand — especially if you try to tackle a science article in a research journal.

In an era when scientific literacy is crucial for informed decision-making, the ability to communicate and comprehend complex content is more important than ever. Trust in science has been declining for years, and one contributing factor may be the challenge of understanding scientific jargon.

New research from David Markowitz, associate professor of communication at Michigan State University, points to a potential solution: using artificial intelligence, or AI, to simplify science communication. His work demonstrates that AI-generated summaries may help restore trust in scientists and, in turn, encourage greater public engagement with scientific issues — just by making scientific content more approachable. The question of trust is particularly important, as people often rely on science to inform decisions in their daily lives, from choosing what foods to eat to making critical health care choices.

Responses are excerpts from an article originally published in The Conversation.

How did simpler, AI-generated summaries affect the general public's comprehension of scientific studies?

Artificial intelligence can generate summaries of scientific papers that make complex information more understandable for the public compared with human-written summaries, according to Markowitz's recent study, which was published in PNAS Nexus. AI-generated summaries not only improved public comprehension of science but also enhanced how people perceived scientists.

Markowitz used a popular large language model, GPT-4 by OpenAI, to create simple summaries of scientific papers; this kind of text is often called a significance statement. The AI-generated summaries used simpler language — they were easier to read according to a readability index and used more common words, like "job" instead of "occupation" — than summaries written by the researchers who had done the work.
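The study does not spell out its exact prompts or tooling, but as a rough illustration of the general approach, the sketch below uses the openai Python client to request a plain-language significance statement from GPT-4 and the textstat package to compare Flesch Reading Ease scores, one common readability index. The prompt wording and package choices here are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch only -- not the study's actual method. Assumes the
# `openai` and `textstat` packages and an OPENAI_API_KEY environment variable.
from openai import OpenAI
import textstat

client = OpenAI()

def simplify(abstract: str) -> str:
    """Ask GPT-4 for a short, jargon-free significance statement."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Rewrite scientific abstracts as short significance "
                         "statements in plain language a general reader can "
                         "understand. Prefer common words, e.g. 'job' over "
                         "'occupation'.")},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

original = "..."  # a paper's author-written significance statement
simple = simplify(original)

# Higher Flesch Reading Ease means easier text (60-70 is roughly plain English).
print("original:", textstat.flesch_reading_ease(original))
print("AI summary:", textstat.flesch_reading_ease(simple))
```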

In one experiment, he found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.

How did simpler, AI-generated summaries affect the general public's perception of scientists?

In another experiment, participants rated the scientists whose work was described in simple terms as more credible and trustworthy than the scientists whose work was described in more complex terms.

In both experiments, participants did not know who wrote each summary. The simpler texts were always AI-generated, and the complex texts were always human-generated. When Markowitz asked participants who they believed wrote each summary, they ironically thought the more complex ones were written by AI and the simpler ones by humans.

What do we still need to learn about AI and science communication?

As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or sanctioned by journals. Indeed, the academic publishing field is still establishing norms regarding the use of AI. By simplifying scientific writing, AI could contribute to more engagement with complex issues.

While the benefits of AI-generated science communication are perhaps clear, ethical considerations must also be weighed. Relying on AI to simplify scientific content risks stripping away nuance, potentially leading to misunderstandings or oversimplification. There is always the chance of errors, too, if no one pays close attention. Additionally, transparency is critical. Readers should be informed when AI is used to generate summaries to avoid potential biases.

Simple science descriptions are both preferable to and more beneficial than complex ones, and AI tools can help produce them. But scientists could also achieve the same goals by working harder to minimize jargon and communicate clearly — no AI necessary.
