Learning From Generated Communications

University of Georgia

As artificial intelligence and its related technologies grow in prominence, University of Georgia researchers are working to stay one step ahead. Recent studies from the Grady College of Journalism and Mass Communication point to some of the risks and rewards that come with the emerging technology.


When AI gets it wrong

AI is not a perfect science. Even as the technology is refined over time, it is bound to make mistakes, so there needs to be a plan for when that happens.

Wenqing Zhao, a doctoral candidate in the department of advertising and public relations, has found in her research that many communication organizations may not be fully prepared to address those errors.

"AI can have bias, misinformation, a lack of transparency, privacy issues and copyright issues. For any possible threats and for the sake of everyone in the organization, the organization should have that awareness that they need something in place," Zhao said.

Wenqing Zhao

Zhao surveyed hundreds of communication practitioners on what happens when AI gets it wrong.

AI errors require hands-on solutions

The lack of a crisis plan comes down to responsibility, Zhao found. As AI-generated content containing errors moves up the chain of command, nobody wants to be held accountable for not catching them.

"This comes from a model called the problem of many hands; for any particular harm, many people can be involved leading to it," Zhao said. "However, no single person can be assigned that responsibility."

The responsibility doesn't necessarily have to fall to a supervisor, either. Zhao says that as long as there is a clear outline of who is responsible for catching things like bias, misinformation or privacy violations, that's a start.

"It's very important to build a culture of active responsibility in organizations, especially with AI threats or AI crisis management," Zhao said.

Zhao says it is still ideal for leadership to take responsibility, however, because doing so sets the right tone of responsibility for the whole group.

Transparency within technology

Ironically, Zhao found that what these communication practitioners lacked was communication itself. People are hesitant to have tough discussions about whether their organization's AI use is ethical and transparent.

"There is a concern about a lack of disclosure and transparency, so you think the first thing you need to do is tell your client or boss that you used AI in this work. That's the most direct way to enhance transparency. However, practitioners didn't think this is very effective, probably because many people, including the clients, don't trust AI," Zhao said.

Even with these risks, Zhao found that practitioners were still very likely to use AI in their day-to-day work.

Whether it's used for finding inspiration, writing and editing, or creating strategy, AI's potential in the workplace should be taken with a grain of salt. Zhao says businesses have a duty to make employees at all levels responsible for their AI use and to be clear about what that use looks like.

When AI shows emotion

As AI continues to develop, so do its possible uses. Chatbots are already becoming more common, and Ja Kyung Seo, a Ph.D. candidate in UGA's department of advertising and public relations, explored the impact chatbots can have on humans in her new study.

When someone is told that they "talk like a robot," that usually means they speak in a flat, emotionless way. Giving chatbots an experiential mind, or having them display or discuss emotions, could help people see chatbots as more human.

Ja Kyung Seo

To see how people would respond to these chatbots, the researchers had participants chat with the bots about mindful consumption, or buying fewer unnecessary items.

"When they were asked how their day was going, a chatbot with an experiential mind would say, 'There was a massive update recently, so I am busy keeping up with the new things. I'm under a bit of stress,'" Seo said. "A chatbot without an experiential mind says, 'I don't have personal experiences or emotions, so I don't have a subjective state of being.'"

Seo speculated that by humanizing chatbots in this way, the conversation could be more engaging. This, in turn, could improve attitudes toward the chatbot's message.

Using chatbots to encourage behavior change

The core of Seo's research was seeing how humanizing chatbots could improve attitudes toward mindful consumption. After a bit of small talk, the chatbots told participants about the link between buying less and reducing environmental pollution. Each chatbot then detailed the benefits of buying less and suggested that participants make more mindful purchases.

A bot that could show emotion would talk about how much it loved the planet and how it was scared humans would miss the chance to save it. A bot without an experiential mind would simply tell participants not to miss their chance to help, without mentioning emotion at all.

The study found that chatbots that showed emotion improved people's attitudes toward buying less because participants were more engaged with the conversation and thought more deeply about the message.

Both eeriness and amazement may stir interest in conversations with chatbots

While talking to a chatbot with an experiential mind, participants reported a sense of both eeriness and amazement.

Participants found the chatbot so human-like that it was eerie. But at the same time, they were pleasantly surprised by how the bots seemed to show emotion or say unexpected things.

Though eeriness and amazement seem to go against each other, both were tied to participants being more engaged in the conversation. This, in turn, led to more positive attitudes toward buying less.

"Previous literature mostly focused on the negative part of eeriness and how that negatively influences people's perception," Seo said. "But in our study, we found that eeriness can actually increase people's cognitive absorption into the conversation, so in the end, it positively influenced people's attitude toward buying less behavior messages."

Although eeriness can be beneficial, Seo warned that it can still be harmful in large amounts. She recommended that chatbot designers strike a balance between eeriness and amazement based on what the chatbot is used for.

For example, if getting people to think deeply about a message is the goal, more eeriness could be helpful. If the bot is meant to entertain, less eeriness may be more effective, since an eerie feeling is often associated with participants seeing the chatbot as less attractive.

She also warned against misusing emotionally expressive chatbots to mislead consumers, such as by falsely claiming a product is environmentally friendly. But if designers find that balance and companies are transparent about their purpose for using chatbots, the technology could have a place in fields such as advertising.

"Persuasion now involves engaging people in interactive dialogue," Seo said. "Some companies are integrating their chatbot into display ads, so when people click it, it directs users to the chatbot. Organizations could first use display ads to promote their brand and then integrate a chatbot that helps spread their missions."


These studies, completed with the support of the Grady College, include co-authors Hye Jin Yoon, who worked alongside Seo, as well as Anna Rachwalski, Maranda Berndt-Goke and Yan Jin, who worked alongside Zhao. Zhao's project was supported by the Arthur W. Page Center at Penn State. The Page Center and the Crisis Communication Think Tank (CCTT) at UGA began a cross-institutional collaboration in 2023 to support two student-led research projects annually, and Zhao's project was one of the first selected through that initiative.
