LAWRENCE — Many people have experienced frustration when dealing with artificial intelligence chatbots for customer support or technical assistance. New research from the University of Kansas has found that when dealing with embarrassing issues, people prefer the anonymity and nonjudgmental nature of AI chatbots. When angry, however, they prefer dealing with a fellow human.
The COVID-19 pandemic both angered and embarrassed people around the world as they dealt with new and frequently changing information and misinformation on vaccines, social distancing and related topics. KU researchers conducted a lab-based experimental study in which they gauged people's attitudes about vaccines, showed them content designed to arouse anger or embarrassment and randomly assigned them to AI or human assistance to further gauge their knowledge of and attitudes toward vaccines.
Vaibhav Diwanji, assistant professor of journalism & mass communications at KU and lead author of the study, researches new and emerging technologies' influence on consumers.
"I am interested in how AI information versus human-provided information influences people's decisions. We thought the COVID-19 pandemic was an ideal way to look at this question," Diwanji said. "Unfortunately, it was very politicized, and there was a lot of misinformation and disinformation available. People also tended to have a lot of anger and embarrassment about the topic."
The researchers examined how anger and embarrassment in particular influenced participants' vaccine intentions. In the case of anger, people often felt angry because of political polarization, social pressures such as vaccine mandates, disruptions to daily life, confusion about vaccines' safety and efficacy, and because many tied their personal identity to the decision of whether to get vaccinated.
Researchers traced embarrassment to participants' lack of understanding, misinformation they may have believed, discomfort in social situations where others were vaccinated, the awkwardness of waiting in lines or having to prove vaccination status, and social pressure to get vaccinated.
For the study, researchers recruited a sample of 100 participants, who were asked about their attitudes regarding COVID-19 vaccines and boosters. Participants were then informed they would see video clips from popular culture that could make them uncomfortable. Eye-tracking software measured where participants focused and recorded their facial expressions as they watched material designed to elicit anger or embarrassment, such as movie clips featuring domestic violence or erotic scenes, or neutral content such as nature videos. Participants were then randomly assigned to either an AI chatbot Diwanji designed or a lab researcher to discuss COVID-19 vaccine information consistent with guidance from the Centers for Disease Control and Prevention and Johns Hopkins University.
"Eye-tracking technology is a good way of tracking people's emotions without explicitly asking them how they are feeling, which may be difficult for people sometimes to express verbally what they're feeling," Diwanji said. "We were able to track retinal movements and facial expressions to gauge what they were feeling."
Finally, participants were asked whether they preferred AI chatbots or humans for information on vaccines. Those who were feeling embarrassed said they preferred chatbots, while those who were angry preferred humans. The findings show that both the new technology and traditional human interaction have advantages, and that understanding them can help health professionals and marketers reach people effectively based on their emotional state regarding health topics.
"Chatbots can be perceived to be nonjudgmental, and people said they preferred that when they were feeling embarrassed," Diwanji said. "AI is getting more and more sophisticated and so pervasive that it can't be ignored. Is it replacing human agents? Not really, but it is important for health professionals and marketers to make sure they are using it ethically. And we live in an age of personalization, so building technology that more deeply connects with people is vital. AI is not just a new trend, and marketers should not just dive in because they can but use it in a way that people like."
The study was conducted in KU's Center for Excellence in Health Communications to Underserved Populations and published in the International Journal of Human-Computer Interaction. It was written with Mugur Geana, professor in the William Allen White School of Journalism & Mass Communications and director of the center, and Jun Pei, Nhung Nguyen, Nazra Izhar and Rim Chaif, all graduate students in the school.
The research was led by Diwanji's research group, Immersion-Computation-Expression, a team of faculty and students that studies how humans interact with new and emerging media technologies. Diwanji and colleagues have previously published research on how people interact with AI ads online and whether they can tell the ads are AI-generated, and future studies will examine how human versus AI-generated social media content influences consumer decisions.
"In an increasingly digital and emotionally aware world, researchers and marketers will need to blend technology with emotional insight. The ability to understand, respond to and leverage emotions like anger and embarrassment will not just improve consumer satisfaction but also create humanized experiences that build long-term relationships with consumers," Diwanji said. "With tools like AI, eye-tracking and emotion recognition algorithms, both researchers and marketers will have the ability to optimize every touchpoint of the customer journey — leading to more effective, empathetic and personalized consumer interactions. This will ultimately shape the future of marketing, where emotionally intelligent AI engagement becomes just as important as product or service quality."