Research: ChatGPT Seen as More Empathetic Than Humans

By definition, robots can't feel empathy, since empathy requires relating to another person's experience - putting yourself in their shoes.

Yet, according to a new University of Toronto study, artificial intelligence (AI) can create empathetic responses more reliably and consistently than humans.

That includes professional crisis responders who are trained to empathize with those in need.

"AI doesn't get tired," says Dariya Ovsyannikova, lab manager in Professor Michael Inzlicht's lab at U of T Scarborough and lead author of the study.

"It can offer consistent, high-quality empathetic responses without the emotional strain that humans experience."

The research, published in the journal Communications Psychology, looked at how people evaluated empathetic responses generated by ChatGPT compared to human responses.

Across four separate experiments, participants were asked to judge the level of compassion (an important facet of empathy) in written responses, composed by AI, by ordinary people and by expert crisis responders, to a series of positive and negative scenarios. In each scenario, the AI responses were preferred and rated as more compassionate and responsive, conveying greater care, validation and understanding than the human responses.

How does a chatbot like ChatGPT outperform trained professionals? Ovsyannikova points to AI's ability to pick up on fine details and stay objective, making it particularly adept at crafting attentive communication that appears empathetic.

Empathy is an important trait not only in fostering social unity, but in helping people feel validated, understood and connected to others who empathize with them, the researchers say. In clinical settings, it plays a critical role in helping people regulate emotions and feel less isolated.

But constantly expressing empathy has its costs.

"Caregivers can experience compassion fatigue," says Ovsyannikova, a U of T Scarborough alumna who has professional experience volunteering as a crisis line responder.

She adds that professional caregivers, particularly in mental health settings, may need to temper their empathy to avoid burnout and to balance their emotional engagement across clients.

Humans also come with their own biases and can be emotionally affected by a particularly distressing or complex case, which impairs their ability to empathize. The researchers add that empathy in health-care settings is increasingly in short supply, given shortages of accessible services and qualified workers alongside a widespread rise in mental health disorders.

Of course, that doesn't mean we should cede empathy-derived care to AI overnight, says Inzlicht, a faculty member in U of T Scarborough's department of psychology who was a co-author of the study along with PhD student Victoria Oldemburgo de Mello.

"AI can be a valuable tool to supplement human empathy, but it does come with its own dangers," Inzlicht says.

He adds that while AI may be effective at delivering surface-level compassion that people find immediately useful, chatbots such as ChatGPT cannot provide the deeper, more meaningful care that gets to the root of a mental health disorder.

He notes that over-reliance on AI also poses ethical concerns - namely the power it could give tech companies to manipulate those in need of care. For example, someone who is feeling lonely or isolated may become reliant on talking to an AI chatbot that is constantly doling out empathy instead of fostering meaningful connections with another human being.

"If AI becomes the preferred source of empathy, people might retreat from human interactions, exacerbating the very problems we're trying to solve, like loneliness and social isolation," says Inzlicht, whose research looks at the nature of empathy and compassion.

Another issue is a phenomenon known as "AI aversion" - a prevailing skepticism about AI's ability to truly understand human emotion. While participants in the study initially ranked AI-generated responses highly when they didn't know who - or what - had written them, that preference shifted once they were told a response came from AI. However, Inzlicht says this bias may fade with time and exposure, noting that younger people who grew up interacting with AI are likely to trust it more.

Despite the critical need for empathy, Inzlicht urges a transparent and balanced approach to deploying AI, one that supplements human empathy rather than replacing it.

"AI can fill gaps, but it should never replace the human touch entirely."
