Chatbots are expected to drive 95 per cent of online customer service interactions by 2025, but QUT research has found that failure to meet customer expectations is frustrating and angering users and reducing their likelihood of making a purchase.
Associate Professor Paula Dootson, from the QUT Business School, a Senior Research Fellow at the QUT Centre for the Digital Economy and a member of QUT's Centre for Future Enterprise, is co-author of Chatbots and service failure: When does it lead to customer aggression, just published in the Journal of Retailing and Consumer Services.
The research found customers are less likely to become aggressive after a chatbot service failure if they are made aware early in the interaction that human help is available when needed, and it suggests companies using chatbots rewrite their scripts accordingly.
"The current chatbot market is valued at USD17.7 billion and is predicted to reach USD102.29 billion by 2026. Most people will have experienced an interaction with one as they are now used widely across almost every industry," said Professor Dootson.
"This capability for comprehending natural language and engaging in conversations allows chatbots to not only deliver customer services but also improve customer experiences through lowering customers' efforts and allowing these customers to use time more efficiently elsewhere.
"Lego used a chatbot named Ralph to assist customers to navigate through its product portfolio and select a perfect gift to purchase.
"However, despite the economic benefits for companies using chatbots in service encounters, they often fail to meet customers' expectations, can undermine the customer service experience and lead to service failures.
"In Japan, a hotel virtual assistant robot was fired in 2019 for repeated malfunctions such as mistaking snoring for voice commands and waking guests, a critical service delivery failure.
"Beyond issues of voice recognition, the scripts chatbots rely on to respond to customers can become problematic when the chatbot does not correctly interpret a request which makes it challenging for the chatbot assistant to respond in a meaningful way to the customer.
"Users can then feel frustrated and angry, become reluctant to use chatbots in the future, are less likely to make the purchase, or even switch to using another service provider entirely."
Her study, conducted with Assistant Professor Yu-Shan (Sandy) Huang from Texas A&M University-Corpus Christi, considers how artificial intelligence technology is changing the way services are delivered and introducing new sources of service failure.
"We found that in a chatbot service failure context, telling a customer late in the service interaction that a human employee is available to help can lead to a greater chance of customer aggression," said Professor Dootson.
"There is still an historical expectation that real people will be available to assist customers if they encounter a technology-related service failure, but it remains unclear how the presence of human employees can influence customers' responses to a service failure caused by chatbots.
"Our results indicate that disclosing the option to engage with a human employee late in the chatbot interaction, after the service failure, increased the likelihood of emotion-focused coping, which can lead to customer aggression.
"Unexpectedly though, we found that when customers perceive a high level of participation, the positive relationship became negative in that customers were more likely to react with emotion and aggression when the chatbot service failed, if they were offered to interact with a human employee early (compared to late) in the service interaction.
"This could be because customers with a higher level of participation often value relationship building during the service co-creation process, they may be more likely to desire interacting with a human employee. So, the early disclosure of the option to interact with a human employee may signal that a service provider has the human resources to support customers but does not value the customers enough to begin the interaction that way."
Professor Dootson said the findings offered several practical implications for managing chatbot service encounters.
"This study provides firms with evidence that customers respond to the late disclosure of human intervention with customer aggression when encountering chatbot failures," she said.
"In turn, service providers should design chatbot scripts that disclose the option of interacting with a human employee early in the customer-chatbot interaction, thereby making customers aware of the possible human intervention prior to the occurrence of chatbot service failures."
Read the full paper online at the Journal of Retailing and Consumer Services.