When responses written by psychotherapists are compared with those written by ChatGPT, the latter are generally rated higher, according to a study published February 12, 2025, in the open-access journal PLOS Mental Health by H. Dorian Hatch of The Ohio State University, co-founder of Hatch Data and Mental Health, and colleagues.
Whether machines could be therapists is a question that has received increased attention given some of the benefits of working with generative artificial intelligence (AI). Although previous research has found that humans can struggle to tell the difference between responses from machines and humans, recent findings suggest that AI can write empathically, and that the generated content is rated so highly by both mental health professionals and voluntary service users that it is often favored over content written by professionals.
In their new study of over 800 participants, Hatch and colleagues showed that, although participants noticed differences in language patterns, they could rarely identify whether responses to 18 couples therapy vignettes were written by ChatGPT or by therapists. This finding echoes Alan Turing's prediction that humans would be unable to tell the difference between responses written by a machine and those written by a human. In addition, the responses written by ChatGPT were generally rated higher on core psychotherapy guiding principles.
Further analysis revealed that the responses generated by ChatGPT were generally longer than those written by the therapists. Even after controlling for length, ChatGPT responses contained more nouns and adjectives than the therapists' responses. Because nouns describe people, places, and things, and adjectives supply additional context, this could mean that ChatGPT contextualizes more extensively than the therapists do. More extensive contextualization, in turn, may have led respondents to rate the ChatGPT responses higher on the common factors of therapy (components shared by all therapy modalities that help achieve desired results).
According to the authors, these results may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. In particular, this work may lead to the development of new methods of testing and creating psychotherapeutic interventions. Given the mounting evidence that generative AI can be useful in therapeutic settings, and the likelihood that it will be integrated into those settings sooner rather than later, the authors call on mental health experts to expand their technical literacy so that AI models are carefully trained and supervised by responsible professionals, thereby improving both the quality of and access to care.
The authors add: "Since the invention of ELIZA nearly sixty years ago, researchers have debated whether AI could play the role of a therapist. Although there are still many important lingering questions, our findings indicate the answer may be 'Yes.' We hope our work galvanizes both the public and mental practitioners to ask important questions about the ethics, feasibility, and utility of integrating AI and mental health treatment, before the AI train leaves the station."