Reston, Virginia – Physicians who follow artificial intelligence (AI) advice may be considered less liable for medical malpractice than is commonly thought, according to a new study of potential jury candidates in the U.S., published in the January issue of The Journal of Nuclear Medicine (JNM). The study provides the first data related to physicians' potential liability for using AI in personalized medicine, which can often deviate from standard care.
"New AI tools can assist physicians in treatment recommendations and diagnostics, including the interpretation of medical images," remarked Kevin Tobia, JD, PhD, assistant professor of law at the Georgetown University Law Center, in Washington D.C. "But if physicians rely on AI tools and things go wrong, how likely is a juror to find them legally liable? Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision making of lay juries. Our study is the first to focus on that last aspect, studying potential jurors' attitudes about physicians who use AI."
To determine potential jurors' judgments of liability, researchers conducted an online study of a representative sample of 2,000 adults in the U.S. Each participant read one of four scenarios in which an AI system provided a drug dosage treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard drug dosage) and the physician's decision (to accept or reject the AI recommendation). In all scenarios, the physician's decision subsequently caused harm to the patient.
Study participants then evaluated the physician's decision by assessing whether the treatment decision was one that could have been made by "most physicians" and "a reasonable physician" in similar circumstances. Higher scores indicated greater agreement that the decision was reasonable and, therefore, a lower assessment of liability.
Results from the study showed that participants used two different factors to evaluate physicians' utilization of medical AI systems: (1) whether the treatment provided was standard and (2) whether the physician followed the AI recommendation. Participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. However, physicians who received a nonstandard AI recommendation were not judged as any safer from liability for rejecting it.
While prior literature suggests that laypersons are strongly averse to AI, this study found that they are, in fact, not strongly opposed to a physician's acceptance of AI medical recommendations. This finding suggests that the threat of legal liability for physicians who accept AI recommendations may be smaller than is commonly thought.
In an invited perspective on the JNM article, W. Nicholson Price II and colleagues noted, "Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, the hospitals that implement AI tools for physician use and the developers who create those tools in the first place. Tobia et al.'s study should serve as a useful beachhead for further work to inform the potential for integrating AI into medical practice."
In an associated JNM article, the study authors were interviewed by Irène Buvat, PhD, and Ken Herrmann, MD, MBA, both leaders in the nuclear medicine and molecular imaging field. In the interview, the authors discussed whether the results of their study might hold true in other countries, whether AI could be considered a type of "medical expert," and the advantages of using AI from a legal perspective, among other topics.
The authors of "When Does Physician Use of AI Increase Liability?" include Kevin Tobia, Georgetown University Law Center, Washington, DC and Eidgenössische Technische Hochschule Zürich Center for Law and Economics, Zürich, Switzerland; and Aileen Nielsen and Alexander Stremitzer, Eidgenössische Technische Hochschule Zürich Center for Law and Economics, Zürich, Switzerland. The information in this press release, research study, and interview is general in nature and should not be construed as legal or professional advice.
The authors of the invited perspective, "How Much Can Potential Jurors Tell Us About Liability for Medical Artificial Intelligence?" include W. Nicholson Price II, University of Michigan Law School, Ann Arbor, MI, and Sara Gerke and I. Glenn Cohen, Harvard Law School, Harvard University, Cambridge, MA.
The interviewers in "Discussion with Leaders: Buvat and Herrmann Talk with Stremitzer, Tobia and Nielsen" include Irène Buvat, PhD, Centre National de la Recherche Scientifique (CNRS), Inserm Laboratory of Translational Imaging in Oncology, Institut Curie, Orsay, France, and Ken Herrmann, MD, MBA, Department of Nuclear Medicine, Universitätsklinikum Essen, Essen, Germany.