AI Shows Promise in Assisting Clinicians with Physical Exams

Mass General Brigham

Physical examinations are important diagnostic tools that can reveal critical insights into a patient's health, but complex conditions may be overlooked if a clinician lacks specialized training in that area. While previous research has investigated large language models (LLMs) as aids to diagnosis, their use in physical exams remains largely unexplored. To address this gap, researchers from Mass General Brigham prompted the LLM GPT-4 to recommend physical exam instructions based on patient symptoms. The study suggests that LLMs could serve as aids for clinicians during physical exams. Results are published in the Journal of Medical Artificial Intelligence.

"Medical professionals early in their career may face challenges in performing the appropriate patient-tailored physical exam because of their limited experience or other context-dependent factors, such as lower resourced settings," said senior author Marc D. Succi, MD, strategic innovation leader at Mass General Brigham Innovation, associate chair of innovation and commercialization for enterprise radiology and executive director of the Medically Engineered Solutions in Healthcare (MESH) Incubator at Mass General Brigham. "LLMs have the potential to serve as a bridge and parallel support physicians and other medical professionals with physical exam techniques and enhance their diagnostic abilities at the point of care."

Succi and his colleagues prompted GPT-4 to recommend physical exam instructions based on a patient's primary symptom, such as a painful hip. GPT-4's responses were then evaluated by three attending physicians on a scale of 1 to 5 points for accuracy, comprehensiveness, readability and overall quality. They found that GPT-4 performed well at providing instructions, scoring at least 80% of the possible points. The highest score was for "Leg Pain Upon Exertion" and the lowest was for "Lower Abdominal Pain."
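The paper's exact prompts are not reproduced here; purely as a rough illustration of the kind of query described above, the following hypothetical Python sketch asks GPT-4 for focused exam steps for a single chief complaint. The model name, prompt wording, and client usage are assumptions for illustration, not the study's actual protocol.

```python
# Hypothetical sketch: asking GPT-4 for focused physical exam steps.
# Prompt wording and model name are illustrative assumptions, not the
# authors' published protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def exam_instructions(chief_complaint: str) -> str:
    """Request step-by-step focused physical exam maneuvers for one symptom."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are assisting a clinician at the point of care."},
            {"role": "user",
             "content": f"A patient presents with {chief_complaint}. "
                        "List the focused physical exam maneuvers to perform, "
                        "with brief instructions for each."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(exam_instructions("leg pain upon exertion"))
```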

"GPT-4 performed well in many respects, yet its occasional vagueness or omissions in critical areas, like diagnostic specificity, remind us of the necessity of physician judgment to ensure comprehensive patient care," said lead author Arya Rao, a student researcher in the MESH Incubator attending Harvard Medical School.

Although GPT-4 provided detailed responses, the researchers found that it occasionally left out key instructions or was overly vague, indicating the need for a human evaluator. According to researchers, the LLM's strong performance suggests its potential as a tool to help fill gaps in physicians' knowledge and aid in diagnosing medical conditions in the future.

Authorship: In addition to Succi, Mass General Brigham authors include Arya S. Rao, Christian Rivera, Husayn F. Ramji, Sarah Wagner, Andrew Mu, John Kim, William Marks, Benjamin White, David C. Whitehead, and Michael J. Senter-Zapata.

Funding: The project was supported in part by the National Institute of General Medical Sciences (T32GM144273). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of General Medical Sciences or the National Institutes of Health.

Paper cited: Rao, Arya S., et al. "A Large Language Model-Guided Approach to the Focused Physical Exam." Journal of Medical Artificial Intelligence. DOI: 10.21037/jmai-24-275
