Stanford Medicine researchers have built an artificial intelligence tool that can read thousands of doctors' notes in electronic medical records and detect trends, providing information that physicians and researchers hope will improve care.
Typically, experts seeking answers to questions about care need to pore over hundreds of medical charts. But new research shows that large language models — AI tools that can find patterns in complex written language — may be able to take over this busywork and that their findings could have practical uses. For instance, AI tools could monitor patients' charts for mentions of hazardous interactions between drugs or could help doctors identify patients who will respond well or poorly to specific treatments.
The AI tool, described in a study published online Dec. 19 in Pediatrics, was designed to figure out from medical records if children with attention deficit hyperactivity disorder received appropriate follow-up care after being prescribed new medications.
"This model enables us to identify some gaps in ADHD management," said the study's lead author, Yair Bannett , MD, assistant professor of pediatrics.
The study's senior author is Heidi Feldman , MD, the Ballinger-Swindells Endowed Professor in Developmental and Behavioral Pediatrics.
The research team used the tool's insights to pinpoint tactics that could improve how doctors follow up with ADHD patients and their families, Bannett noted, adding that the power of such AI tools could be applied to many aspects of medical care.
A slog for a human, a breeze for AI
Electronic medical records contain information such as lab results or blood pressure measurements in a format that's easy for computers to compare among many patients. But everything else — about 80% of the information in any medical record — is in the notes that physicians write about the patient's care.
Although these notes are handy for the next human who reads a patient's chart, their freeform sentences are challenging to analyze en masse. This less-organized information must be categorized before it can be used for research, typically by a person who reads the notes looking for specific details. The new study looked at whether researchers could employ artificial intelligence for that task instead.
The study used medical records from 1,201 children who were 6 to 11 years old, were patients at 11 pediatric primary care practices in the same health care network, and had a prescription for at least one ADHD medication. Such medications can have disruptive side effects, such as suppressing a child's appetite, so it is important for doctors to inquire about side effects when patients are first using the drugs and adjust dosages as necessary.
The team trained an existing large language model to read doctors' notes, looking for whether children or their parents were asked about side effects in the first three months of taking a new drug. The model was trained on a set of 501 notes that researchers reviewed. The researchers counted any note that mentioned either the presence or absence of side effects (e.g., either "reduced appetite" or "no weight loss") as indicating that follow-up had happened, while notes with no mention of side effects were counted as meaning follow-up hadn't occurred.
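As a rough illustration of that labeling rule (not the study's actual code), the sketch below frames the task as a yes/no classification of each note; the prompt wording and the ask_llm helper are hypothetical stand-ins for whatever model and interface the team used.

```python
# Hypothetical sketch of the labeling rule described above.
# ask_llm is a placeholder for any large language model interface;
# the study's actual model, prompt, and training setup are not shown here.

PROMPT = (
    "Read the following pediatric clinic note. Answer YES if the note "
    "mentions ADHD medication side effects in any way -- either that "
    "side effects are present (e.g., 'reduced appetite') or explicitly "
    "absent (e.g., 'no weight loss'). Answer NO if side effects are not "
    "mentioned at all.\n\nNote:\n{note}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("Swap in a real model call here.")

def followed_up(note_text: str) -> bool:
    """Apply the study's rule: any mention of side effects, whether
    present or absent, counts as evidence that follow-up happened."""
    answer = ask_llm(PROMPT.format(note=note_text))
    return answer.strip().upper().startswith("YES")
```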