Research: Learning Strategies Boost AI Model Safety and Efficacy in Hospitals

York University

If the data used to train artificial intelligence models for medical applications, such as those deployed in hospitals across the Greater Toronto Area, differs from the real-world data the models later encounter, patients could be harmed. A new study out today from York University found that proactive continual and transfer learning strategies for AI models are key to mitigating data shifts and the harms that follow.

To determine the effect of data shifts, the team built and evaluated an early warning system to predict the risk of in-hospital patient mortality and enhance the triaging of patients at seven large hospitals in the Greater Toronto Area.

The study used GEMINI, Canada's largest hospital data-sharing network, to assess the impact of data shifts and biases across clinical diagnoses, patient demographics such as sex and age, hospital type, admission source, such as an acute care institution or nursing home, and time of admission. It covered 143,049 patient encounters, drawing on lab results, transfusions, imaging reports and administrative features.

"As the use of AI in hospitals increases to predict anything from mortality and length of stay to sepsis and the occurrence of disease diagnoses, there is a greater need to ensure they work as predicted and don't cause harm," says senior author York University Assistant Professor Elham Dolatabadi of York's School of Health Policy and Management, Faculty of Health. "Building reliable and robust machine learning models, however, has proven difficult as data changes over time creating system unreliability."

The data used to train clinical AI models for hospitals and other health-care settings need to accurately reflect the variability of patients, diseases and medical practices, she adds. Without that, a model could produce irrelevant or harmful predictions, and even inaccurate diagnoses. Data shifts can arise from differences in patient subpopulations, staffing and resources, from differing health-care practices between hospitals, and from unforeseen changes in policy or behaviour, such as those brought on by an unexpected pandemic.

"We found significant shifts in data between model training and real-life applications, including changes in demographics, hospital types, admission sources, and critical laboratory assays," says first author Vallijah Subasri, AI scientist at University Health Network. "We also found harmful data shifts when models trained on community hospital patient visits were transferred to academic hospitals, but not the reverse."

To mitigate these potentially harmful data shifts, the researchers used transfer learning strategies, which allow a model to store knowledge gained from one domain and apply it to a different but related domain, and continual learning strategies, in which the AI model is updated on a continual stream of data, in sequence, in response to drift-triggered alarms.
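To illustrate the idea, here is a minimal, hypothetical Python sketch of drift-triggered continual learning. It is not the study's actual pipeline: it uses a per-feature Kolmogorov-Smirnov test on incoming data as a label-agnostic drift alarm and incrementally updates a scikit-learn classifier only when the alarm fires. All data, thresholds and model choices below are illustrative assumptions.

# Hypothetical sketch of drift-triggered continual learning.
# Data, window sizes and thresholds are illustrative, not the study's.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Reference data the model was originally trained on.
X_ref = rng.normal(0.0, 1.0, size=(1000, 5))
y_ref = (X_ref.sum(axis=1) > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.fit(X_ref, y_ref)

def drift_alarm(X_ref, X_new, alpha=0.01):
    """Raise an alarm if any feature's distribution has shifted (KS test)."""
    n_feat = X_ref.shape[1]
    pvals = [ks_2samp(X_ref[:, j], X_new[:, j]).pvalue for j in range(n_feat)]
    return min(pvals) < alpha / n_feat  # Bonferroni-corrected threshold

# Incoming stream of batches; later batches drift (e.g. a pandemic-era
# change in the patient population, simulated here as a mean shift).
for t in range(10):
    shift = 0.0 if t < 5 else 1.5
    X_batch = rng.normal(shift, 1.0, size=(200, 5))
    y_batch = (X_batch.sum(axis=1) > 5 * shift).astype(int)

    if drift_alarm(X_ref, X_batch):
        model.partial_fit(X_batch, y_batch)  # update only when alarmed
        print(f"batch {t}: drift detected -> model updated")
    else:
        print(f"batch {t}: no drift, model left as-is")

Note that the alarm inspects only the input features, never the labels, which is what makes this style of monitoring label-agnostic: it can run before ground-truth outcomes such as mortality are known.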

Although machine learning models usually remain locked once approved for use, the researchers found that hospital-type-specific models leveraging transfer learning performed better than models trained on all available hospitals.
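The transfer-learning approach can be sketched the same way: pretrain a model on pooled data from every hospital, then fine-tune it on data from one hospital type rather than fitting from scratch. Again, the data and hyperparameters below are hypothetical assumptions, not the study's configuration.

# Hypothetical sketch of the transfer-learning idea: pretrain on pooled
# data from all hospitals, then fine-tune on one hospital type's data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Pooled "all hospitals" data (source domain).
X_all = rng.normal(0.0, 1.0, size=(5000, 5))
y_all = (X_all @ np.ones(5) > 0).astype(int)

# Smaller, distribution-shifted "community hospital" data (target domain).
X_comm = rng.normal(0.5, 1.2, size=(500, 5))
y_comm = (X_comm @ np.ones(5) > 2.5).astype(int)

# Pretrain on the pooled data, then continue training on the
# hospital-specific data instead of starting from scratch.
model = SGDClassifier(loss="log_loss", random_state=1)
model.fit(X_all, y_all)            # knowledge from the source domain
for _ in range(5):                 # a few fine-tuning passes
    model.partial_fit(X_comm, y_comm)

print("target-domain accuracy:", model.score(X_comm, y_comm))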

Using drift-triggered continual learning helped mitigate harmful data shifts caused by the COVID-19 pandemic and improved model performance over time.

Depending on the data it was trained on, an AI model can also develop a propensity for certain biases, leading to unfair or discriminatory outcomes for some patient groups.

"We demonstrate how to detect these data shifts, assess whether they negatively impact AI model performance, and propose strategies to mitigate their effects. We show there is a practical pathway from promise to practice, bridging the gap between the potential of AI in health and the realities of deploying and sustaining it in real-world clinical environments," says Dolatabadi.

The study is a crucial step towards the deployment of clinical AI models, as it provides strategies and workflows to ensure the safety and efficacy of these models in real-world settings.

"These findings indicate that a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can detect and mitigate harmful data shifts in Toronto's general internal medicine population, ensuring robust and equitable clinical AI deployment," says Subasri.

The paper, "Diagnosing and remediating harmful data shifts for the responsible deployment of clinical AI models," was published today in the journal JAMA Network Open.
