Artificial intelligence can scan a chest X-ray and determine whether an abnormality is fluid in the lungs, an enlarged heart or cancer. But being right is not enough, said Ngan Le, a University of Arkansas assistant professor of computer science and computer engineering. Clinicians should also understand how the computer reaches its diagnosis, yet most AI systems are black boxes whose "thought process" even their creators cannot explain.
"When people understand the reasoning process and limitations behind AI decisions, they are more likely to trust and embrace the technology," Le said.
Le and her colleagues developed a transparent and highly accurate AI framework for reading chest X-rays called ItpCtrl-AI, short for interpretable and controllable artificial intelligence.
The team explained their approach in "ItpCtrl-AI: End-to-end interpretable and controllable artificial intelligence by modeling radiologists' intentions," published in the current issue of Artificial Intelligence in Medicine.
The researchers taught the computer to look at chest X-rays the way a radiologist does. They recorded radiologists' gaze as the doctors reviewed chest X-rays, capturing both where they looked and how long they focused on a specific area. A heat map built from that eye-gaze dataset showed the computer where to search for abnormalities and which sections of the image required less attention.
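To make the idea concrete, here is a minimal sketch of how recorded fixations could be turned into such a heat map: each fixation deposits weight proportional to how long the radiologist dwelled there, and the result is smoothed and normalized. The data format, function name and smoothing width are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: convert eye-gaze fixations into a heat map.
# Assumes fixations arrive as (x, y, duration_seconds) tuples in image
# coordinates; the sigma value is a hypothetical choice, not from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(fixations, height, width, sigma=25.0):
    """Accumulate duration-weighted fixations, then smooth and normalize."""
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y, duration in fixations:
        if 0 <= y < height and 0 <= x < width:
            # Longer dwell time means a stronger signal at that location.
            heat[int(y), int(x)] += duration
    # Spread each point fixation over a neighborhood of pixels.
    heat = gaussian_filter(heat, sigma=sigma)
    if heat.max() > 0:
        heat /= heat.max()  # scale to [0, 1] for use as an attention target
    return heat

# Example: three fixations on a 1024 x 1024 chest X-ray.
fixations = [(512, 300, 1.8), (520, 310, 0.9), (200, 700, 0.4)]
heatmap = gaze_heatmap(fixations, height=1024, width=1024)
```

A map like this can serve as a supervision signal: regions the radiologists studied longest score near 1, and regions they passed over score near 0, telling the model where to concentrate its search.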
Creating an AI framework that uses a clear, transparent method to reach conclusions — in this case a gaze heat map — helps researchers adjust and correct the computer so it can provide more accurate results. In a medical context, transparency also bolsters the trust of doctors and patients in an AI-generated diagnosis.
"If an AI medical assistant system diagnoses a condition, doctors need to understand why it made that decision to ensure it is reliable and aligns with medical expertise," Le said.
A transparent AI framework is also more accountable, a legal and ethical concern in high-stakes fields such as medicine, self-driving vehicles and financial markets. Because doctors understand how ItpCtrl-AI reaches its conclusions, they can take responsibility for its diagnoses.
"If we don't know how a system is making decisions, it's challenging to ensure it is fair, unbiased, or aligned with societal values," Le said.