In a commentary in The American Journal of Bioethics, Anthony P. Weiss, MD, MBA, MSc, Chief Medical Officer at Beth Israel Deaconess Medical Center (BIDMC), argues that the current approach to regulating artificial intelligence (AI) applications used in medicine is inadequate. Instead, Weiss proposes a model that mirrors the supervisory relationship between trainees and attending physicians.
Weiss focuses on the growing use of Medical Adaptive Machine Learning Systems (MAMLS), systems that continually learn and change, and the regulatory conundrum they present. Currently, MAMLS are treated as something akin to medical devices, which must gain FDA approval before being used in the clinic. However, MAMLS are uniquely subject to what is known as the "update problem": a learning system's post-deployment adaptation, by definition, turns it into a machine quite different from the one approved as safe, making the standard FDA oversight process inadequate, Weiss notes.
"We may be better served by using a framework which considers MAMLS not a machine, but another form of learning healthcare provider," Weiss proposes, noting machines and humans are both fallible, biased and constantly adapting to a fluid healthcare landscape. "The apprenticeship approach is as old as medicine itself and remains the foundation of medical education."
Read the full commentary in The American Journal of Bioethics.
BILH Study Authors: Anthony P. Weiss, MD
COI: The author declares no competing financial or non-financial interests.
Citation: Weiss, A. P. (2024). Adaptive Machine Learning Systems in Medicine – More Learner, Less Machine. The American Journal of Bioethics, 24(10), 80–82. https://doi.org/10.1080/15265161.2024.2388740