AI Burden Hinders Healthcare Adoption: Study

University of York

The potential benefits of AI to patient care may be lost if urgent steps are not taken to ensure that the technologies work for the clinicians using them, a new White Paper warns.

The healthcare sector is one of the biggest areas of AI investment globally and is at the heart of many nations' public policies for more efficient and responsive healthcare systems. Earlier this year the UK Government set out strategies to 'turbocharge AI' in healthcare.

The White Paper – a collaboration between the Centre for Assuring Autonomy at the University of York, the MPS Foundation and the Improvement Academy hosted at the Bradford Institute for Health Research – says the greatest threat to AI uptake in healthcare is the "off switch".

If frontline clinicians see the technology as burdensome or unfit for purpose, or are wary of how it will affect their decision-making, their patients and their licences, then they are unlikely to want to use it.

Liability sinks

Among the key concerns in the paper is that clinicians risk becoming "liability sinks" – absorbing all legal responsibility for AI-influenced decisions, even when the AI system itself may be flawed.  

The White Paper builds on results from the Shared CAIRE (Shared Care AI Role Evaluation) research project, which ran in partnership with the Centre for Assuring Autonomy. The research examined the impact of six AI decision-support tools on clinicians, bringing together researchers with expertise in safety, medicine, AI, human-computer interaction, ethics and law.  

Professor Ibrahim Habli, from the University of York's Centre for Assuring Autonomy and Safety Lead on the Shared CAIRE project, said: "This White Paper offers clinicians, who are at the front-line of the use of these technologies in the NHS and wider healthcare sector, clear and concrete recommendations on using these tools safely.

"The research from which these recommendations were developed, involved insights from both patients and clinicians and are based on real-world scenarios and near-future AI decision-support tools, which means they can be applied to present day situations."

Autonomy

The team evaluated different ways in which AI tools could be used by clinicians - ranging from tools which simply provide information, through to those which make direct recommendations to clinicians, and those which liaise directly with patients. 

Clinicians and patients in the study agreed on the importance of preserving clinician autonomy: clinicians preferred an AI model that highlighted relevant clinical data, such as risk scores, without making explicit treatment recommendations - a preference for tools that inform rather than direct clinical judgment.

The White Paper also highlights that clinicians should be fully involved in the design and development of the AI tool they will be using, and that reform to product liability for AI tools is needed, due to significant challenges in applying the current product liability regime.

Burnout

Professor Tom Lawton, a consultant in Critical Care and Anaesthetics at Bradford Teaching Hospitals NHS Trust and Clinical and AI Lead on Shared CAIRE, said: "AI in healthcare is rapidly moving from aspiration to reality, and the sheer pace means we risk ending up with technologies that work more for the developers than clinicians and patients.

"This kind of failure risks clinician burnout, inefficiencies, and the loss of the patient voice - and may lead to the loss of AI as a force for good when clinicians simply reach for the off-switch. We believe that this White Paper will help to address this urgent problem." 

The White Paper provides seven recommendations to avoid the 'switch off' of AI tools, and the authors say the Government, AI developers and regulators should consider all of them with urgency.

Rapid change

Professor Gozie Offiah, Chair of the MPS Foundation, which funded the research, said: "Healthcare is undergoing rapid change, driven by advances in technology that could fundamentally impact on healthcare delivery. There are, however, real challenges and risks that must be addressed, chief among them the need for clinicians to remain informed users of AI, rather than servants of the technology."

The team has written to the regulators and the government minister to urge them to take on board the new recommendations. 

Further information:

Full seven recommendations from the White Paper:

AI tools should provide clinicians with information, not recommendations

Under the current product liability regime, the legal weight of an AI recommendation is unclear. Providing information, rather than recommendations, reduces the potential risk to both clinicians and patients.

Revise product liability for AI tools before allowing them to make recommendations

There are significant difficulties in applying the current product liability regime to an AI tool. Without reforms there is a risk that clinicians will act as a 'liability sink', absorbing all of the liability even where the system is a major cause of the wrong. 

AI companies should provide clinicians with the training and information required to make them comfortable accepting responsibility for an AI tool's use

Clinicians need to understand the intended purpose of an AI tool, the contexts it was designed and validated to perform in, and the scope and limitations of its training dataset, including potential bias, in order to deliver the best possible care to patients. 

AI tools should not be considered akin to senior colleagues in clinician-machine teams

It should be made explicit in new healthcare AI policy guidance and in guidance from healthcare organisations how clinicians should approach conflicts of opinion with the AI. Clinicians should not always be expected to agree with, or defer to, an AI recommendation in the same way they would for a senior colleague.  

Disclosure should be a matter of well-informed discretion

As the clinician is responsible for patient care, and disagreement with an AI tool could end up worrying the patient, it should be at the clinician's discretion, depending on context, whether to disclose to the patient that their decision has been informed by an AI tool.

AI tools that work for users need to be designed with users 

In the safety-critical and fast-moving healthcare sector, engaging clinicians in the design of all aspects of an AI tool – from the interface, to the balance of information provided, to the details of its implementation – can help to ensure that these technologies deliver more benefits than burdens.  

AI tools need to provide an appropriate balance of information to clinician users 

Involving clinicians in the design and development of AI decision-support tools can help find the 'Goldilocks' zone: the right level of information supplied by the AI tool.
