Clinicians Use AI For Health Records: Key Points

Authors

  • Stacy Carter

    Professor and Director, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong

  • Farah Magrabi

    Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation, Macquarie University

  • Yves Saint James Aquino

    Research Fellow, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong

Imagine this. You've finally summoned up the courage to see a GP about an embarrassing problem. You sit down. The GP says: "Before we start, I'm using my computer to record my appointments. It's AI - it will write a summary for the notes and a letter to the specialist. Is that OK?"

Wait - AI writing our medical records? Why would we want that?

Records are essential for safe and effective health care. Clinicians must make good records to keep their registration. Health services must provide good record systems to be accredited. Records are also legal documents: they can be important in insurance claims or legal actions.

But writing stuff down (or dictating notes or letters) takes time. During appointments, clinicians can have their attention divided between good record-keeping and good communication with the patient. Sometimes clinicians need to work on records after hours, at the end of an already-long day.

So there's understandable excitement, from all kinds of health-care professionals, about "ambient AI" or "digital scribes".

What are digital scribes?

This is not old-school transcription software, where you dictate a letter and the software types it up word for word.

Digital scribes are different. They use AI - large language models with generative capabilities - similar to ChatGPT (or sometimes GPT-4 itself).

The application silently records the conversation between a clinician and a patient (via a phone, tablet or computer microphone, or a dedicated sensitive microphone). The AI converts the recording to a word-for-word transcript.

The AI system then uses the transcript, and the instructions it is given, to write a clinical note and/or letters for other doctors, ready for the clinician to check.
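For the technically curious, here is a minimal sketch of that two-step pipeline in Python. It is illustrative only: it assumes the OpenAI Python SDK with its Whisper transcription and chat endpoints, whereas commercial scribes use their own (often proprietary) models, prompts and safeguards.

```python
# Illustrative sketch only: commercial digital scribes use their own
# proprietary models, prompts and safety checks. This version assumes
# the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Step 1: turn the recorded consultation into a word-for-word transcript.
with open("consultation.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

# Step 2: ask a large language model to draft a clinical note from the
# transcript, following the instructions it is given.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a medical scribe. Summarise the consultation "
                "transcript below as a structured clinical note."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

# The output is a draft for the clinician to check, not a finished record.
draft_note = response.choices[0].message.content
print(draft_note)
```

The important step is the last one: the output is a draft, not a record, until a clinician has checked it.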

Most clinicians know little about these technologies: they are experts in their speciality, not in AI. The marketing materials promise to "let AI take care of your clinical notes so you can spend more time with your patients."

Put yourself in the clinician's shoes. You might say "yes please!"

How are they regulated?

Recently, the Australian Health Practitioner Regulation Agency released a code of practice for using digital scribes. The Royal Australian College of General Practitioners released a fact sheet. Both warn clinicians that they remain responsible for the contents of their medical records.

Some AI applications are regulated as medical devices, but many digital scribes are not. So it's often up to health services or clinicians to work out whether scribes are safe and effective.

What does the research say so far?

There's very limited data or real-world evidence on the performance of digital scribes.

In a big Californian hospital system, researchers followed 9,000 doctors for ten weeks in a pilot test of a digital scribe.

Some doctors liked the scribe: their work hours decreased and they communicated better with patients. Others didn't even start using it.

And the scribe made mistakes - for example, recording the wrong diagnosis, or recording that a test had been done when in fact it still needed to be done.

So what should we do about digital scribes?

The recommendations of the first Australian National Citizens' Jury on AI in Health Care show what Australians want from health care AI, and provide a great starting point.

Building on those recommendations, here are some things to keep in mind about digital scribes the next time you head to the clinic or emergency department:

1) You should be told if a digital scribe is being used.

2) Only scribes designed for health care should be used in health care. Regular, publicly available generative AI tools (like ChatGPT or Google Gemini) should not be used in clinical care.

3) You should be able to consent, or refuse consent, for use of a digital scribe. You should have any relevant risks explained, and be able to agree or refuse freely.

4) Clinical digital scribes must meet strict privacy standards. You have a right to privacy and confidentiality in your health care. The whole transcript of an appointment may contain a lot more detail than a clinical note usually would. So ask:

  • are the transcripts and summaries of your appointments processed in Australia, or another country?
  • how are they kept secure and private (for example, are they encrypted, as sketched below)?
  • who can access them?
  • how are they used (for example, are they used to train AI systems)?
  • does the scribe access other data from your record to make the summary? If so, is that data ever shared?
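
"Encrypted" has a concrete meaning here. As a rough illustration only (assuming Python's widely used cryptography library, not any particular scribe product), encrypting a transcript at rest looks something like this; real systems would keep the key in dedicated key-management infrastructure, never alongside the data.

```python
# Rough illustration of encrypting a transcript "at rest", using the
# Python cryptography library (pip install cryptography). Not taken from
# any real scribe product. Real systems store the key in a hardware
# security module or cloud key-management service, never next to the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # symmetric key; must itself be stored securely
fernet = Fernet(key)

transcript = "Patient reports an embarrassing problem ..."
ciphertext = fernet.encrypt(transcript.encode("utf-8"))

# Without the key, the ciphertext is unreadable; with it, the text recovers.
recovered = fernet.decrypt(ciphertext).decode("utf-8")
assert recovered == transcript
```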

Is human oversight enough?

Generative AI systems can make things up, get things wrong, or misunderstand some patients' accents. But they will often communicate these errors in a way that sounds very convincing. This means careful human checking is crucial.

Doctors are told by tech and insurance companies that they must check every summary or letter (and they must). But it's not that simple. Busy clinicians might become over-reliant on the scribe and just accept the summaries. Tired or inexperienced clinicians might think their memory must be wrong, and the AI must be right (known as automation bias).
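
One way to guard against that over-reliance is to make human review a hard gate in the software rather than a habit. Here is a hypothetical sketch (all names invented for illustration) of a record system that simply refuses to file an AI-drafted note until a named clinician has signed it off:

```python
# Hypothetical sketch: a record system that refuses to file an AI-drafted
# note until a named clinician has reviewed it. All names are invented
# for illustration; no real product is described here.
from dataclasses import dataclass


@dataclass
class DraftNote:
    text: str
    ai_generated: bool = True
    reviewed_by: str | None = None  # set only after explicit sign-off


def file_note(note: DraftNote, record: list[str]) -> None:
    """Commit a note to the record, enforcing human review of AI drafts."""
    if note.ai_generated and note.reviewed_by is None:
        raise PermissionError("AI-drafted note needs clinician review first.")
    record.append(note.text)


record: list[str] = []
note = DraftNote(text="Summary of consultation ...")

try:
    file_note(note, record)          # fails: no one has reviewed the draft
except PermissionError as err:
    print(err)

note.reviewed_by = "Dr A. Example"   # explicit sign-off
file_note(note, record)              # now the note is filed
```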

Some have suggested these scribes should also be able to create summaries for patients. We don't own our own health records, but we usually have a right to access them. Knowing a digital scribe is in use may increase consumers' motivation to see what is in their health record.

Clinicians have always written notes about our embarrassing problems, and have always been responsible for these notes. The privacy, security, confidentiality and quality of these records have always been important.

Maybe one day, digital scribes will mean better records and better interactions with our clinicians. But right now, we need good evidence that these tools can deliver in real-world clinics, without compromising quality, safety or ethics.

Stacy Carter has received funding from the National Health and Medical Research Council, the Medical Research Future Fund, and National Breast Cancer Research Foundation for research on AI in healthcare, has received travel support to speak at conferences about consumer views on AI in health care, and has worked with the Australian Commission on Safety and Quality in Healthcare, consumer organisations and medical professional organisations on AI governance in health care.

Farah Magrabi receives funding from the National Health and Medical Research Council, the Digital Health Collaborative Research Centre and Macquarie University; has worked with the Australian Commission on Safety and Quality in Health Care on ensuring the clinical safety of digital health and AI enabled services. She is Co-Chair of the Australian Alliance for AI in Healthcare's Safety, Quality and Ethics Working Group; an Advisor to the Australian Digital Health Agency; and serves on the Therapeutic Goods Administration's Technical Reference group for Software as a Medical Device and AI.

Yves Saint James Aquino receives funding from the National Health and Medical Research Council. He is a member of the Australian Alliance for AI in Healthcare. He has worked with the Australian Commission on Safety and Quality in Healthcare on projects related to AI ethics and governance.
