Learning To Speak The Language Of AI In Healthcare

Queen Mary University of London

How can we reduce bias in healthcare data?

It's a complex, multifaceted question. One that requires us to think not just about quantitative data and trends, but about the language used by doctors and nurses across the country to describe, diagnose, and evaluate conditions ranging from cancer to paediatric anxiety.

My research examines the role AI can play in meeting this need for large-scale qualitative analysis through Natural Language Processing (NLP) programs.

You may have already noticed how artificial intelligence tools are proliferating in the consumer market, with generative-AI tools such as ChatGPT. Recently, however, research into large language models has been extending the capabilities of AI across sectors, including the large-scale analysis of patient records. This is raising debate around data privacy, the accuracy of neural networks, and the role of AI in public health. When it comes to our nation's health, there is understandably a pressing need to ensure any tool is rolled out in a safe and comprehensive fashion.

The benefits are great, but to achieve them we must ensure we understand the uses of AI in healthcare and how we can mitigate any risks. So how can we use NLP programs to overcome bias in health records and better treat conditions such as paediatric anxiety?

New voices in healthcare

AI has a unique ability to analyse complex data at a large scale. This is especially true of textual data, such as mental health records, which contain so much detail that it can be difficult for humans to aggregate them and identify trends.

Mental health notes are written qualitatively because there is no objective method for describing the complex and varied symptoms of mental health conditions.

We're using AI to address this by employing NLP programs called Transformers. Where less sophisticated tools struggle with the context and complexity of written language, Transformers can analyse textual context and resolve ambiguity. This means they can condense written notes into more accurate datasets.
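To make this concrete, here is a minimal sketch of how a Transformer can turn a free-text note into a structured set of symptom labels. It uses the open-source Hugging Face transformers library with a general-purpose zero-shot classifier; the model, the example note, the candidate labels, and the threshold are illustrative assumptions, not the pipeline used in my research.

```python
# A minimal sketch (not the study's actual pipeline): condense a
# free-text note into structured symptom labels with a Transformer.
from transformers import pipeline

# General-purpose zero-shot classifier; an illustrative model choice.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

note = ("Patient reports persistent worry before school, "
        "difficulty sleeping, and frequent stomach aches.")
candidate_symptoms = ["anxiety", "sleep disturbance",
                      "somatic complaints", "low mood"]

result = classifier(note, candidate_labels=candidate_symptoms,
                    multi_label=True)

# Keep only the labels the model scores above a simple (assumed)
# threshold, turning narrative text into a compact, analysable record.
structured = {label: round(score, 2)
              for label, score in zip(result["labels"], result["scores"])
              if score > 0.5}
print(structured)
```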

In my work detecting bias in paediatric mental health notes, AI was able to identify that the intensity of symptoms differs between males and females across age groups.

This is helping clinicians adjust their diagnostic process to ensure every child is given the specific and tailored care they require.

Mitigating bias through human-machine partnership

In a similar way, AI methods could be used to find discrepancies in complex data across other demographic groups. But there are still challenges with ensuring these NLP methods are as accurate as they can be.

It can sometimes be difficult for machines to resolve ambiguity in language when it is removed from real-world context.

For example, the sentence 'Where is the mouse?' may refer to the animal or the device. In a conversation between humans, we can use the context in which the conversation is taking place, say an office, to make an educated assumption that we're referring to the device. But an NLP program that does not have access to this context may instead make an uninformed assumption, which can lead to misunderstanding and anomalies in the data.
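A short sketch shows how a Transformer handles this: it assigns 'mouse' a different internal representation depending on the surrounding sentence, so the device sense and the animal sense can be told apart. The model and sentences below are illustrative, assuming the Hugging Face transformers and PyTorch libraries.

```python
# A minimal sketch: contextual embeddings separate the two senses
# of "mouse". Model and sentences are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

office = embed_word("She clicked the mouse to open the file.", "mouse")
animal = embed_word("A mouse scurried across the barn floor.", "mouse")
desk   = embed_word("The mouse needs new batteries.", "mouse")

cos = torch.nn.CosineSimilarity(dim=0)
# The two device uses should sit closer together than device vs. animal.
print("device vs device:", cos(office, desk).item())
print("device vs animal:", cos(office, animal).item())
```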

To resolve this, my recent research is developing protocols for human-machine interaction: in cases of doubt, the machine asks the human expert, who has access to the real-world context, to resolve the ambiguity.
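In outline, such a protocol can be as simple as a confidence threshold: the machine labels what it is sure about and escalates the rest to a person. The sketch below is a hypothetical illustration of that idea, not the published protocol; the threshold and the stand-in functions are assumptions.

```python
# A hypothetical human-in-the-loop step (an illustration, not the
# published protocol): the model commits to a label only when it is
# confident, and otherwise routes the text to a human expert.
CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off

def resolve(text, model_predict, ask_expert):
    """Label `text` automatically, deferring to a human when unsure.

    `model_predict` returns a (label, confidence) pair;
    `ask_expert` is any callable that obtains a human judgement.
    """
    label, confidence = model_predict(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "machine"
    # Ambiguous case: the human, who has the real-world context,
    # makes the final call, and the decision is recorded as theirs.
    return ask_expert(text), "human"

# Example usage with stand-in functions:
prediction = resolve(
    "Where is the mouse?",
    model_predict=lambda t: ("computer device", 0.55),
    ask_expert=lambda t: "computer device (office context)",
)
print(prediction)  # ('computer device (office context)', 'human')
```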

This partnership between human and machine is helping ensure that health record analysis can be done on a large scale in an accurate and considered way. Building these partnerships will be crucial to ensuring healthcare is personalised and highly accurate, and will empower research into new treatments and solutions.

The data debate

It is an uncomfortable truth that for AI to learn from a collective knowledge, it must harvest the data of individuals.

So it is unsurprising that the rapid development of AI has triggered serious discussions around privacy, with reactions ranging from refusing to share data at all to declaring privacy dead. However, AI has a unique capacity to distinguish between sensitive information and non-specific data elements.

Furthermore, because it can extract and summarise salient non-sensitive information and anonymise demographic data so that no specific individual can be identified, AI is actually a powerful tool for protecting our privacy.
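As an illustration, one common de-identification approach is to detect named entities and replace them with placeholders before any analysis. The sketch below assumes the Hugging Face transformers library and a generic English NER model; a real clinical system would use a model trained specifically for de-identification, and the example note is fictional.

```python
# A minimal sketch of NER-based de-identification: entities that could
# identify a person are replaced with placeholders before analysis.
# The model is a generic English NER model, an illustrative assumption.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

note = "Jane Smith, seen at Royal London Hospital, reports improved sleep."

redacted = note
# Replace entities from right to left so character offsets stay valid.
for entity in sorted(ner(note), key=lambda e: e["start"], reverse=True):
    placeholder = f"[{entity['entity_group']}]"
    redacted = (redacted[:entity["start"]] + placeholder
                + redacted[entity["end"]:])

print(redacted)  # e.g. "[PER], seen at [ORG], reports improved sleep."
```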

By leveraging these analytical capabilities, we can model the future and predict patient outcomes and trajectories in safe and secure ways. Understanding and exploiting these capacities will enable us to harness AI's potential responsibly.

An AI-powered future?

My findings to date only touch the surface of the potential capabilities of AI across healthcare.

Going forward, reliable and accurate modelling of patient data can help us improve the treatments we deliver, and we may even be able to model different treatments at scale in real time, making drug development and the rollout of new treatments safer and more effective.

But to achieve this, we must navigate the data debate with care and consideration. That's why my research aims to help answer the big questions. How can we use AI ethically? How can we preserve privacy and mitigate bias? How do we protect the safety and security of the public?

All of this comes down to ethical knowledge creation. If data is biased, the results of analysis will be biased as well, so we must create AI systems that are held to the highest standards of accuracy.

With the right tools and processes in place to mitigate underlying bias in data insights, AI can help improve medical treatments for patients around the world.

As we continue to develop and optimise NLP programs, we're helping AI to speak our language – so we can build a world where it has a powerful voice in the future of healthcare, and where everyone can get the specific treatment they need.

