As excitement builds throughout health and information systems worldwide over the rich potential benefits of new tools generated by artificial intelligence (AI), the UN health agency on Tuesday called for action to ensure that patients are properly protected.
Cautionary measures normally applied to any new technology are not being exercised consistently with regard to large language model (LLM) tools, which use AI for crunching data, creating content, and answering questions, the World Health Organization (WHO) warned.
"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world," the agency said.
As such, the agency proposed that these concerns be addressed, and clear evidence of benefit be measured, before such tools see widespread use in routine health care and medicine.
Avoiding health-related errors
While enthusiastic about the appropriate use of technologies to support healthcare professionals, patients, researchers, and scientists, WHO said these new AI-based tools require vigilance, especially in light of rapidly expanding platforms such as ChatGPT, Bard, BERT, and many others that imitate understanding, processing, and producing human communication.
For instance, these new tools can generate answers that may appear authoritative and plausible to an end user. The danger is that these responses may be completely incorrect or contain serious errors, which is especially concerning where health is at stake, WHO said.
They can also be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.