Author

- Lisa M. Given

Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

False and misleading health information online and on social media is on the rise, thanks to rapid developments in deepfake technology and generative artificial intelligence (AI).

This allows videos, photos and audio of respected health professionals to be manipulated - for example, to appear as if they are endorsing fake health-care products, or to solicit sensitive health information from Australians.
So, how do these kinds of health scams work? And what can you do to spot them?
Accessing health information online
In 2021, three in four Australians over 18 said they accessed health services - such as telehealth consultations with doctors - online. One 2023 study showed 82% of Australian parents consulted social media about health-related issues, alongside doctor consultations.
However, health-related misinformation (factually incorrect material) and disinformation (material intended to mislead) are growing exponentially worldwide.
From Medicare email and text phishing scams to sales of fake pharmaceuticals, Australians risk losing money - and damaging their health - by following false advice.
What is deepfake technology?
An emerging area of health-related scams is linked to the use of generative AI tools to create deepfake videos, photos and audio recordings. These deepfakes are used to promote fake health-care products or lead consumers to share sensitive health information with people they believe can be trusted.
A deepfake is a photograph or video of a real person, or a sound recording of their voice, that is altered to make the person appear to do or say something they haven't done or said.
Until recently, people used photo- or video-editing software to create fake images, such as superimposing someone's face onto another person's body. Adobe Photoshop even advertises its software's ability to "face swap" to "ensure everyone is looking their absolute best" in family photos.
While creating deepfakes isn't new, health-care practitioners and organisations are raising alarm bells about the speed and hyper-realism generative AI tools can achieve. When these deepfakes are shared via social media platforms, which significantly amplify the reach of misinformation, the potential for harm also increases.
How is it being used in health scams?
In December 2024, for example, Diabetes Victoria called attention to the use of deepfake videos showing experts from The Baker Heart and Diabetes Institute in Melbourne promoting a diabetes supplement.
The media release from Diabetes Australia made clear these videos were not real and were made using AI technology.
Neither organisation endorsed the supplements or approved the fake advertising, and the doctor portrayed in the video had to alert his patients to the scam.
This isn't the first time fake images of doctors have been used to sell products. In April 2024, scammers used deepfake images of Dr Karl Kruszelnicki to sell pills to Australians via Facebook. While some users reported the posts to the platform, they were told the ads did not violate the platform's standards.
In 2023, TikTok Shop came under scrutiny, with sellers manipulating doctors' legitimate TikTok videos to (falsely) endorse products. Those deepfakes received more than 10 million views.
What should I look out for?
A 2024 review of more than 80 scientific studies found several ways to combat misinformation online. These included social media platforms alerting readers about unverified information and teaching digital literacy skills to older adults.
Unfortunately, many of these strategies focus on written materials or require access to accurate information to verify content. Identifying deepfakes requires different skills.
Australia's eSafety Commissioner provides helpful resources to guide people in identifying deepfakes.
Importantly, they recommend considering the context itself. Ask yourself: is this something I would expect this person to say? Does this look like a place I would expect this person to be?
The commissioner also recommends people look and listen carefully, to check for:
blurring, cropped effects or pixelation
skin inconsistencies or discolouration
video inconsistencies, such as glitches, and lighting or background changes
audio problems, such as badly synced sound
irregular blinking or movement that seems unnatural
content gaps in the storyline or speech.
How else can I stay safe?
If you have had your own images or voices altered, you can contact the eSafety Commissioner directly for help in having that material removed.
The British Medical Journal has also published advice specific to dealing with health-related deepfakes, advising people to:
contact the person who is endorsing the product to confirm whether the image, video, or audio is legitimate
leave a public comment on the site to question whether the claims are true (this can also prompt others to be critical of the content they see and hear)
use the online platform's reporting tools to flag fake products and to report accounts sharing misinformation
encourage others to question what they see and hear, and to seek advice from health-care providers.
This last point is critical. As with all health-related information, consumers must make informed decisions in consultation with doctors, pharmacists and other qualified health-care professionals.
As generative AI technologies become increasingly sophisticated, there is also a critical role for government in keeping Australians safe. The release in February 2025 of the long-awaited Online Safety Review makes this clear.
The review recommended Australia adopt duty of care legislation to address "harms to mental and physical wellbeing" and grievous harms from "instruction or promotion of harmful practices".
Given the potentially harmful consequences of following deepfake health advice, duty of care legislation is needed to protect Australians and support them to make appropriate health decisions.
Lisa M. Given receives funding from the Australian Research Council. She is a Fellow of the Academy of the Social Sciences in Australia and the Association for Information Science and Technology. She is an Affiliate of the International Panel on the Information Environment.