A study of people's ability to detect 'deepfakes' has shown that humans perform fairly poorly at the task, even when given hints on how to identify video-based deception.
Dr Klaire Somoray and Dr Dan J Miller from James Cook University led the study. They said that high-quality deepfake videos, in which a person's likeness in an existing image or video is replaced with that of someone else, can now be generated with ease.
"This has raised concerns about this technology being used for nefarious purposes such as creating political misinformation. For instance, in March 2022, a manipulated video of Ukrainian President Volodymyr Zelensky was circulated, in which Zelensky is depicted appealing for Ukrainian soldiers to surrender," said Dr Somoray.
Dr Somoray and Dr Miller recruited more than 450 people and showed them 20 videos, 10 of them real and 10 of them deepfakes. Participants were then scored on their ability to judge which videos were real and which were not.
Half of the volunteers were given training on how to spot a deepfake video.
"This includes paying attention to things such as lighting, whether the cheeks and forehead looked too smooth or wrinkly, whether the agedness of the skin was similar to the agedness of the hair and eyes and whether facial hair looked real," the researchers said.
On average, participants correctly identified approximately 12 out of 20 videos.
"The poorest performers correctly categorised 5 out of 20 videos and the best performers correctly categorised 19 out of 20. Teaching people detection strategies did not impact detection accuracy or detection confidence, nor did time spent per video, or the average number of page clicks on each video," said Dr Somoray.
"The findings cast doubt on whether simply providing the public with strategies for detecting deepfakes can meaningfully improve detection.
"Also, worryingly, it appears that individuals may be overly optimistic regarding their abilities to ascertain the authenticity of individual videos," said Dr Miller.