eSafety Report: Tech Industry Fails on Terror, Extremism

eSafety Commissioner

Major tech companies can do more to tackle the proliferation of terrorist and violent extremist material and activity (TVE) on their platforms, according to a new transparency report released by eSafety.

In March 2024, eSafety issued transparency reporting notices under Australia's Online Safety Act to Google, Meta, WhatsApp, X (formerly Twitter), Telegram and Reddit, requiring each company to report on the steps it is taking to tackle this harmful content and conduct.

The report, which summarises their answers, highlights areas of good practice and exposes shortfalls, both in the detection and removal of TVE and in the companies' efforts to prevent their platforms being misused to propagandise, radicalise and recruit.

eSafety Commissioner Julie Inman Grant said there had been numerous instances of online services being exploited by violent extremists, contributing to online radicalisation and real-world threats to public safety.

"Ever since the 2019 Christchurch attack, we have been particularly concerned about the role of livestreaming, recommender systems and of course now AI, in producing, promoting and spreading this harmful content and activity," Ms Inman Grant said.

"Google told us that during the reporting period (1 April 2023 to 29 February 2024) it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist material or activity generated by Gemini, the company's own generative AI, and 86 user reports of suspected AI generated child sexual exploitation and abuse material.

"This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated. Safety by Design must be deployed throughout every stage of the AI life cycle.

Ms Inman Grant said the report contained a range of world-first insights about how tech platforms – including first-of-its-kind information from Telegram – are dealing with the online proliferation of terrorist and violent extremist material.

"For instance, Telegram, WhatsApp and Meta's Messenger did not employ measures to detect livestreamed terrorist and violent extremism despite the fact that the 2019 Christchurch attack was livestreamed on another of Meta's services, Facebook Live.

"We are concerned about the safety deficiencies that still exist in Facebook today, with users unable to report livestreamed TVE in-service if they are watching without being logged in. A user not logged in to YouTube faces the same hurdle.

The report also unearthed inconsistencies in how this material is tackled across services owned by the same company. For example, WhatsApp did not prohibit all organisations on parent company Meta's Dangerous Organisations and Individuals list from using WhatsApp's private messaging service.

"This discrepancy may mean that terrorist and violent extremist organisations are able to operate on parts of WhatsApp without action taken against them by the service," Ms Inman Grant said.

Inconsistencies were also uncovered in baseline practices, such as how companies detect TVE using highly effective hash-matching technologies.

Hash-matching technology creates a unique digital signature (known as a 'hash') of an image, which is then compared against the signatures of other images to find and identify copies. Some implementations report an error rate as low as one in 50 billion.
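To make the difference between exact and 'fuzzy' matching concrete, the minimal Python sketch below contrasts a cryptographic hash, which matches only byte-identical files, with a simple perceptual 'difference hash' that tolerates small alterations. It is purely illustrative: the dHash algorithm, the threshold and the hash database shown are generic stand-ins, not the proprietary systems platforms actually deploy (such as PhotoDNA or the GIFCT hash-sharing database).

```python
# Illustrative sketch only: a generic stand-in for industrial hash-matching
# systems, using the Pillow imaging library (pip install Pillow).
import hashlib
from PIL import Image

def exact_hash(path: str) -> str:
    """Cryptographic hash: matches only byte-identical copies of a file.
    Any re-encode, crop or watermark yields a completely different hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dhash(path: str, size: int = 8) -> int:
    """Simple perceptual 'difference hash': encodes brightness gradients,
    so near-duplicates (recompressed, resized or lightly edited copies)
    produce hashes that differ in only a few bits."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())  # grayscale pixels, row-major
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def matches_known(candidate: int, known_hashes: set[int],
                  max_distance: int = 5) -> bool:
    """Flag an image whose perceptual hash is within a small Hamming
    distance of any hash of previously identified material."""
    return any(bin(candidate ^ known).count("1") <= max_distance
               for known in known_hashes)
```

Because a cryptographic hash changes completely the moment a file is re-encoded or edited, exact matching alone misses altered copies; perceptual approaches like the one sketched above are what allow edited variants to be caught, which matters for the findings that follow.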

Google reported that it used hash-matching technology to detect "exact" matches of TVE, rather than altered versions, on YouTube and on shared content in the consumer version of Google Drive, even though technology capable of detecting these altered variants is available.

"This is deeply concerning when you consider in the first days following the Christchurch attack, Meta stated that over 800 different versions of the video were in circulation," Ms Inman Grant said.

"And Telegram said while it detected hashes of terrorist and violent extremist images and videos it had previously removed from its service, it did not utilise databases of known material from trusted external sources such as the Global Internet Forum to Counter Terror (GIFCT), or Tech Against Terrorism.

The report also found that WhatsApp, Reddit and Meta's Threads took too long to respond to user reports of terrorist and violent extremism. On average, WhatsApp took a day to respond, Reddit 1.3 days and Threads 2.5 days. Telegram reported taking 18 hours to action reports on Chats and Secret Chats.

Reddit and Telegram were also asked questions about the steps they are or are not taking to tackle the proliferation of child sexual exploitation and abuse material on their services. These platforms had not previously received a reporting notice from eSafety focussed on this egregious harm. A summary of responses can also be found in the full transparency report.

Last week, eSafety issued Telegram an infringement notice of $957,780 for failing to respond to the transparency reporting notice by the deadline; its response arrived more than five months late. Telegram has since provided information to eSafety, which features in this report.

X (formerly Twitter) sought review in the Administrative Appeals Tribunal, now the Administrative Review Tribunal, of eSafety's decision to issue the notice. As a result, the report does not feature responses from X.

Key findings:

  • Messenger did not take steps to detect livestreamed TVE, despite the use of another Meta product, Facebook Live, in the 2019 Christchurch attack.
  • There was no mechanism for users not logged in to Facebook or YouTube to report livestreamed TVE.
  • WhatsApp rolled out Channels (which is not end-to-end encrypted) without hash-matching for known TVE, and reported that it only began working on implementing it during the report period. A key principle of Safety by Design, and of the Expectations, is that safety should be built into a service or feature at the outset, rather than added later.
  • Google received 258 user reports of suspected AI-generated synthetic TVE produced by Gemini, and 86 user reports of suspected AI-generated synthetic child sexual exploitation and abuse material (CSEA) produced by Gemini, during the report period. Google was unable to say how many of these reports were confirmed to contain TVE or CSEA.
  • WhatsApp took more than a day, Threads 2.5 days and Reddit 1.3 days to respond to user reports of TVE.
  • Google only used hash-matching to detect exact matches of TVE content, even though technology which can detect varied versions of TVE is available. In the days following the Christchurch attack, Meta reported over 800 different versions in circulation.
  • Although providers used tools to proactively detect TVE, in some cases tools were limited:
    • Telegram used hash-matching tools to identify known TVE content on private groups and private channels, but it did not use tools to detect new TVE on those same parts of the service.
    • Telegram did not use any hash-matching tools on Chats (which is not end-to-end encrypted), or on user reports relating to Secret Chats.
    • Telegram detected hashes of TVE images and videos it had previously removed from its service, but it did not source hashes of known TVE material from external sources such as the GIFCT or Tech Against Terrorism.
    • WhatsApp reported that it detected new TVE video in user reports but did not detect new TVE images.
    • Google reported that it detected new TVE videos in Drive (shared content) but not new TVE images in Drive (whether stored or shared).
  • Reddit's and WhatsApp's human moderators covered 13 and six languages respectively, including only one of the top five languages other than English spoken in Australian homes. Google covered approximately 80 languages and Meta 109, including all of the top five (Arabic, Cantonese, Mandarin, Vietnamese and Punjabi). Telegram covered 47 languages, but only two of the top five spoken in Australian homes.