Australia's eSafety Commissioner has issued legal notices to tech giants including Apple, Google, Meta and Microsoft, requiring the companies to report to the regulator every six months about measures they have in place to tackle online child sexual abuse.
Issued under Australia's Online Safety Act, the notices were also sent to Discord, Snap, Skype and WhatsApp. They require all recipients to explain how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and, where applicable, the production of "synthetic" or deepfaked child abuse material created using generative AI.
For the first time, the notices require tech companies to report periodically to eSafety over the next two years, with eSafety publishing regular summaries of the findings to improve transparency, highlight safety weaknesses and incentivise improvements.
eSafety Commissioner Julie Inman Grant said the companies were chosen partly based on answers many of them provided to eSafety in 2022 and 2023 exposing a range of safety concerns when it came to protecting children from abuse.
"We're stepping up the pressure on these companies to lift their game," Ms Inman Grant said. "They'll be required to report to us every six months and show us they are making improvements.
"When we sent notices to these companies back in 2022 and 2023, some of their answers were alarming but not surprising, as we had suspected for a long time that there were significant gaps and differences across services' practices. In our subsequent conversations with these companies, we still haven't seen meaningful changes or improvements to these identified safety shortcomings.
"Apple and Microsoft said in 2022 that they do not attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services. This is despite the fact that these file storage services are well known to serve as havens where child sexual abuse material and pro-terror content can persist and thrive in the dark.
"We also learnt that Skype, Microsoft Teams, FaceTime, and Discord did not use any technology to detect live-streaming of child sexual abuse in video chats. This is despite evidence of the extensive use of Skype, in particular, for this long-standing and proliferating crime.
"Meta also admitted it did not always share information between its services when an account is banned for child abuse, meaning offenders banned on Facebook may be able to continue perpetrating abuse through their Instagram accounts, and offenders banned on WhatsApp may not be banned on either Facebook or Instagram."
eSafety also found that eight different Google services, including YouTube, were not blocking links to websites known to contain child abuse material. This is despite the availability of databases of these known abuse websites that many services use.
Despite eSafety investigators regularly observing use of Snapchat for grooming and sexual extortion, eSafety found that the service was not using any tools to detect grooming in chats.
"The report also unearthed wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services. Back in 2022, Microsoft said it took two days on average to respond, or as long as 19 days when these reports required re-review, which was the longest of all the providers. Snap, on the other hand, reported responding within four minutes.
"Speed isn't everything, but every minute counts when a child is at risk.
"These notices will let us know whether these companies have made any improvements in online safety since 2022 and 2023, and help ensure they remain accountable for harm still being perpetrated against children on their services.
"We know that some of these companies have been making improvements in some areas – this is the opportunity to show us progress across the board."
Key potential safety risks considered in this round of notices include the ability for adults to contact children on a platform and the risk of sexual extortion, as well as features such as livestreaming, end-to-end encryption, generative AI and recommender systems.
The transparency notices, issued under Australia's Basic Online Safety Expectations, are designed to work hand-in-hand with eSafety's industry codes and standards, which require the online industry to take meaningful action to combat child abuse and other class 1 material on their services.
Compliance with a notice is mandatory, and there may be financial penalties of up to $782,500 a day for services that do not respond.
The companies will have until 15 February 2025 to provide their first round of responses.