Tech Giants' Safety Codes Fall Short for Kids Online


In July last year, Australia's eSafety Commissioner, Julie Inman Grant, directed tech companies to develop codes of practice to keep children safe from online porn and harmful content. Now, after seven months, the industry has submitted draft codes to eSafety for approval.

Author

  • Toby Murray

    Professor of Cybersecurity, School of Computing and Information Systems, The University of Melbourne

eSafety is currently assessing the draft codes.

Assuming Inman Grant approves the new codes, what can we expect the future to look like for children and teens online? And how effective will the proposed codes be at protecting children?

A coordinated approach

The codes submitted for approval were developed by a group of industry associations.

They cover social media platforms such as Facebook and Snapchat. But they also cover internet service providers, search engines such as Google, online messaging services such as WhatsApp, online gaming platforms, as well as the manufacturers of the computers, mobile phones and software we use to access online services.

The codes will also cover online app stores such as those operated by Apple and Google. However, app store codes aren't expected to be released until late March.

As well as covering a range of companies, the codes also cover a range of harms. They aim to protect kids not only from online pornography but also content that promotes self-harm, eating disorders, suicide and violence.

Given the difficulty of protecting kids from this kind of content, this coordinated approach is absolutely essential.

If the draft codes are approved, companies will have six months to implement the proposed safety measures. They will face fines of up to A$50 million for non-compliance.

What's in store?

The draft codes are broken up across different parts of the tech ecosystem. The requirements they place on individual tech platforms depend on the danger harmful content on each platform poses to children.

Large social media platforms such as Facebook, Instagram and X (formerly Twitter) are likely to be categorised among the most dangerous. That's because it's possible for users to access extremely harmful content such as child sexual abuse or terrorist material on these platforms. Plus, these platforms serve millions of people and also allow users to create public profiles, maintain "friend" lists, and share content widely.

According to the draft codes, these platforms will need to implement the most stringent safety measures. These include using age-assurance measures to prevent children under the minimum age allowed to access the service from doing so, having an appropriately resourced trust and safety team, and using automated systems to detect and remove child abuse and pro-terror material.

On the other hand, less risky platforms won't be subject to any requirements under the draft codes. These include online platforms that allow only limited communication within a specific group of people and without social media features such as friends lists and public profiles. Platforms for communication within a primary school such as Compass would be among the least risky.

Online search engines such as Google and Bing - which provide access to adult and self-harm content, but are legitimately used by children - will be required to implement appropriate measures to prevent children accessing that content.

This may include enabling safe-search features and establishing child-user accounts. These accounts would include features that automatically blur harmful content and filter it from search results and recommendation algorithms.
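To illustrate what "enabling safe-search features" can mean in practice, the sketch below rewrites an outgoing search URL so the engine's safe-search mode is forced on for a child account. The `safe=active` query parameter matches Google's documented safe-search switch, but the helper name and the rewriting approach are illustrative assumptions, not measures specified in the draft codes.

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def force_safe_search(url: str) -> str:
    """Rewrite a search URL so safe-search mode is switched on.

    Illustrative sketch only: `safe=active` is Google's safe-search
    parameter; other engines use different parameter names.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["safe"] = "active"  # override any user-supplied setting
    return urlunparse(parts._replace(query=urlencode(query)))

print(force_safe_search("https://www.google.com/search?q=example"))
# → https://www.google.com/search?q=example&safe=active
```

A child-account browser or network gateway could apply such a rewrite to every search request, so the restriction holds even if the child changes the setting in the search engine's own interface.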

The codes also cover emerging harmful technology, such as deepfake porn apps powered by generative artificial intelligence. Like traditional porn sites, these apps will be required to implement age-assurance technology to prevent children from using them.

What about age assurance?

The codes specifically define what age-assurance measures are considered "appropriate".

Importantly, the fact that an age-checking system can be bypassed doesn't automatically disqualify it. Instead, age-assurance measures must involve "reasonable steps" to ensure someone is of age, while balancing privacy concerns.

Requiring users to self-declare their age is not appropriate. So expect to see porn sites do away with click-through dialogs asking visitors to declare they are really adults.

Instead, sites will have a range of options for assuring their users' ages, including checking photo ID, estimating age from facial images or video, having a parent attest to a child's age, running credit card checks, or using AI-based methods for age inference.

Different measures are likely to be used by different companies and systems.

For example, Apple has already announced a range of new child safety measures that appear to align with many parts of the draft codes. These include making it easier for parents to set up child safety features on kids' iPads and iPhones, using a parent's payment information to ensure they can safely attest to their child's age, as well as app store integration of child safety features to enable app developers to make their apps safer for children.

On the other hand, adult sites and apps are likely to adopt age-assurance mechanisms that users perceive to be more private. For paying subscribers, they are likely to leverage the credit card information already on file to verify those users' ages.

Non-subscribers may instead be required to submit to a facial scan or other AI-based methods to estimate their age.

Publicly available data on state-of-the-art systems for age estimation from facial images suggests the best systems have an average error of 3.7 years.

Whether eSafety will agree such technology is "appropriate" remains to be seen. However, if it is adopted, there is a real risk many teens will remain able to access online porn and harmful deepfake apps despite these new codes.


Toby Murray receives funding from Google. He is director of the Defence Science Institute, which receives funding from Victorian and Tasmanian state governments, and from the Commonwealth Department of Defence.

Courtesy of The Conversation. This material from the originating organization/author(s) may be point-in-time in nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).