eSafety Warns of Terrorist Content on Social Media

The spread of terrorist and extremist material on social media, and its role in online radicalisation, remains a concern both here in Australia and overseas.

We have very real concerns about how violent extremists weaponise features such as live-streaming, algorithms and recommender systems to promote or share this hugely harmful material.

We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to stoke division and cause harm.

The tech companies that provide these services have a responsibility to ensure their features cannot be exploited to perpetrate such harm. This goes to the heart of eSafety's Safety by Design principles and the mandatory industry codes and standards.

The 2019 terrorist attacks in Christchurch, New Zealand and Halle, Germany, and the 2022 attack in Buffalo, New York, underscore how social media and other online services can be exploited by violent extremists, contributing to radicalisation, loss of life and threats to public safety.

More recently, riots on the streets of the UK have demonstrated how invective fomented on social media can spill over into real-life conflict and harm.

At eSafety, we continue to receive reports of perpetrator-produced material from such attacks, including Christchurch, being reshared on mainstream platforms.

In March, we sent transparency notices under Australia's Online Safety Act to Google, Meta, Twitter/X, WhatsApp, Telegram and Reddit to find out what they are doing to protect Australians from terrorist and violent extremist material and activity. We will publish our findings, as appropriate, in due course.

Transparency is a key pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call, global initiatives that many of these companies have signed up to, but it is still not clear whether they are living up to those commitments. It is of great concern that we do not know the answers to a number of fundamental questions about the systems, processes and resources these tech behemoths have in place to keep Australians safe.

Disappointingly, none of these companies has chosen to provide this information through the OECD's existing voluntary framework, which was developed in conjunction with industry. This shows why regulation and mandatory notices are needed to understand the true scope of the challenge, and to make these companies more accountable for the content and conduct they amplify on their platforms.