Seriously harmful mis- and disinformation poses a threat to public safety, the integrity of elections, democracy and national security, and 80% of Australians want action.
The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 would combat seriously harmful content on digital platforms, while maintaining strong protections for freedom of speech.
Members of the House crossbench worked constructively with the Government over the latter half of this year to refine the Bill and support its passage through the House.
The Coalition committed to legislating safeguards when in Government, but has chosen to place partisanship above the public interest.
Based on public statements and engagements with Senators, it is clear that there is no pathway to legislate this proposal through the Senate.
The Government will not proceed with the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.
The Government invites all Parliamentarians to work with us on other proposals to strengthen democratic institutions and keep Australians safe online, while safeguarding values like freedom of expression.
It is incumbent on democracies to grapple with these challenges in a way that puts the interests of citizens first.
Alternative proposals include:
Strengthening offences targeting the sharing of non-consensual, sexually explicit deepfakes - a vital and urgent first step secured by the Attorney-General.
Enforcing truth in political advertising for elections - a proposal progressed by the Special Minister of State.
Regulating Artificial Intelligence - reforms being progressed by the Minister for Industry and Science.
Mis- and disinformation is an evolving threat and no single action is a perfect solution, but we must continue to strengthen safeguards to ensure digital platforms offer better protections for Australians.
BACKGROUND TO THE BILL
The Bill would have ushered in an unprecedented level of transparency, holding big tech to account for their systems and processes to prevent and minimise the spread of harmful misinformation and disinformation online, with enforceable rules to:
Address seriously harmful content with measures around the use of algorithms, bots, fake accounts, malicious deepfakes, advertising and monetisation.
Provide transparency with the publication of risk assessments, policies, and reports as well as a data access scheme for independent researchers.
Empower users with complaints and dispute procedures to challenge the content moderation decisions of platforms as well as measures to support media literacy.