Law Reform in the Age of AI

*Check against delivery*

Acknowledgments omitted

Welcome to my hometown.

I grew up right here in Fremantle. My primary school is around the corner on Henry Street. My childhood home was on the same road as Fremantle Prison, a building now on the World Heritage List. Back then, it was home to 337 of Western Australia's prisoners.

I enjoyed the freedom of a social media-free childhood. The only technology that terrified me was the Swan Blimp, roaring in the skies above Esplanade Park while Fremantle boomed with the America's Cup. So technology can scare us, but it can also enable us to achieve greatness.

I now live in North Perth. The Australia II still lives in Fremantle, at the Maritime Museum. It was first launched in 1982, a year away from its history-making America's Cup win, with a winged keel and the best 3D design the 1980s could offer.

As the TELEX message that was sent amongst the designers said:

"ABOUT TO TAKE YACHT DESIGN INTO THE SPACE AGE.

DARTH VADER LOOKS GOOD IN COMPUTER IN 3 DIMENSION WILL TEST ON WEDNESDAY 10th JUNE, BEN SKYWALKER"

That was designer Ben Lexcen's cryptic Telex message of May 1981. The Australia II team did enter the yacht race space age. And far away down in Hobart, an eccentric politician made a bold prediction.

Barry Jones had just published a book called 'Sleepers, Wake!', exploring the potential impacts of the ICT Revolution on society. The book suggested that technological innovation would be a major component of economic growth, and that the increased accessibility of information would transform our lives in almost every conceivable way. The book was ridiculed by some, and its claims were regarded by many as wildly exaggerated.

Barry Jones delivered his famous prediction in a speech to a public meeting in Hobart. He predicted that by the year 2000 there would be more computers in Tasmania than cars. This prediction was considered laughable. The Mercury newspaper suggested he had lost his grip on reality. But he was right.

Many of us start our days by turning off the alarm blaring out of our small handheld smartphone computers. We get up and dressed and put on our smart watches. We get into our car and use our GPS systems to get to work, where we log on to our work computers for a long day ahead before we can watch some TV on our smart TVs at home.

Few in 1982 would have had the foresight to make this prediction, and few had the foresight to take it seriously.

So, what technological advancements are we in danger of overlooking in 2024? The obvious answer is of course Artificial Intelligence.

The age of AI

The age of AI is now here. AI is no longer the stuff of science fiction; it is here, and it is already embedding itself into our daily lives. The names are cute. Inoffensive. Copilot. ChatGPT. Gemini. Cyberdyne Systems Model 101.

Well, that last one is the official name of The Terminator, but I am sure the others are harmless. Australians are already using AI in the workplace. Teachers are providing students with personalised AI chatbots that offer additional tutoring to those needing support. AI is helping medical doctors scan vast data sets and gather medical insights that were previously out of reach. In the public sector, the Australian Government recently conducted a six-month trial of Copilot for Microsoft 365. And of course, AI is also impacting the legal sector.

Recent surveys suggest that a majority of lawyers are already using AI in their work. They are also optimistic about the potential for AI to bring significant innovation to the sector. AI tools are being developed to assist lawyers with document review, legal research and more. Most of us wish we had time to be an incredible professional, as well as an accomplished artist, writer and musician.

Generative AI is that best version of our imagined selves, producing music, art and video that have already won artistic competitions when submitted anonymously alongside the work of human artists.

AI regulation

This is where wonder and risk collide. There are serious risks associated with the development and deployment of AI. AI has implications for copyright law, where vast amounts of data and creative work have been scraped from web sources to train AI models. AI-generated content can also be created to mimic the works of existing Australian artists and creatives, raising serious concerns about the future of their work and livelihoods.

As Australian Artist Ben Lee said on AI:

"I don't think art has ever succeeded in trying to fight technology…

[but] we have to consider what we will lose if we put all our eggs in that basket."

And even if we aren't recording artists - every Australian has eggs in this basket. We know the risks of having our sensitive data harvested and used. Your information could be training AI without your knowledge or consent.

AI creates potential challenges in the areas of law enforcement and criminal behaviour, notably in relation to cybercrime. So we must consider the role of regulation and legislative frameworks for the development of AI.

I am aware I am in a room of legal experts. I expect many of you have an interest in AI, and equally in the current opportunities for law reform in the age of AI.

It is worth noting that Barry Jones, when he made his famous prediction, was no great scientist. He studied arts and law. He had been a schoolteacher. It was his deep thinking about Australian society and the road ahead of us that meant he could not avoid the impacts of emerging technologies.

Similarly, you all witness the iterative way in which law and society steadily adapt to each other, every day in the course of your work. Like Barry, you are in a position to see and understand the transformative impacts of new technology on how a society and its legal framework function. I hope you engage with and contribute to the current conversation about the safe and effective development and implementation of AI in Australia.

Law reform in the age of AI

Things are changing. Fast.

Our regulatory approach is engaged with those changes. It is the role of law makers to balance risk with opportunity. To shield the Australian public from the dangers of AI, while not restricting the potential for AI to deliver positive and profound improvements in living standards.

Later this month the Susan McKinnon Foundation will release new research on AI. Its report, 'Partisanship, polarisation and social cohesion in Australia', surveyed 3,000 Australians. It found familiar divides between progressives and conservatives across many issues.

Surprisingly, in one area they found agreement across left and progressive, centre and moderate, and right and conservative respondents. All groups returned similar results on the increased use of AI in daily life, and all opposed that intrusion: negative 15 per cent support from the left and progressives, and negative 20 per cent support from the right and conservatives.

So Australians are looking for leadership on how best to protect themselves from potential harms. When conducting law reform we must keep front of mind the rights and needs of those who are most vulnerable, to make sure that those who are most disadvantaged are not put at further disadvantage.

Some legislation is developed for specific technologies, like gene technologies or nuclear technologies. Other legislation is crafted to be technology neutral.

The Australian Government is continually working to ensure that our robust system of existing legislative frameworks is fit-for-purpose. Capable of responding to harms, including harms enabled by AI.

Australians know that the regulation of AI is a challenging issue. They recognise the potential dangers and benefits and the importance of getting it right. Where the community has expectations, law reform must respond to and uphold them. The laws of Australia are, ultimately, a mirror held up to our society. Our laws must reflect the expectations and beliefs of the diverse individuals who make up this country.

International developments

The questions Australia faces are not ours alone. The United Nations has alerted the world to the growing energy demands of AI.

Noting:

"A request made through ChatGPT, an AI-based virtual assistant, consumes 10 times the electricity of a Google Search, reported the International Energy Agency.

While global data is sparse, the agency estimates that in the tech hub of Ireland, the rise of AI could see data centres account for nearly 35 per cent of the country's energy use by 2026."

Then there is the European Union Artificial Intelligence Act - designed specifically to address the unique high-risk considerations associated with AI, by assigning AI systems and applications to three risk categories:

  1. unacceptable risk
  2. high-risk, and
  3. minimal risk.

In this framework, unacceptable risk systems and applications are prohibited.

Last year in the UK, an AI white paper was released arguing for a risk-based approach to AI regulation. The paper classifies AI systems based on the level of risk they pose. It emphasises AI systems that are human-centric and trustworthy, while also promoting innovation through AI innovation hubs that support research and development.

In the United States, the first state-based AI legislation has been passed. Known as the Colorado AI Act, it will come into effect from February 2026. The Act requires developers of high-risk artificial intelligence systems to use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination.

Canada has proposed legislation, the Artificial Intelligence and Data Act, which is broadly aligned with the EU AI Act. The Bill would establish initial classes of high-impact AI systems and parameters for government to deem further classes of systems as high-impact. It would also require developers and deployers of general-purpose high-risk AI systems to establish accountability frameworks, and it would provide new enforcement powers for the AI and Data Commissioner.

These are all developments that the Australian Public Service is monitoring closely.

AI regulation in Australia

I began this speech talking about the 1980s here in Fremantle. The 1980s in Canberra saw computers occupy the desk real estate of the public service. Forty years ago, the Attorney-General's Department assisted with the Copyright Amendment Act 1984, clarifying copyright protection for computer programs.

The same year the Standing Committee of Attorneys-General "agreed on the desirability of uniform legislation to penalise the appropriation or use of computer data without lawful authority or excuse".

Forty years on, the technology has changed, but the work continues. The Minister for Industry and Science recently held consultations on proposals for introducing mandatory guardrails for AI in high-risk settings. This process is informing the Government's consideration of how we can most effectively regulate the development and deployment of AI.

The Senate Select Committee on Adopting AI is currently investigating opportunities and impacts for Australia arising out of the uptake of AI technologies. The Committee is scheduled to present its final report on the 26th of November.

The Australian Public Service is also working to ensure that government serves as an exemplar for the responsible use of AI. On the 1st of September 2024, the Digital Transformation Agency introduced a policy for responsible use of AI in government, providing a framework for the safe and responsible use of AI by public servants.

Attorney-General's Department - AI law reform

I would like to also talk specifically about some of the law reform being led by the Commonwealth Attorney-General relevant to AI regulation. This reform crosses a number of policy areas, including privacy, copyright, automated decision making, cybercrime, and technology facilitated abuse.

Privacy reforms

In the privacy space, Australians are becoming increasingly aware that the advent of AI technologies has introduced the potential for new privacy risks. While AI has the potential to provide major economic benefits, we know Australians are also cautious about the use of AI to make decisions which may affect them.

In a survey by the Office of the Australian Information Commissioner, respondents made clear they want conditions in place before AI is used in this way.

In particular - they want to be told when this is the case. Our Government believes that entities have a responsibility to protect Australians' personal information and ensure individuals have control and transparency over how it is used.

On 12 September 2024, the Attorney-General introduced legislation to Parliament to reform the Privacy Act. The Bill implements a first tranche of reforms, agreed by Government in its response to the Privacy Act Review, ahead of consultation on a second tranche of reforms. The Bill will amend the Privacy Act to enhance its effectiveness, strengthening the enforcement tools available to the privacy regulator, while better facilitating safe overseas data flows.

The Bill will also introduce a statutory tort for serious invasions of privacy, and criminal offences for the malicious release of an individual's personal data online, otherwise known as 'doxxing.' Importantly, the Bill will provide individuals with transparency about the use of their personal information in automated decisions which significantly affect their interests. Entities will need to specify the kinds of personal information used in these sorts of decisions in their privacy policies.

The Government is approaching this important reform work carefully. Ensuring increased privacy protections are balanced alongside other impacts, so that we deliver the fairest outcome for all Australians.

Copyright and AI

AI and copyright issues are another complex global challenge needing to be worked through in an Australian context. The Attorney-General's Department is considering complex and contested AI and copyright issues in a careful and consultative way. This approach is consistent with advice from industry stakeholders that participated in a series of Copyright Roundtables in 2023.

The Government is conscious of the need for balance. Between - on the one hand - the urgency with which the rapid development and adoption of AI demands a policy response. And on the other - the importance of taking the time necessary to get that response right, avoiding harmful repercussions.

In December 2023, the Attorney-General established the Copyright and AI Reference Group as a standing mechanism for engagement with stakeholders. These stakeholders represent a wide range of sectors, including the creative, media and technology sectors. The Reference Group's role is to consider copyright and AI issues. The Attorney-General's Department's ongoing consultation with the Reference Group is informing the development of policy for Government's consideration.

This work on copyright is part of the Government's broader engagement on AI-related matters. It complements the work being led by the Minister for Industry and Science on the safe and responsible use of AI.

Automated decision-making

Automated decision making (or 'ADM') has long been part of administrative processes, inside and outside of government. When implemented thoughtfully and responsibly - which is the majority of cases - ADM delivers faster, more efficient and more accurate services for all of us. From e-Gates at airports through to faster processing of claims, these benefits can meaningfully improve the services individuals receive from Government.

However, where ADM is used to make decisions that adversely affect people's rights or wellbeing, the community is understandably concerned. In particular, concerns centre on how these automation and artificial intelligence technologies are governed. When assurance processes fail, there can be life-altering impacts on individuals. As many of you would recall, this was vividly and painfully illustrated in the 'Robodebt' scandal and the resulting Royal Commission.

The Royal Commission made several recommendations to improve governance and safeguards around the use of ADM in administrative decision-making. The Government has fully accepted those recommendations and work is well underway in the Attorney-General's Department to develop stronger safeguards.

Australia learnt many lessons from the Robodebt scandal. We heard that individuals were able to successfully challenge particular decisions. However, most individuals did not feel they were in a position to challenge the assessments they received.

Considerable harm was done to a large number of individuals before the system was brought to an end. The legal system was able to compensate individuals for what had happened.

A key focus for better governing ADM, including systems that use AI, is therefore to ensure that systems and processes are sufficiently robust, so that flaws in ADM design and implementation are identified and addressed before decisions are made that affect individuals. This could include ensuring that any use of ADM systems in administrative processes is consistent with the principles of administrative law.

Cybercrime and technology-facilitated abuse

Generative AI is being rapidly adopted by criminal actors in a range of contexts. For example, artificial intelligence is already being used to generate hyper-realistic deepfakes. These can be used as a tool for sexual exploitation, abuse and harassment online.

It is essential that the Australian Government keeps our laws under constant review, to ensure they remain fit-for-purpose in response to rapid changes in technology - such as the emergence of AI.

Earlier this year, the Attorney-General led legislative reform through the Criminal Code Amendment (Deepfake Sexual Material) Act 2024. The Act introduces new offences and strengthens the current criminal law framework, ensuring the non-consensual transmission of sexual material developed or altered by such technologies is criminalised and subject to significant penalties. The Act came into force in September 2024.

Partnership with the states and territories is also important, to ensure a cohesive national approach. In September, the Police Ministers Council agreed to a review of Commonwealth, state and territory frameworks. The review seeks to ensure they adequately address the issue of technology-facilitated abuse, including deepfakes.

In March 2024, the Joint Standing Committee on Electoral Matters commenced an inquiry into civics education, engagement and participation in Australia, following a referral from the Government. The inquiry is considering how governments and the community can prevent or limit inaccurate or false information influencing electoral outcomes, particularly with regard to AI, foreign interference, social media, and mis- and disinformation.

As AI technologies continue to evolve and transform, it is critical that Australia harnesses the opportunities arising from their uptake, to bolster our economic and social prosperity, while ensuring our legal frameworks remain fit for purpose and combat the misuse and abuse of AI for criminal purposes.

Conclusion

I started this speech talking about the excitement of the America's Cup. What it did to my hometown of Fremantle. The joy that win gave the nation.

I see that excitement again in the possibility of Artificial Intelligence. To unlock the potential of our people, wherever they live. Powered by a publicly owned National Broadband Network.

In 2024 we stand on the doorstep of the AI age and that door is opening.

The age of AI is now here. This is a time of great excitement, where the bounds of human creativity and imagination are being pushed. But it is also a time to stop and carefully consider the potential hazards and pitfalls as we move forward.

The Australian Government is working hard to ensure our legislative framework shields Australians from the potential harms of AI technologies.
