Media, creative, and arts workers are in Canberra today to warn Parliament that the burgeoning use of Artificial Intelligence (AI) will erode creative sector jobs and undermine public trust in the media.
They will demand that the Federal Government enact laws to regulate the rollout of AI in the creative and media industries, amid mounting concern that the work of Australian creatives and journalists is being systematically scraped to train AI without their knowledge, consent, or compensation.
Appearing at the Senate Select Committee hearing on Adopting Artificial Intelligence, Media, Entertainment & Arts Alliance (MEAA) members will urge the government to introduce laws requiring disclosure of the data used to train AI and enshrining creators' right to consent to, and be paid for, the use of their work for such purposes.
Reforms and regulatory change must also stop attacks on First Nations culture and creative practice by fixing gaps in copyright and intellectual property law.
MEAA Chief Executive Erin Madeley said the union's Stop AI Theft campaign was a response to artists', creators', and journalists' fears about the use of AI technologies to steal their work and devalue their professions.
"Artificial Intelligence presents the most profound change in the relationship between work and production since the advent of the Internet," Ms Madeley said.
"What we're seeing is the biggest corporate swindle in history.
"It is theft, plain and simple - theft of people's voices, their faces, their music, their stories and art.
"For the big Silicon Valley tech companies that own these machines, their business model is built on taking others' work and selling it as their own and what we've seen so far is the thin end of the wedge.
"If left unchecked, the increased use of AI tools in the media, arts, and creative industries will lead to mass job losses and the end of intellectual property as we know it.
"It will also drive the erosion of our news and information to the point where the community cannot tell fact from fiction."
Ms Madeley said MEAA members held grave concerns that policymakers had been slow to respond.
"We are behind the eight ball on AI and the companies profiting from other people's work are taking advantage of that right now," she said.
"We are calling on the government to take urgent action to protect their hard work and livelihoods from AI theft."
The Stop AI Theft campaign launch follows a recent survey by MEAA that revealed:
- 75% of creative professionals are concerned about theft of intellectual or creative work.
- 70% are concerned about the proliferation of deliberately harmful content.
- 66% are concerned about the loss of human-led creativity.
- 59% are concerned about AI-related job losses.
The union is calling for new legislation and greater government oversight of AI to boost transparency and accountability, updates to workplace laws to ensure workers are consulted on the intended use of AI in the workplace, and a tax on businesses that replace human workers with AI tools.
AI has already been at the centre of numerous allegations of copyright theft in Australia, including accusations by a group of voice artists that their voices were cloned and used by AI without their consent.
The issue is equally live in the media, with recent revelations that journalists from a regional news organisation had their work plagiarised by a network of AI websites. MEAA journalist members have also raised concerns that the use of generative AI in newsroom editorial and production could undermine their ability to comply with the MEAA Journalist Code of Ethics.
"Since the beginning of human history, technological change has influenced how we engage in artistic and cultural expression and how we tell stories and report current affairs," Ms Madeley said.
"While this has always been the case, we must be conscious that these creative processes have always required imagination and human technical skill at their centre.
"What would the world be like without the creative workers that tell our stories, inform our communities, and hold our institutions accountable? Unless we deal with the very real risks that AI poses, we could be about to find out."