A rush by Australian companies to use generative artificial intelligence (AI) is escalating privacy and security risks for the public as well as for staff, customers and stakeholders, according to a new study.
The University of the Sunshine Coast research, published in a paper in Springer Nature journal AI and Ethics on the weekend, warns that rapid AI take-up is leaving companies open to wide-ranging consequences.
These include mass data breaches that expose third-party information, and business failures caused by manipulated or "poisoned" AI models, whether the manipulation is accidental or deliberate.
The study includes a five-point checklist to help businesses implement AI solutions ethically.
'Organisations caught in the hype can leave themselves vulnerable by either over-relying on or over-trusting AI systems' - Dr Declan Humphreys
UniSC Lecturer in Cyber Security Dr Declan Humphreys said the corporate race to adopt generative AI tools such as ChatGPT and Google's Gemini (formerly Bard) was fraught with not just technical but also moral issues.
Generative AI applications turn large amounts of real-world data into content that appears to be created by humans. ChatGPT is an example of a language-based generative AI application.
"The research shows it's not just tech firms rushing to integrate AI into their everyday work - there are call centres, supply chain operators, investment funds, and companies in sales, new product development and human resource management," Dr Humphreys said.
"While there is a lot of talk around the threat of AI for jobs, or the risk of bias, few companies are considering the cyber security risks.
"Organisations caught in the hype can leave themselves vulnerable by either over-relying on or over-trusting AI systems."
The paper was co-authored by UniSC experts in cyber security, computer science and AI, including Dr Dennis Desmond, Dr Abigail Koay and Dr Erica Mealy.
It found that many companies were building their own AI models or using third-party providers without considering the potential for hacking.