Why the Use of GenAI in Higher Education Is a Cautionary Tale

Research from Dr Armin Alimardani shows greater vigilance is needed when incorporating GenAI into assessments

New research into the use of Generative AI (GenAI) among students studying law at university has found that guidelines and training are essential, but are not enough to ensure the responsible use of AI.

Dr Armin Alimardani, from the University of Wollongong's (UOW) School of Law, authored the empirical study, published this week (Wednesday 26 February) in IEEE Transactions on Technology and Society, which revealed both the promise and pitfalls of using GenAI in student assessments.

As part of an elective subject, Law and Emerging Technologies, Dr Alimardani examined how law students engaged with GenAI in their assignments.

Students were tasked with preparing a government policy submission on the ethical and legal implications of autonomous vehicles.

They had the freedom to use GenAI, provided they critically reflected on their usage and substantiated AI-generated content with credible sources.

The students received explicit training in responsible AI use, yet Dr Alimardani said the results of the study were revealing and highlighted the challenges of including AI in the learning process.

"Many students successfully leveraged AI to refine arguments, distill complex information, and enhance engagement. However, a considerable number of students disregarded instructions on responsible AI use," he said.

"Some included entirely fabricated academic sources, while others misrepresented legitimate sources, citing papers that did not contain the claims they were purported to support. This phenomenon, known as AI 'hallucination', is a well-documented issue with large language models, but its impact in education is particularly concerning."

Dr Alimardani said if law students develop a habit of relying on unverified AI outputs, the consequences could extend far beyond the classroom.

"This is not merely a hypothetical concern—there have already been real-world cases where lawyers have cited non-existent case law or misinterpreted legal precedents due to AI-generated misinformation, resulting in professional embarrassment and even disciplinary action. In response, the Supreme Court of NSW has issued a practice note to promote the responsible use of GenAI in legal proceedings," he said.

However, the study's findings suggest that while such guidelines and training are essential, they may not be sufficient to ensure accuracy and ethical AI use in practice. Dr Alimardani attributes part of the issue to what he calls 'verification drift'. This phenomenon occurs when GenAI users are aware of the technology's limitations and understand the need to verify AI-generated content. However, as they review the material, the authoritative tone and polished presentation of GenAI gradually lead them to perceive it as reliable, ultimately making verification seem unnecessary.

While many academics have already incorporated GenAI into their assessments, Dr Alimardani urged them to exercise greater vigilance. For some student submissions, it took him hours to identify content that was not credible or citations that misrepresented their sources.

"I suspect that many other educators may have unknowingly overlooked instances of seemingly plausible content that were, in fact, hallucinations," he said.

"I don't believe students should bear the blame. Even experienced lawyers who are aware of GenAI's tendency to hallucinate have repeatedly submitted fabricated materials to the court. This demonstrates that simple guidelines are insufficient.

"Educators need to go beyond just providing instructions on AI usage; they should actively engage students in responsible GenAI practices, provide ongoing feedback, and, most importantly, showcase real examples of situations where AI-generated content has been misleading or completely fabricated."

About the research

'Borderline Disaster: An Empirical Study on Student Usage of GenAI in a Law Assignment', by Dr Armin Alimardani, was published in IEEE Transactions on Technology and Society: https://ieeexplore.ieee.org/document/10903146
