Oxford Proposes AI Impact Study on Youth Mental Health

University of Oxford

A new peer-reviewed paper from experts at the Oxford Internet Institute, University of Oxford, highlights the need for a clear framework when it comes to AI research, given the rapid adoption of artificial intelligence by children and adolescents using digital devices to access the internet and social media.

Its recommendations are based on a critical appraisal of current shortcomings in the research on how digital technologies impact young people's mental health, and an in-depth analysis of the challenges underlying those shortcomings.

The paper, "From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth," published 21 January in The Lancet Child & Adolescent Health, calls for a "critical re-evaluation" of how we study the impact of internet-based technologies on young people's mental health, and outlines where future AI research can learn from several pitfalls of social media research. Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.

The analysis and recommendations by the Oxford researchers are divided into four sections:  

  • A brief review of recent research on the effects of technology on children's and adolescents' mental health, highlighting key limitations to the evidence. 
  • An analysis of the challenges in the design and interpretation of research that they believe underlie these limitations. 
  • Proposals for improving research methods to address these challenges, with a focus on how they can apply to the study of AI and children's wellbeing. 
  • Concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people. 

"Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research," said Dr Karen Mansfield, postdoctoral researcher at the OII and lead author of the paper. "Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media." 

The paper describes how the impact of social media is often interpreted as a single, isolated causal factor, an approach that neglects different types of social media use, as well as contextual factors that influence both technology use and mental health. Without rethinking this approach, future research on AI risks getting caught up in a new media panic, as social media research did. Other challenges include measures of social media use that quickly become outdated, and data that frequently excludes the most vulnerable young people.

The authors propose that effective research on AI will ask questions that don't implicitly problematise AI, ensure causal designs, and prioritise the most relevant exposures and outcomes. 

The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up. However, by ensuring that our approach to investigating the impact of AI on young people reflects the lessons of past research's shortcomings, we can more effectively regulate how AI is integrated into online platforms, and how it is used.

"We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way," said Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and contributing author to the paper. "Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents." 
