Oxford to Lead AI Security in New National Lab Partnership

The University of Oxford, in collaboration with The Alan Turing Institute and the UK Government, will play a lead role in the newly announced Laboratory for AI Security Research (LASR). The £8.22 million government-funded initiative marks a significant step forward in strengthening Britain's cyber resilience.

Announced at the recent NATO Cyber Defence Conference, LASR will bring together industry, academic, and government experts to boost the country's cyber resilience and support growth. Leading researchers from Oxford University's Mathematical, Physical and Life Sciences (MPLS) Division and Global Cyber Security Capacity Centre will work alongside partner institutions, government bodies, and commercial stakeholders.

By bringing together diverse expertise and perspectives, LASR will take a comprehensive, strategic approach to addressing the complex security challenges emerging in AI technologies.

Professor James Naismith, Head of the MPLS Division, said: 'The Laboratory for AI Security Research is an exciting and important collaboration between government and Oxford University scientists across the MPLS Division. We will work together to develop a rigorous understanding of the security and reliability of emerging AI systems. As these systems grow in power and utility, this work is extremely timely.'

Oxford will advance LASR with a multidisciplinary approach, drawing on expertise across the University's research ecosystem. Five departments from the MPLS Division will support an initial cohort of ten doctoral students conducting fundamental and applied research into AI and machine learning security. The Global Cyber Security Capacity Centre will conduct investigations into emerging system risks, with a particular focus on AI supply chains and national cybersecurity preparedness.

'I'm delighted to help LASR build a team of exceptional DPhil students who will conduct research in areas critical to our future,' said Professor David De Roure, Oxford's LASR Academic Director. 'We're building on extensive prior work in cybersecurity and AI across the institution and look forward to providing a collaborative research centre in Oxford which will address emerging challenges and opportunities.'

The laboratory will employ a catalytic funding model, with the initial government investment expected to attract substantial industry participation.

The LASR partnership approach reflects Oxford's tradition of combining academic excellence with practical innovation, while strengthening ties between Britain's leading research centres.

Professor Sadie Creese, Director of the Global Cyber Security Capacity Centre, said: 'The Laboratory for AI Security Research is a crucial initiative at a time when understanding the interplay between AI and cybersecurity is more important than ever. At Oxford, we bring deep expertise and a history of pioneering frameworks. Through the Global Cyber Security Capacity Centre, we are researching topics related to emerging system risks around AI supply chains and developing insights into national AI cybersecurity readiness. Together, we aim to shape a future where AI technologies can be both transformative and secure.'

The Laboratory for AI Security Research (LASR) is a partnership between the University of Oxford, the Foreign, Commonwealth & Development Office (FCDO), Department for Science, Innovation & Technology (DSIT), National Cyber Security Centre (NCSC), Government Communications Headquarters (GCHQ), Plexal, The Alan Turing Institute, and Queen's University Belfast. The University of Oxford Departments that are involved are the Department of Computer Science, the Department of Earth Sciences, the Mathematical Institute, the Department of Physics, and the Department of Statistics.
