Oxford Wins ARIA Grant for AI Safety Research

University of Oxford researchers are leading two major projects as part of the UK Government's Advanced Research and Invention Agency (ARIA) Safeguarded AI programme. Backed by £59 million of funding, the programme aims to develop novel technical approaches to the safe deployment of AI.

As part of Technical Area 3 (TA3) of the programme, nine research teams across the UK will focus on developing mathematical and computational methods that provide quantitative safety guarantees for AI systems. This will help ensure that advanced AI can be deployed responsibly in safety-critical domains such as infrastructure, healthcare, and manufacturing. Two of these projects are led by researchers at the University of Oxford:

Towards Large-Scale Validation of Business Process Artificial Intelligence (BPAI)

Led by Professors Nobuko Yoshida and Dave Parker at Oxford's Department of Computer Science, this project will provide formal, quantitative guarantees for AI-based systems in Business Process Intelligence (BPI). Using probabilistic process models and the PRISM verification toolset, the team will develop a workflow to analyse automated BPI solutions and evaluate them against safety benchmarks. Senior Research Associate Dr Adrián Puerto Aubel and Research Associate Joseph Paulus will also contribute to the project, which will involve collaboration with industry to apply the methods in practical settings.
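To give a flavour of the kind of quantitative guarantee a probabilistic model checker such as PRISM can compute, the sketch below estimates the probability of eventually reaching a failure state in a small discrete-time Markov chain. The model, states, and numbers are entirely hypothetical illustrations, not taken from the Oxford project; PRISM itself uses its own modelling language and far more sophisticated algorithms.

```python
# Illustrative sketch only: computing a reachability probability for a
# toy Markov chain of a business-process step. All states and
# probabilities here are made up for illustration.
#
# States: 0 = start, 1 = retry, 2 = done (success), 3 = failed
P = {
    0: {1: 0.1, 2: 0.85, 3: 0.05},   # start: usually succeeds outright
    1: {1: 0.2, 2: 0.7, 3: 0.1},     # retry loop: may repeat, succeed, or fail
    2: {2: 1.0},                     # absorbing success state
    3: {3: 1.0},                     # absorbing failure state
}

def reach_probability(P, target, iters=1000):
    """Probability of eventually reaching `target`, by value iteration."""
    x = {s: (1.0 if s == target else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s == target else
                 sum(p * x[t] for t, p in P[s].items()))
             for s in P}
    return x

probs = reach_probability(P, target=3)
# A quantitative safety property such as "P(failure) <= 0.1" can then
# be checked against the computed value for the initial state:
print(round(probs[0], 4))  # → 0.0625
```

The printed value is the probability of ultimate failure from the start state; a benchmark of the kind described above would compare such computed probabilities against required safety thresholds.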

Professors Nobuko Yoshida and Dave Parker said: 'Through the Safeguarded AI programme, ARIA is creating space to explore rigorous, formal approaches to AI safety. Our project addresses the challenge of verifying AI-based business process systems using probabilistic models and automated analysis techniques. By developing scalable workflows and benchmarks, we aim to provide quantitative guarantees that support the safe deployment of these systems in real-world settings.'


From left to right: Professor Nobuko Yoshida, Professor Dave Parker (credit: John Cairns), Associate Professor Thomas Morstyn and Professor Jakob Foerster.

SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility

This project aims to develop an AI-based framework for scalable, adaptive coordination of a net-zero power grid in Great Britain, which will involve millions of additional grid-edge devices, including electric vehicles, heat pumps, and home and community batteries. Research shows that small changes in how these devices are used could free up as much energy as several large power plants, but managing them all would be too complex for traditional, centralised control systems.

The SAGEflex project will explore an AI-based approach called multi-agent reinforcement learning (MARL), which can help devices make smart, local decisions while working together. However, despite successes in other domains, and significant work by the power systems research community, the lack of safety guarantees has prevented industrial adoption of MARL by system operators and flexibility aggregators.

A team of researchers at Oxford's Department of Engineering Science will address this by developing rigorous safety specifications, a curriculum of test problems, and a software platform supporting the design, benchmarking and scaling up of MARL solutions.

Project Lead Thomas Morstyn, Associate Professor in Power Systems, said: 'Our project was motivated by the lack of rigorous approaches to AI safeguarding for power system applications, which we identified as the fundamental gap for industry adoption. Our ambition is to give power system operators confidence in adopting safeguarded MARL, unlocking large amounts of clean low-cost flexibility for Great Britain's power grid. This will accelerate decarbonisation and help lower customer energy bills.'

The project brings together Professor Morstyn, with expertise in power system coordination, and Professor Jakob Foerster, who brings methodological expertise in AI and multi-agent reinforcement learning. Professor Morstyn leads the Power Systems Architecture Lab (PSAL) and Professor Foerster leads the Foerster Lab for AI Research (FLAIR) at the Department of Engineering Science, University of Oxford.
