An Imperial spinout has raised £4.15m in seed funding for software designed to ensure that safety- and business-critical AI systems perform as intended.
The platform, offered by spinout Safe Intelligence, validates the performance of AI models across a wide range of scenarios and makes them more reliable.
It uses verification techniques developed in Imperial's Department of Computing to validate how models will perform under a broader set of conditions than standard testing methods can. It can also automatically improve AI systems to make them more robust to unexpected inputs that could otherwise disrupt their performance.
The company is working with partners in a range of industry sectors to help overcome a key barrier to the adoption of AI: insufficient trust that models will perform as intended in applications where errors could be very expensive or threaten human safety. Such applications include AI algorithms that could be used by financial services companies to make lending decisions, and systems under development in the aviation sector to control autonomous cargo aircraft.
Dr Manjari Chandran-Ramesh of Amadeus Capital Partners, which led the seed round, said: "Banks, insurers and other corporates using complex AI models internally are holding back from applying them to frontline, customer-facing or regulated activity because of fears that their models are not robust enough. Safe Intelligence can identify fragilities, tackle them, and unleash the power of AI across industries from transport to finance."
The core of the problem is that to trust an AI system, one needs to be confident that it will respond reliably to scenarios it has not encountered before. This is inherently hard to verify, and the problem is compounded by the fact that more sophisticated AI models tend to be more fragile – prone to critical changes in behaviour in response to even very small changes in input.
Rather than testing how models respond to individual input perturbations – for example, how an AI autopilot will land an aircraft under the lighting conditions created by one specific sunset – Safe Intelligence's platform efficiently tests how they respond to whole libraries of input perturbations – for example, the full range of illumination changes a sunset could create.
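Safe Intelligence's own methods are not detailed here, but the general idea of reasoning about a whole set of inputs at once can be sketched with interval bound propagation, a standard neural-network verification technique. The tiny network, weights and perturbation radius below are invented purely for illustration: instead of checking individual perturbed inputs one at a time, the interval pass computes output bounds that are guaranteed to hold for every input within the perturbation set.

```python
import numpy as np

# Toy two-layer ReLU network; weights are illustrative only.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def forward(x):
    """Ordinary point evaluation: checks one specific input."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def affine_interval(W, b, lo, hi):
    """Bound W @ x + b over all x with lo <= x <= hi (elementwise)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_forward(lo, hi):
    """Propagate an input box through the network, yielding output
    bounds valid for EVERY input in the box at once."""
    l1, u1 = affine_interval(W1, b1, lo, hi)
    # ReLU is monotone, so applying it to the bounds stays sound.
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)
    return affine_interval(W2, b2, l1, u1)

# A nominal input and a perturbation radius to certify (both invented).
x0 = np.array([1.0, 0.5])
eps = 0.1
lo_out, hi_out = interval_forward(x0 - eps, x0 + eps)
# lo_out/hi_out bound the output for all inputs within eps of x0,
# whereas forward(x0 + noise) only ever tests one perturbation.
```

Point testing samples from the perturbation set; the interval pass covers it exhaustively, at the cost of some looseness in the bounds. More sophisticated verifiers tighten these bounds, but the contrast with one-sample-at-a-time testing is the same.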
The technology builds on decades of research by founder Alessio Lomuscio, Professor of Safe Artificial Intelligence in Imperial's Department of Computing, who launched the spinout in 2021. Professor Lomuscio, now serving as Chief Technology Officer, has been joined by industry AI expert Dr Steven Willmott as the company's CEO.
"Our mission is to provide tools to radically improve our ability to validate machine learned components and get back to a world where we can have high confidence in our systems," said Dr Willmott.
The lead investor, Amadeus Capital Partners, is joined by OTB Ventures and Vsquared Ventures, who are helping Safe Intelligence bring its technology to industry partners. The company is currently offering its platform to customers via an early user programme.
Dr Simon Hepworth, Co-Director of Enterprise (Commercialisation) at Imperial College London, said: "With AI set to transform society, ensuring its safety and robustness is one of the most urgent challenges we face. By drawing on the academic expertise of Professor Lomuscio and the support of seasoned entrepreneurs and investors, Safe Intelligence is set to make an important contribution to this challenge.
"At Imperial, we're helping academics create commercial solutions through initiatives such as I-X and our forthcoming Schools of Convergence Science, and the regular backing of investors is a strong vote of confidence in Imperial's people and technologies."