DASA Invests in AI Assurance for Future Enhancement

Advai, an AI-focussed SME, is leading the way in military and commercial AI safety.

How DASA and Dstl funding helped Advai become an AI Safety Leader

  • Investment from DASA and Dstl helped Advai build the UK's first dedicated AI assurance capability before the generative AI boom
  • Advai's solutions range from physical patches that degrade AI computer vision systems to a system that can automatically retrain AI models in the field
  • Advai has developed into a UK leader in military and commercial AI safety, influencing national policy and standards

When AI assurance saves lives

Artificial Intelligence (AI) is revolutionising modern life, and with the boom of large language models (LLMs) and generative AI, its impact on defence and security grows by the day. Yet as militaries worldwide rush to implement AI systems, an equally crucial challenge emerges: how to protect them.

Understanding the challenge

Advai's CEO David Sully, who brought public sector experience to this critical challenge, explains:

Everyone was talking about AI unlocking value, but nobody was asking what happens when AI goes wrong and why it does so.

AI systems need rigorous examination to understand exactly when and how they might fail. This insight led to Advai's simple but powerful mission: "We don't make AI - we break it."

Advai's early vision: Adversarial AI

Beginning in 2020, DASA recognised the strategic importance of AI assurance and funded Advai through multiple innovative projects, starting with the development of Adversarial AI attack and defence methods for Computer Vision and Natural Language Processing (NLP).

What exactly is Adversarial AI? Adversarial AI involves finding ways to make AI systems fail or produce incorrect outputs, essentially "breaking" them to understand their vulnerabilities. It is similar to testing safety equipment: before trusting it, you need to know exactly under what conditions it might fail.
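
As an illustration of what "breaking" an AI system can look like in practice, below is a minimal sketch of one classic adversarial technique, the Fast Gradient Sign Method (FGSM), in Python with PyTorch. The model, input and perturbation budget are illustrative assumptions; this is not Advai's tooling.

```python
# A minimal FGSM sketch, assuming a stock torchvision classifier.
# Illustrative only, not Advai's tooling.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Nudge every pixel by at most `epsilon` in the direction that
    increases the classifier's loss, making a misprediction more likely."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a stand-in image tensor (1 x 3 x 224 x 224, values in [0, 1]).
x = torch.rand(1, 3, 224, 224)
x_adv = fgsm_attack(x, true_label=0)
print("before:", model(x).argmax().item(), "after:", model(x_adv).argmax().item())
```

The key point is that the perturbation is bounded, so the altered image can look unchanged to a human while still shifting the model's prediction.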

Advai's initial projects aimed to develop methods of confusing AI in ways that go undetected by humans. Such tools are invaluable for identifying weaknesses in any AI system.

Adversarial AI in defence: Physical patches

Building on this expertise, Advai embarked on another project alongside the Defence Science and Technology Laboratory (Dstl) to develop physical adversarial patches that manipulate computer vision systems. This technology uses printable patterns capable of disrupting AI recognition systems. David Sully explains:

We can apply a filter so an object is labelled as something completely different, or disappears entirely. An automated AI-based drone might read a van as a tree or fail to detect a vehicle entirely.

Advai is now completing a second phase of this work, directly with Dstl, to significantly advance the concept of adversarial patterns. Notably, says Sully:

We can create an adversarial texture on a 'black-box' as well as a 'white-box' basis. Additionally, the textures can be optimised to be visually similar to existing patterns, avoiding the problem of creating visually jarring patches.
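
Neither Dstl's nor Advai's actual methods are public, but the general shape of white-box patch optimisation can be sketched: treat the patch pixels as trainable parameters, paste the patch into training images, and optimise it to mislead a frozen model while a similarity penalty keeps it close to an existing reference texture. The model, losses, weighting and fixed placement below are all illustrative assumptions.

```python
# A hedged sketch of white-box adversarial patch optimisation.
# Model, losses, weighting and patch placement are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # the attacker trains the patch, not the model

reference = torch.rand(3, 64, 64)               # existing texture to resemble
patch = reference.clone().requires_grad_(True)  # trainable patch pixels
opt = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste the patch into a fixed corner; a physical attack would also
    randomise position, scale and lighting for real-world robustness."""
    patched = images.clone()
    patched[:, :, :64, :64] = patch.clamp(0, 1)
    return patched

target_class = 0  # hypothetical label the patch should force
for step in range(200):
    images = torch.rand(8, 3, 224, 224)  # stand-in for training photos
    logits = model(apply_patch(images, patch))
    attack_loss = F.cross_entropy(
        logits, torch.full((8,), target_class, dtype=torch.long))
    # Penalise drift from the reference so the patch stays visually similar.
    similarity_loss = F.mse_loss(patch, reference)
    loss = attack_loss + 10.0 * similarity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A black-box variant cannot take gradients through the model, so it would instead estimate from repeated queries how the model's output changes as the patch changes.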

Ahead of the curve

When generative AI and large language models exploded onto the scene in 2022, Advai was already deeply experienced in AI assurance and had a head start in understanding how these systems work. Their early work provided crucial insights that transferred directly to new challenges in language model security. Advai's leadership notes:

No one saw the generative AI explosion coming, but our focus on AI robustness gave us a huge advantage in understanding how to manage and assure these systems.

Commercial impact

From its defence origins, Advai has expanded to serve commercial customers needing to ensure their AI systems are trustworthy and secure. Some of their tools and achievements include:

  • Independent verification and benchmarking
  • Live monitoring systems for AI vulnerability detection
  • Automated stress testing procedures (a generic sketch follows this list)
  • Protection against private information extraction
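
To give a flavour of the automated stress testing mentioned above, here is a minimal, generic sketch: sweep graded corruptions over a model and record where accuracy degrades. The corruptions, model and data are stand-ins, not Advai's procedures.

```python
# A generic stress-testing sketch: measure accuracy as corruption severity
# grows. Model, corruptions and data are stand-ins, not Advai's procedures.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def accuracy(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

def stress_test(images: torch.Tensor, labels: torch.Tensor) -> dict:
    """Report accuracy under increasing Gaussian noise and blur."""
    report = {}
    for sigma in (0.0, 0.05, 0.1, 0.2):
        noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
        report[f"noise sigma={sigma}"] = accuracy(noisy, labels)
    for kernel in (3, 7, 11):
        blurred = TF.gaussian_blur(images, kernel_size=kernel)
        report[f"blur kernel={kernel}"] = accuracy(blurred, labels)
    return report

# Usage with stand-in data; a real audit would use a held-out labelled set.
images = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 1000, (16,))
for condition, acc in stress_test(images, labels).items():
    print(condition, acc)
```

A harness like this can run automatically on every new model version, flagging any condition where accuracy falls below an agreed threshold.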

The company's work has influenced national policy and contributed to the Turing Institute's framework for AI security, in turn helping to raise political awareness about AI safety. Advai also acted as an examiner for the Defence Cyber Marvel 2024 competition, organised by the Army Cyber Association.

The future of AI safety and DASA's crucial role

Today, Advai stands at the forefront of AI assurance, planning to strengthen their defence sector credibility while promoting a "safety-first, not safety-last" approach. Their roadmap includes greater commercialisation using their scalable platform. But this evolution comes with challenges. Advai CEO David Sully emphasises:

Most of the world's leading AI research is happening in the private sector behind closed doors rather than in universities. For AI assurance to have a chance of keeping up, companies like Advai need support from stakeholders like DASA to help ensure the UK has a domestic capability in AI safety.

Advai is a demonstration of what is achievable by DASA. We have created a genuinely world-leading AI company, working across UK defence and security. As we expand, Advai is increasingly enabling and protecting critical commercial companies. Our ambition is for Advai to be the biggest player in AI Assurance, generating the most advanced IP and technology as a sovereign UK entity, all of which can be traced back to this initial funding and support.

Advai's adversarial AI expertise was highlighted when they demonstrated their technology to the Secretary of State for Defence, John Healey, and the Chancellor, Rachel Reeves, during a visit to Wellington Barracks, Westminster, on 26 March 2025.

The Secretary of State for Defence John Healey (left), Chief of the Defence Staff Admiral Tony Radakin (centre) and the Chancellor of the Exchequer Rachel Reeves (right), visit Wellington Barracks in London.

The road ahead

As AI technology continues to evolve, so do its potential vulnerabilities. The problems and adversaries keep changing, requiring AI safety to evolve just as quickly. Advai's journey from research to commercial success demonstrates how early government investment in critical technologies can create lasting national capabilities. Their story shows that in the race to develop artificial intelligence, ensuring its trustworthiness and security is just as important as advancing its capabilities. Sully concludes:

The world is still coming to terms with generative AI and LLMs, let alone generative AI assurance. But thanks to DASA's early vision, we're ready to meet these challenges and ensure that as AI becomes more powerful, it also becomes more trustworthy.
