The Artificial Intelligence Safety Institute (AISI) will advance the world's knowledge of AI safety by carefully examining, evaluating, and testing frontier AI models and systems. It will conduct fundamental research on how to keep society safe in the face of fast and unpredictable progress in AI. The Institute will make its work available to the world, enabling an effective global response to the opportunities and risks of advanced AI.
The Institute is the first state-backed organisation focused on advanced AI safety for the public interest. Its mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI. It will work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.
This mission stems from our conviction that governments have a key role to play in providing publicly accountable evaluations of AI systems and supporting research. Governments will only be able to develop effective policy and regulatory responses to AI if they understand the technology better than they do today. By building a body of evidence on the risks from advanced AI, the Institute will lay the foundations for technically grounded international governance.
Cyber Misuse Evaluations Lead
Build and lead a team employing a range of techniques (threat modelling, cyber ranges, etc.) to test the cyber capabilities of frontier systems, including studying potential uplift to novice cyber actors. Apply at Civil Service Jobs.
Loss of Control Evaluations Lead
Build and lead a team focused on evaluating capabilities that are precursors to extreme harms from loss of control, with a current focus on autonomous replication and adaptation, and uncontrolled self-improvement. Apply at Civil Service Jobs.
Safeguard Analysis Lead
Build and lead a team at the intersection of ML and security to understand how effectively the safety and security components of frontier AI systems stand up to a range of threats. Apply at Civil Service Jobs.
Research Engineer
Develop the tools and methods for testing AI systems and pushing the frontier of understanding and mitigations. Research Engineers will be embedded across all research teams at AISI. Apply at Civil Service Jobs.
Research Scientist
Lead and contribute to projects designed to be integrated into our evaluation suite, evaluating frontier model capabilities and safeguards, as well as more speculative work aimed at mitigations and system understanding. Apply at Civil Service Jobs.
Chief Information Security Officer
Build and lead a team to strengthen AISI's cyber resilience and forge key partnerships with top AI firms and other government departments. Apply at Civil Service Jobs.
Software Engineer
Create expert interfaces and highly accessible red-teaming frontends, providing fast and secure inference channels for internal and external models, as well as ML ops capabilities for hosting and fine-tuning our own models. Apply at Civil Service Jobs.
Head of Engineering
Lead our Platform Team of software engineers and research engineers. Apply at Civil Service Jobs.
Frontend Developer
Blend web development, UX engineering and data visualisation to provide inference channels, facilitate hosting our own models, and create expert interfaces for evals development. Apply at Civil Service Jobs.
UX Engineer
Implement user interfaces that form part of our cutting-edge evaluations platform. Apply at Civil Service Jobs.
Expression of Interest
If you're interested in joining AISI but aren't sure where you fit in, submit an Expression of Interest here.