It's an enormous pleasure to be with you and I'm very grateful to be back at RUSI.
It was here that I gave my first foreign policy speech when I took over the Chairmanship of the Foreign Affairs Committee.
I know RUSI's vision has always been to inform, influence and enhance public debate to help build a safer and more stable world.
The mission has endured for 200 or so years now. The mission has not changed but the medium has.
Today the range of challenges we face has never been greater.
So it's right that here, at the home of strategic thinking, we're gathering to build on the foundations of those who shaped our security in the generations before us, to make sure that security endures for the generations to come.
So a profound thanks to our hosts, and also to you all, for being here on the eve of the first major global summit on AI security.
As with the summit itself, we have representatives here from government, from industry, from civil society, academia, and law enforcement.
Whatever your profession, whatever sector you represent, you are here because we need you.
Because we need each other.
Like so many areas of my responsibility, the government cannot do this alone.
Our role in government is to understand the threats that we face and target resources, helping others to come together and meet our challenges in the most effective way possible.
You can tell a lot about a government from the operating system they build for society.
Some countries build systems that are designed to control.
Others build systems designed to exploit.
Here in the UK we build systems that are designed to liberate.
To free individual aspiration and creativity for the benefit of all.
And that's what security means to me.
It's not a means of closing things down.
It's about creating the conditions required to open up a society.
A safe environment in which ideas can take root, and opportunity is available to all.
That's why we need to get this right.
Because technology as transformative as AI will touch every part of our society.
If we succeed, hardworking families up and down the country will reap the benefits.
If we don't, we will all pay the price.
The stakes are very high, but coming together in this way today sends the right message.
There are two core themes for the programme today. They come from different eras.
The first is fraud, which, in its various guises, is as old as crime itself.
When Jacob stole Esau's inheritance by passing himself off as his brother, that was perhaps the first description of fraud in the Bible.
The earliest record of fraud is possibly older still: it comes from a case involving copper ingots, recorded some 4,000 years ago in Babylon.
The last time I spoke about Babylon at RUSI I was in uniform, describing how I had served with one of the many armies to have camped under its walls.
The challenges posed by Artificial Intelligence are comparatively new.
Its democratisation will bring about astonishing opportunities for us all.
Sadly that includes criminals.
We know that bad actors are quick to adopt new technologies.
Unchecked, AI has the power to bring about a new age of crime.
Already we're seeing large language models being marketed for nefarious purposes.
One chatbot being sold on the dark web - FraudGPT - claims to be able to draft realistic phishing emails:
mimicking the format used by your bank, and even suggesting the best place to insert malicious links.
That doesn't just have implications for the realism of scams.
It has huge implications for their scale as well.
I don't want to be in a situation where individuals can leverage similar technologies to pull off sophisticated scams at the scale of organised criminal gangs.
We don't want to find that the Artful Dodger has coded himself up into Al Capone.
At a fundamental level, fraudsters try to erase the boundary between what's real and what's fake.
Until relatively recently, that was a theoretical risk.
It wasn't so long ago that I believed I was immune to being fooled online.
That is, until I saw a viral picture of the Pope in a coat.
Not just any coat.
A fashionable puffer jacket that wouldn't look out of place on the runway in Paris.
One that my wife assured me was 'on trend'.
I quickly forgot about it.
That is, until I learned that that image wasn't actually of the Pope at all.
It was created on Midjourney. Using AI.
On the one hand it was a harmless gag, Pope Francis had never looked better.
On the other hand, it left me deeply uneasy.
If someone so instantly recognisable as the Holy Father could be wholly faked, what about the rest of us?
The recent Slovakian elections showed us how this could work in practice.
Deepfake audio was released in the run up to polling day.
It purported to show a prominent politician discussing how to rig the vote.
The clip was heard by hundreds of thousands of individuals.
Who knows how many votes it changed - or how many were convinced not to vote at all.
This is of course an example of a very specific type of fraud.
But all fraudsters blur the boundary between fact and fiction.
They warp the nature of reality.
It does not take a massive leap of imagination to see the implications of that in the fraud space.
Thankfully, relatively few AI-powered scams have come to light so far.
However, the ones that have highlight the potential of AI to be used by criminals to defraud people of their hard-earned cash.
The risks to citizens, businesses and our collective security are clear.
A few lines of code can act like Miracle-Gro on crime, and the global cost of fraud is already estimated to be in the trillions.
In the United Kingdom, fraud accounts for around 40% of all estimated crime.
There's an overlap with organised crime, terrorism and hostile activity from foreign states.
It is in a very real sense a threat to our national security.
But while there is undoubtedly a need to be proactive and vigilant, we need not despair.
And the wealth of talent, insight and expertise I see in front of me here gives me hope.
For the Government's part, we are stepping up our counter-fraud efforts through the comprehensive strategy we published this summer and the work of my friend Anthony Browne, the Anti-Fraud Champion.
Fraud is a growing, transnational threat, and has become a key component of organised criminality and harm in our communities. So international co-operation is essential.
That's why the UK will host a summit in London next March to agree a co-ordinated action plan to reform the global system and respond to this growing threat.
We expect Ministers, law enforcement and intelligence agencies to attend from around the world.
The Online Safety Act, which has completed its passage through Parliament, will require social media and search engine companies to take robust, proactive action to ensure users are not exposed to user-generated fraud or fraudulent advertising on their platforms.
And we are working on an Online Fraud Charter with industry that includes innovative ways for the public and private sector to work together to protect the public, reduce fraud and support victims.
This will build on the charters that are already agreed with the accountancy, banking, and telecommunications sectors to combat fraud, which have already contributed to a significant reduction in scam texts and a 13% fall in reported fraud in the last year.
New technologies don't just bring about risk.
They create huge opportunities too.
AI is no different.
We know that the possibilities are vast, endless even.
What's more, it's essential.
As the world grows more complex, only advanced intelligence systems can meet the task before us.
We need the AI revolution to deliver services and supply chains in an ever more globalised world.
I'm particularly interested in the question of how we can harness this new power in the public safety arena.
As we will hear shortly, AI is already driving complex approaches to manage risk, protect from harm and fight criminality.
There is a real-world benefit in combating fraud and scams, such as payment processing software that is stopping millions of scam texts from reaching potential victims.
No doubt I've barely scratched the surface, and there's lots more excellent work going on.
What we absolutely have to do is break down any barriers that might exist between the different groups represented here this evening.
The only people who benefit from a misaligned, inconsistent approach are criminals, so it's critical that we work hand in glove, across sectors and borders.
I want to come back to the point I started on.
For me, AI and the security it enables are an essential part of the State's responsibility to keep us all safe.
It's not to increase our control.
Not to keep people in a box.
But to set people free.
We cannot eliminate risk, but we can understand it.
Using AI to map and measure today's environment will ensure we do that.
The pursuit of progress is essential to human experience.
And the reality is that even if we wanted to, we cannot put the genie back in the bottle.
That does not mean, though, that we simply sit back and wait and see what happens.
We can't be passive in the face of this threat.
So what I want us to be thinking about is how we move forward.
Well, the way I see it, there are three key questions that align with the aims of the AI Safety Summit:
- The first, how do we build safe AI models that are resilient to criminal intent?
- Second, as the vast majority of fraud starts online, how do we harness AI to ensure that harmful content is quickly identified and removed?
- And lastly, what do governments need to be doing globally to balance progress and growth with safety and security?
That's far from an exhaustive list.
But I think by addressing these core questions we can put ourselves on the right path.
So, thank you once again for being here, and thank you to RUSI for hosting us; I hope you will find this a valuable exercise.
And most of all I hope we can look back and say that today was a day when we took important steps forward in our shared mission to reduce the risks and seize the opportunities associated with AI. I remain hugely optimistic, but that optimism depends on the work we do together today.