EPFL researchers have developed a groundbreaking new tool to help build safer AI.
Today, almost everybody has heard of AI, and millions around the world already use it, or are exposed to it - from ChatGPT writing our emails to algorithms helping with medical diagnosis.
At its base, AI uses algorithms - sets of mathematically rigorous instructions - that tell a computer how to perform a variety of advanced functions or transform facts into useful information. The large language models (LLMs) that drive today's increasingly powerful AI are special kinds of algorithms that learn from massive, mostly centralized datasets.
Yet centralizing these huge datasets raises issues around security, privacy and the ownership of data - indeed, the phrase 'data is the new oil' signals that data has become a crucial resource, driving innovation and growth in today's digital economy.
To counter these concerns, an approach called federated learning is now revolutionizing AI. Instead of training AI models on huge, centralized datasets, federated learning lets these models learn across a network of decentralized devices (or servers), keeping the raw data at its source.
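The idea can be illustrated with a minimal sketch in Python/NumPy, assuming a toy linear-regression model; the function names (local_update, federated_round) are illustrative only and do not correspond to ByzFL or any particular framework. Each device computes an update on its own data and only the updated model parameters are shared and averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a client's own data (linear regression)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)  # gradient of the mean squared error
    return weights - lr * grad                   # only the updated weights leave the device

def federated_round(weights, clients):
    """Average the locally updated weights instead of pooling the raw data."""
    updates = [local_update(weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Three 'devices', each holding its own private (X, y) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, clients)
```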
Untrusting Data
"Today's AI trained with federated learning gathers data from all over the world - the internet, other large databases, hospitals, smart devices and so on. These systems are very effective but at the same time there's a paradox. What makes them so effective also makes them very vulnerable to learning from 'bad' data," explains Professor Rachid Guerraoui, Head of the Distributed Computing Laboratory (DCL) in the School of Computer and Communication Sciences.
Data can be bad for many reasons: a lack of attention or human error may mean it is entered incorrectly into a database; there may be mistakes in the data to begin with; sensors or other instruments may be broken or malfunctioning; or incorrect and dangerous data may be recorded maliciously. Sometimes the data itself is good, but the machine hosting it has been hacked or is bogus. In any case, if such data is used to train AI, it makes the resulting systems less trustworthy and less safe.
"All this brings up one key question," says Guerraoui, "can we build trustworthy AI systems without trusting any individual source of data?" After a decade of theoretical work dedicated to addressing this challenge, the professor and his team say the answer is yes! A recent book summarizes their main findings.
Trusting Datasets
In collaboration with the French National Institute for Research in Digital Science and Technology (INRIA), they are now putting their ideas to work. They have developed ByzFL, a Python library designed to benchmark and improve the robustness of federated learning models against adversarial threats, in particular bad data.
"We believe that the majority of data is good but how do we know which datasets we can't trust?" asks Guerraoui. "Our ByzFL library tests whether a system is robust against priori unknown attacks and then makes that system more robust. More specifically, we give users software to emulate bad data for testing as well as including security filters to ensure robustness. The bad data is often distributed in a subtle way so that it's not immediately visible."
ByzFL doesn't try to locate and separate good data from bad data; instead, it uses robust aggregation schemes (e.g., the median) to ignore extreme inputs. For example, if three sensors record temperatures of 6, 7 and 9 degrees but a fourth records -20, that single reading can ruin an entire computation based on the average. ByzFL excludes the extremes so that the impact of the bad data is limited while the information is still aggregated.
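To make the temperature example concrete, here is a small illustrative sketch in Python/NumPy (not ByzFL's actual API) comparing plain averaging with a median-based aggregate: the single bad reading drags the mean far off, while the median stays close to the honest values. The same coordinate-wise trick can be applied to model updates in federated learning.

```python
import numpy as np

# Three honest sensor readings plus one bad one.
readings = np.array([6.0, 7.0, 9.0, -20.0])

print(np.mean(readings))    # 0.5  -> the single outlier ruins the average
print(np.median(readings))  # 6.5  -> the robust aggregate stays close to the honest readings

# The same idea applied to federated learning: aggregate model updates
# coordinate-wise, so one malicious client cannot drag the result arbitrarily far.
honest_updates = np.array([[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]])
malicious_update = np.array([[100.0, -100.0]])
updates = np.vstack([honest_updates, malicious_update])

print(np.mean(updates, axis=0))    # [25.75, -24.25] -> poisoned by the bad client
print(np.median(updates, axis=0))  # [1.05, 0.95]    -> close to the honest consensus
```

The coordinate-wise median shown here is just one of the robust aggregation schemes mentioned in the article; the point is that the aggregate degrades gracefully as long as most contributors are honest.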
Ensuring that next-generation AI works
Artificial intelligence is expected to touch every part of our lives in the not-too-distant future. Guerraoui argues that today most companies use very primitive forms of AI - for example, streaming platforms recommending movies or AI assistants helping to write text. If someone doesn't like the movie they're recommended, or an email isn't perfect, it's no big deal.
Looking ahead, for any application that is mission critical, such as diagnosing cancer, driving a car or controlling an aeroplane, safe AI is essential. "The day that we really put Generative AI in hospitals, cars or transport infrastructure I think we will see that safety is problematic because of bad data," Guerraoui says. "The biggest challenge right now is going from what I call an animal circus to the real world with something that we can trust. For critical applications, we are far from the point where we can stop worrying about safety. The goal of ByzFL is to help bridge this gap."
A role for Switzerland
The professor worries that it may take some big accidents for the public and policymakers to understand that the AI created to date shouldn't be used for medicine, transport or anything mission critical, and that the development of a new generation of safe and robust AI is essential.
"I think Switzerland can play a role here because we have a tradition of seriousness. We build things that work, we can use the guarantee of Swiss quality to demonstrate a certification system using this kind of software to show that AI really is safe without trusting any individual component," he concluded.
ByzFL was designed and developed by John Stephan, Geovani Rizk, Marc Gonzalez Vidal, Rafael Pinot and Rachid Guerraoui (all from EPFL), and François Taïani (from INRIA).
Mehdi El Mhamdi, Julian Steiner, Peva Blanchard, Nirupam Gupta, Rafael Pinot, Youssef Allouah, Abdellah El Mrini, John Stephan, Sadegh Farhadkhani, Geovani Rizk, Arsany Guiguis, Georgios Damaskinos, Sebastien Rouault, Richeek Patra, Mahsa Taziki, Hoang Le Nguyen and Alexandre Maurer are students and postdocs who have worked with Professor Guerraoui on the challenge of building trustworthy AI systems without trusted data.