
Encryption is a way to ensure that data sent from one device to another can't be read if intercepted. But what if the data pertains to criminal activity? Ana-Maria Cretu from EPFL's Security and Privacy Engineering Lab shares her expertise on safety versus privacy concerns.
In today's digital age, we all generate a digital trail, and end-to-end encrypted messaging apps like Signal and WhatsApp are among the go-to solutions for private communication. But governments and security agencies argue that strong encryption prevents them from detecting criminal activity such as the sharing of child sexual abuse material (CSAM), terrorism, and drug and human trafficking. They call for the use of client-side scanning technology, which would allow them to detect CSAM shared in end-to-end-encrypted communications, claiming that the scanning does not weaken encryption. Ana-Maria Cretu, a researcher at EPFL's Security and Privacy Engineering Lab specializing in the intersection of machine learning, privacy, and security, explained the concerns surrounding client-side scanning in a discussion organized by EPFL's Center for Digital Trust.
Broadly speaking, encryption is like putting a letter into an envelope: the letter can't be read en route from sender to receiver, and breaking encryption would be like tampering with the envelope in transit. In client-side scanning, the encryption stays intact, but privacy is broken, since scanning is akin to reading the letter while it's being written or after the envelope has been opened.
End-to-end encryption ensures that only the clients at each end of the communication, the sender and the receiver, can read a message. "More than two billion people around the world use end-to-end-encrypted platforms such as Signal and WhatsApp, exchanging more than 100 billion messages daily on WhatsApp alone," says Cretu.
Client-side scanning and image fingerprinting
Client-side scanning refers to scanning data on the client side of the communication, such as a photo you intend to send from, or have received on, your smartphone. For child sexual abuse material, client-side scanning would scan your photos and compare them against an established database of such material. Since it is illegal to possess these materials, cybersecurity experts have devised a way to characterize the photos in the database through a process called "fingerprinting". "The fingerprinting algorithm is designed such that it preserves the main characteristics of the image in order to perform efficient comparisons, while remaining resistant to simple image transformations such as image resizing, cropping or image conversion to a different format," explains Cretu. Comparison is performed on the fingerprints of the images rather than on the images themselves, as the sketch below illustrates. Since 2021, there has been a push to pass the UK Online Safety Act and EU "chat control" legislation that would give government agencies the power to mandate that companies detect illegal content in their communications.
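The fingerprints Cretu describes are perceptual, not cryptographic: similar images should produce similar fingerprints. As a rough illustration only (no deployed system works exactly this way), here is a minimal "average hash" sketch in Python using the Pillow library; the 8×8 grid and the 5-bit matching threshold are illustrative assumptions, not parameters of any real scanner.

```python
from PIL import Image

def fingerprint(img: Image.Image, size: int = 8) -> list[bool]:
    """Toy perceptual 'average hash': shrink to a size x size grayscale
    thumbnail and record which pixels are brighter than the mean. Mild
    edits such as resizing or re-encoding barely change the result, which
    is the robustness property Cretu describes."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    return [p > mean for p in pixels]

def distance(fp_a: list[bool], fp_b: list[bool]) -> int:
    """Hamming distance: the number of bits where two fingerprints differ."""
    return sum(a != b for a, b in zip(fp_a, fp_b))

def matches_database(img: Image.Image, database: list[list[bool]],
                     threshold: int = 5) -> bool:
    """A scanner would flag an image whose fingerprint lies within a small
    Hamming distance of a database entry (the threshold is illustrative)."""
    fp = fingerprint(img)
    return any(distance(fp, known) <= threshold for known in database)
```

Because matching happens on fingerprints, whoever performs the comparison never needs to possess the illegal images themselves, which is the point of fingerprinting.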
Privacy is breached with scanning
Cretu says that there are too many problems, both technical and ethical, with client-side scanning in its current state, and that it would be premature to implement it. Scanning would not only defeat the purpose of encryption; it would also put confidentiality at stake and introduce a threat of mass surveillance that could undermine modern democracy.
"Although child sexual abuse material is a major societal issue, effectively fighting against it requires measures other than technology," concludes Cretu. "A stronger and closer collaboration between researchers and policymakers could be envisaged to help towards defining effective solutions that are robust enough to be deployed safely and that still preserve private communications."
Client-side scanning may wrongly flag content
Detection systems have error rates, meaning legitimate images may be wrongly flagged as false positives. "These are images that are legal to possess and not instances of the targeted content, yet they end up being incorrectly flagged by the system as malicious," explains Cretu. "They could, for instance, include consensual intimate imagery, images of children at the beach, and images of a child shared by the parents with a pediatrician. When an image is flagged, it would be shared with authorities such as law enforcement agencies or the content provider for a manual review. This could lead to consequences for the users such as their account being blocked or suspended. In this situation, people would not want to risk their email account or instant messaging account being blocked. As a result, there will probably be a chilling effect. The important point here is that the privacy of millions of people would be invaded."
Detection can be evaded
Cretu points out that another problem with scanning is that it can be tricked, especially in the context of child sexual abuse material, where criminals will actively try to conceal their activities. She says, "[…] all the evidence that we have suggests that client-side-scanning solutions would not be effective in detecting child sexual abuse materials in the presence of adversaries. We show that the adversary can manipulate the image by applying a filter to it, which would enable them to evade detection."
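To make the filter attack concrete, here is a conceptual sketch that reuses fingerprint() and distance() from the earlier example. It searches for a global "filter", here a randomly generated tone curve chosen purely for simplicity, strong enough to push an image's fingerprint past the matching threshold. The attacks Cretu and her colleagues studied are far more sophisticated, using subtle, targeted perturbations that leave the image visually almost unchanged.

```python
import random
from PIL import Image

def evade_by_filter(img: Image.Image, known_fp: list[bool],
                    threshold: int = 5, seed: int = 0):
    """Conceptual evasion sketch: try random global filters until the
    image's fingerprint differs from the database entry by more than the
    matching threshold, so the scanner no longer flags it."""
    rng = random.Random(seed)
    gray = img.convert("L")
    for _ in range(100):
        # Image.point() accepts a 256-entry lookup table for 'L' images,
        # so this applies a random tone curve: a uniform, filter-like edit
        # that disturbs the brightness relationships the fingerprint encodes.
        curve = [max(0, min(255, v + rng.randint(-60, 60))) for v in range(256)]
        candidate = gray.point(curve)
        if distance(fingerprint(candidate), known_fp) > threshold:
            return candidate  # no longer matches this database entry
    return None  # the naive search failed; a targeted attack would not give up
```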
Free speech would be at stake
"The goal of client-side-scanning today is to detect edited copies of child sexual abuse materials. Tomorrow, new kinds of content could be added to the list of targeted content," says Cretu. "Hence, governments might want to expand the scope, for instance, to terrorist content and other kinds of criminal activity, to dissident speech or LGBTQ+ content, depending on where this is implemented. One way to expand the scope is to expand the database. But then, who controls it? The client-side-scanning system could be used to turn people's phones into tools of surveillance, and this is a bit scary."