A feature Google demonstrated yesterday at its I/O conference, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective chill through privacy and security experts, who warn that the feature represents the thin end of the wedge. They caution that once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.
The scam call detection feature, which the tech giant says would be built into a future version of its Android operating system (estimated to run on around three-quarters of the world’s smartphones), is powered by Gemini Nano, the smallest of its current generation of AI models, which is designed to run entirely on the device.
This is essentially client-side scanning: a nascent technology that has generated enormous controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) or even grooming activity on messaging platforms.
Apple abandoned a plan to implement client-side scanning for CSAM in 2021 after a major privacy backlash. However, policymakers have continued to put pressure on the tech industry to find ways to detect illegal activity taking place on their platforms. Any industry move to build on-device scanning infrastructure could therefore pave the way for all kinds of content scanning by default, whether government-led or driven by a particular commercial agenda.
Responding to Google’s call-scanning demo in a post on X, Meredith Whittaker, president of the US-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It paves the way for centralized, device-level client-side scanning.
“From detecting ‘scams’ it is a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated w[ith] tech worker whistleblowing.’”
Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to raise the alarm. “In the future, AI models will run inference on your text messages and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you will need to attach a zero-knowledge proof that the scan was performed. This will block open clients.”
Green suggested that this dystopian future of censorship by default is only a few years away from being technically possible. “We are some way from this technology being efficient enough to pull it off, but only a few years. A decade at most,” he suggested.
European privacy and security experts were also quick to object.
Reacting to Google’s demo in a post on X, Lukasz Olejnik, a Poland-based independent researcher and consultant on privacy and security issues, welcomed the company’s anti-scam feature but warned that the underlying infrastructure could be repurposed for social surveillance. “[T]his also means that technical capabilities have already been developed, or are being developed, to monitor calls and the creation or writing of texts or documents, for example for illegal, harmful, hateful, or otherwise undesirable or iniquitous content, with respect to someone’s standards,” he wrote.
“Going further, such a model could, for example, display a warning. Or block the ability to continue,” Olejnik continued with emphasis. “Or report it somewhere. Technological modulation of social behavior, or the like. This is a major threat to privacy, but also to a range of basic values and freedoms. The capabilities are already there.”
Further elaborating on his concerns, Olejnik told TechCrunch: “I haven’t seen the technical details, but Google says the detection will be done on-device. This is great for user privacy. However, there is much more at stake than privacy. This highlights how AI/LLMs embedded in software and operating systems may be used to detect or monitor various forms of human activity.
“So far it is, fortunately, for the better. But what lies ahead if the technical capability exists and is built in? Such powerful features signal possible future risks related to the ability to use AI to control the behavior of societies at scale or selectively. This is probably among the most dangerous information technology capabilities ever developed. And we are nearing that point. How do we govern this? Are we going too far?”
Michael Veale, associate professor of technology law at UCL, also raised the chilling specter of function creep flowing from Google’s conversation-scanning AI, warning in a reaction post on X that it “sets up infrastructure for on-device client-side scanning for more purposes than this, which regulators and legislators will desire to abuse.”
Privacy experts in Europe have a particular cause for concern: the European Union has had a controversial message-scanning legislative proposal on the table since 2022, which critics, including the bloc’s own Data Protection Supervisor, warn represents a tipping point for democratic rights in the region, as it would force platforms to scan private messages by default.
While the current legislative proposal claims to be technology agnostic, it is widely expected that such a law would lead to platforms deploying client-side scanning in order to respond to a so-called detection order requiring them to spot both known and unknown CSAM, and also to pick up grooming activity in real time.
Earlier this month, hundreds of privacy and security experts wrote an open letter warning that the plan could generate millions of false positives per day, as the client-side scanning technologies that platforms would likely deploy in response to a legal order are unproven, deeply flawed, and vulnerable to attack.
Google was contacted to respond to concerns that its conversation-scanning AI could erode people’s privacy, but had not responded at the time of publication.