EFFecting Digital Freedom
When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact
by Thorin Klosowski
Both Google and Apple keep cramming new AI features into their phones and other devices, and neither company offers clear ways to control which apps those AI systems can and cannot access. AI features can create a variety of privacy problems, but one of the most important things to get right is how those tools interact with secure messaging apps like Signal or WhatsApp. There's confusion about how "device-level" AI tools like Apple Intelligence and Google Gemini handle information: whether it stays local or is sent to a server, and what that information gets used for. This makes locking down your privacy far more difficult than it should be.
The current issues with secure messaging center on two privacy problems: composing messages with AI tools, and a recipient's copy of a message automatically ending up in AI tools without the sender realizing it.
Let's start with sending messages. As an example, Google Gemini lets you link Gemini and WhatsApp so you can compose a message in Gemini and then send it through WhatsApp. In this case, Google can usually see the content of the composed message. Depending on your settings, Google may use that content for further AI training, and the message may be saved to your account, making it potentially accessible to law enforcement on request.
Apple doesn't offer a similar WhatsApp integration, but its "Writing Tools" pop-up provides some of the same functionality, though it doesn't appear inside WhatsApp (or Signal, for that matter). Any text created with Apple Intelligence's writing tools in Apple Messages could go to Apple's "Private Cloud Compute" servers, where hardware protections prevent Apple from easily accessing the data. (Google announced a similar private compute cloud this fall, but it isn't yet clear which features will use it.)
When receiving messages, things get trickier. When you use an AI like Gemini or Apple Intelligence to summarize or read notifications, it's not always clear where the text of those notifications goes, how long it might be stored, or whether the company has the technical means to read it. Poor documentation and weak guardrails mean the privacy practices are rarely spelled out as clearly as we'd like. In Google's case, we found that if a user opts into a series of different features, including granting Gemini access to notifications through the Utilities app, that data is sent to Google and appears to be readable by the company regardless of whether the recipient ever sees the messages. Because the sender of a message has no say in any of this, it creates a privacy problem the sender can't prevent. In contrast, Apple claims its summarize feature happens entirely on-device.
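To make the receiver-side risk concrete, here is a minimal Android sketch of how notification access works. It's our illustration, not Google's code: the class name is invented, but NotificationListenerService and the notification extras are real Android APIs. Once a user grants an app (or an assistant) notification access, the operating system hands it the already-decrypted text of every notification, including messages from end-to-end encrypted apps.

```kotlin
import android.app.Notification
import android.service.notification.NotificationListenerService
import android.service.notification.StatusBarNotification
import android.util.Log

// Illustrative only: any app the user grants "Notification access"
// implements a listener like this and receives every notification's
// contents in plaintext, no matter how the message was encrypted in
// transit.
class NotificationReader : NotificationListenerService() {
    override fun onNotificationPosted(sbn: StatusBarNotification) {
        val extras = sbn.notification.extras
        // For messaging apps, these extras typically hold the sender's
        // name and the decrypted message body.
        val sender = extras.getCharSequence(Notification.EXTRA_TITLE)
        val body = extras.getCharSequence(Notification.EXTRA_TEXT)
        // Nothing in the API stops a listener from uploading this text
        // instead of summarizing it locally; that choice belongs to the
        // listener's developer, not to the message's sender.
        Log.d("NotificationReader", "${sbn.packageName}: $sender: $body")
    }
}
```

For a listener like this to run at all, it has to be declared in the app's manifest with the BIND_NOTIFICATION_LISTENER_SERVICE permission and the user has to enable it in system settings, which is exactly the kind of OS-enforced gate we'd like to see extended to AI features generally.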
New AI Features Must Come With Strong User Controls
The more AI features device-makers cram into their devices, the more necessary it is for us to have clear and simple controls over what personal data those features can access. If users do not have control over when text leaves a device for any sort of AI processing - whether that's to a "private" cloud or not - it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications. Some solutions we would like to see:
Per-app AI Permissions: Google, Apple, and other device-makers should add an operating system-enforced AI permission to their phones, just as they already do for other potentially invasive features like location sharing. You should be able to tell the operating system's AI not to access an app, even if that comes at the "cost" of missing out on some features. (A hypothetical sketch of what this could look like follows this list.)
Offer On-Device-Only Modes: Device-makers should offer an "on-device only" AI mode for people who want to use some features without having to figure out what happens on the device and what happens in the cloud. (The sketch below includes this as a global switch.)
Improve Documentation: Both Google and Apple should improve their documentation of how these features interact with various apps. Apple doesn't seem to explain how notification processing is handled anywhere outside of a press release, and we couldn't find anything from Google about the Utilities app's privacy practices at all.
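To show what the first two asks could look like in practice, here's a deliberately simplified sketch. Everything in it is hypothetical: no such permission or API exists today on Android or iOS, and all the names are invented. The point is the shape of the control: a per-app allow-list and a global on-device-only switch, both owned by the operating system rather than by the assistant.

```kotlin
// Entirely hypothetical: nothing below is a real Android or iOS API.
// It sketches controls enforced by the OS the way location or
// microphone permissions are enforced today.
object AiAccessPolicy {
    // Per-app grants, changeable only through a system settings screen.
    private val granted = mutableSetOf<String>()

    // Global "on-device only" switch: when true, the OS refuses to send
    // any AI input off the device, "private" cloud or not.
    var onDeviceOnly: Boolean = false

    fun setUserGrant(packageName: String, allow: Boolean) {
        if (allow) granted += packageName else granted -= packageName
    }

    // The OS consults this before handing app content (notifications,
    // screen text, shared files) to any AI feature. Default is deny.
    fun assistantMayRead(packageName: String): Boolean = packageName in granted

    // And consults this before any network hop for AI processing.
    fun mayUseCloud(): Boolean = !onDeviceOnly
}

// Example: the user lets the assistant see their email app but never
// Signal, and keeps all AI processing local either way.
fun main() {
    AiAccessPolicy.onDeviceOnly = true
    AiAccessPolicy.setUserGrant("com.example.mailapp", allow = true)
    check(AiAccessPolicy.assistantMayRead("com.example.mailapp"))
    check(!AiAccessPolicy.assistantMayRead("org.thoughtcrime.securesms"))
    check(!AiAccessPolicy.mayUseCloud())
}
```

The enforcement point is what matters here: if the operating system checks this policy before an AI feature ever sees app content or touches the network, the assistant can't quietly route around the user's choice.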
The current user options are not enough. These AI features clearly come with significant confusion about their privacy implications, and it's time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered both to users and to developers who want real, transparent safeguards over how a company accesses their private data and communications.