WhatsApp Introduces Private AI Processing, But Leaves Out Key Use Cases


WhatsApp has launched ‘Private Processing,’ a new feature that lets users access AI tools like message summarisation without exposing their chats to Meta. The system uses encrypted cloud infrastructure and hardware-based isolation to process requests without storing them, and the company says even it cannot see the messages while they are being processed.

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

WhatsApp Says Its AI Can Process Messages Without Accessing Them

Private Processing uses Trusted Execution Environments (TEEs) — secure virtual machines that run on cloud infrastructure to keep AI requests confidential. The system:

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Hides IP addresses by routing traffic through third-party relays
  • Blocks storage or logging of messages after processing
  • Publishes logs and binary images for external audit and verification
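The split of knowledge described above — the relay sees who is asking but not what, while the TEE sees what is asked but not who — can be sketched in a few lines. This is a purely illustrative toy, not Meta's actual protocol or code: the cipher is a stand-in for real end-to-end encryption, and in practice the session key would be negotiated via hardware attestation rather than pre-shared.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: a stand-in for real end-to-end encryption."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# Key known only to the device and the TEE (hypothetical; in a real
# system it would come from an attestation handshake).
session_key = secrets.token_bytes(32)

# 1. The device encrypts the request before it leaves the phone.
request = b"Summarise my unread messages"
ciphertext = xor_stream(session_key, request)

# 2. A third-party relay forwards the blob: it learns the sender's
#    network address but holds no key, so the content is opaque.
relay_view = {"sender_ip": "203.0.113.7", "payload": ciphertext}
assert relay_view["payload"] != request

# 3. The TEE decrypts and processes the request; it never learns the
#    sender's IP, because the relay stripped that before forwarding.
tee_view = xor_stream(session_key, relay_view["payload"])
assert tee_view == request
```

The point of the sketch is the separation of roles: no single party outside the device and the enclave ever holds both the identity of the sender and the plaintext of the request.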

WhatsApp says no one — not Meta, not WhatsApp staff, and not the cloud provider — can see the content of the messages being processed. The system also includes tools for researchers to independently verify its behaviour.

WhatsApp Aligns AI Architecture with Broader Industry Privacy Models

WhatsApp describes Private Processing as a technical response to privacy concerns around AI and messaging. The company joins other major platforms like Apple that have introduced confidential AI compute models in the past year.

It follows some of the same ideas as Apple’s Private Cloud Compute, such as stateless processing and public transparency. However, WhatsApp currently applies these principles to only a few features, whereas Apple has announced plans to extend its model across all its AI tools. WhatsApp has made no such commitment.

What Meta Hasn’t Answered

Despite detailing its privacy architecture, Meta hasn’t clarified key aspects of Private Processing:

  • Will it apply to all AI tools inside WhatsApp?
  • Will it roll out in India, WhatsApp’s largest market?
  • How will users know when Private Processing is active?
  • What happens when users switch between Private Processing and Meta’s other AI features?

Currently, users can chat with Meta AI, an assistant that doesn’t use Private Processing and may retain messages to improve its models. WhatsApp hasn’t clearly explained how this system differs from the new privacy-focused one, making it harder for users to know when their data is truly protected.

Meta Brings AI to Indian Governance Without Similar Safeguards

While Meta restricts its own access to chats through Private Processing, it continues to partner with Indian State governments to power WhatsApp-based citizen service bots and backend AI infrastructure without the same level of privacy safeguards.

In January, the Andhra Pradesh government launched Mana Mitra, a WhatsApp chatbot that now offers over 200 services through a single number. Meta helped design the interface, while its business partner managed the deployment. Maharashtra and Odisha plan to launch similar bots later this year.

Meta also provides its Llama AI model for backend operations in Andhra Pradesh and Maharashtra. These tools help government staff retrieve documents and answer administrative queries. Meta hasn’t indicated whether it applies any comparable protections, such as encrypted processing or data minimisation, in these deployments.

India Has No Rules for AI in Public Service Delivery

India currently lacks legal safeguards for how governments or vendors like Meta deploy AI in citizen-facing systems. The Draft Digital Personal Data Protection Rules, 2025, contain no provisions requiring audits, disclosures, or governance standards specific to AI-based systems used by private companies or government platforms.

As a result, state governments can integrate AI models into public infrastructure without informing users, seeking consent, or offering opt-out controls. Citizens interacting with these chatbots often don’t know whether AI handles their inputs or how their data is stored and used.


Why This Matters

Meta runs two different privacy setups. On WhatsApp, it says it won’t access AI-processed messages. But in its work with governments, it hasn’t applied the same protections, and it hasn’t clearly explained what privacy rules apply.

Private Processing shows that Meta can build AI systems that protect people’s data. But it hasn’t used that same approach in its public-sector projects, where people often have less choice and face greater risks.

By using Llama in government services without matching safeguards, Meta decides where privacy counts and where it doesn’t.

Calls for Comment

We asked the Vidhi Centre for Legal Policy and SFLC.in for their views on WhatsApp’s Private Processing system and Meta’s use of AI in government services.

To Vidhi, we asked if India’s data law covers privacy-by-design systems and whether platforms should follow rules on transparency, data limits, and user control, even for opt-in AI. We also asked if India needs separate rules for AI in private chats and public services, and how regulators should handle technical claims from platforms.

To SFLC.in, we asked how enforceable WhatsApp’s privacy claims are without audits, and whether Indian law protects users if the system fails. We also asked if India needs stronger consent and transparency rules for AI in messaging, and how Meta’s use of Llama in State-run bots fits into this picture.

We’ll update this story if they respond.
