EU ‘Chat Control’ Bill Faces Academic Criticism Over Privacy Risks

Key Points

  • EU ‘Chat Control’ bill revised to make scanning voluntary, but scope expanded to include text.
  • Academic experts warn the broader surveillance could infringe on privacy without clear child‑protection benefits.
  • Current AI detection tools are deemed insufficiently accurate, raising false‑positive concerns.
  • New age‑verification requirements for apps and messaging services could expose minors to data‑collection risks.
  • Critics argue that both mandatory and voluntary on‑device detection lack proven effectiveness and carry a high potential for abuse.

Chat Control: conceptual illustration of a cluster of CCTV cameras surveilling a mobile phone displaying messages, representing digital surveillance.

Background and Recent Changes

The European Union is revisiting its “Chat Control” proposal, which aims to combat the distribution of illegal sexual content online. After earlier attempts failed to secure a majority, lawmakers altered the legislation by making the previously mandatory on‑device scanning of content voluntary. This shift was initially presented as a compromise that could break a three‑year stalemate in negotiations.

Expanded Scope of Detection

Despite the move to voluntary scanning, the revised bill broadens the range of material that could be examined. Originally limited to URLs, pictures, and videos, the legislation now also targets text messages. Academics warn that this expansion “opens the door to surveil and examine a larger part of conversations” without demonstrable benefits for child protection.

Reliance on AI Technology

The proposal depends on artificial‑intelligence tools to identify prohibited content. Researchers caution that current AI systems lack the precision required for reliable detection, increasing the likelihood of false‑positive results. They argue that the technology is “far from being precise enough to undertake these tasks with guarantees for the necessary level of accuracy.”
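To make the false‑positive concern concrete, the sketch below works through a simple base‑rate calculation. The prevalence, sensitivity, and specificity figures are illustrative assumptions, not numbers from the bill or the researchers' letter; the point is only that when the targeted content is rare among scanned messages, even a detector with seemingly strong accuracy can produce far more false alarms than genuine detections.

```python
# Illustrative base-rate calculation with hypothetical numbers (not from the
# proposal or the open letter). It shows why rare targets make false positives
# dominate, even for a detector that looks accurate on paper.

def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Return the share of flagged messages that actually contain illegal content."""
    true_positives = prevalence * sensitivity          # illegal messages correctly flagged
    false_positives = (1 - prevalence) * (1 - specificity)  # innocent messages wrongly flagged
    return true_positives / (true_positives + false_positives)

# Assumed values for illustration only: 1 in 10,000 messages is illegal,
# the detector catches 95% of them, and wrongly flags 0.1% of innocent messages.
ppv = positive_predictive_value(prevalence=1e-4, sensitivity=0.95, specificity=0.999)
print(f"Share of flags that are correct: {ppv:.1%}")  # roughly 8.7%
```

Under these assumed figures, fewer than one in ten flagged messages would actually be illegal content; the rest would be innocent conversations surfaced for review, which is the kind of outcome the researchers warn about.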

Age‑Verification Measures

Another controversial element is the introduction of age‑verification mechanisms on app stores and encrypted messaging platforms such as WhatsApp. Experts contend that existing methods cannot verify age in a privacy‑preserving manner because they rely on biometric, behavioral, or contextual data, which could lead to increased data collection on minors. They also note that such measures could be easily bypassed using VPNs or services outside the EU.

Potential Risks and Criticisms

The academic coalition emphasizes several risks associated with the bill:

  • Expanded surveillance of private communications.
  • Inadequate AI accuracy leading to wrongful content flagging.
  • New privacy and security vulnerabilities from mandatory age verification.
  • Potential exclusion of users who cannot provide official documents for age proof.

They conclude that on‑device detection technologies, whether mandatory or voluntary, “cannot be considered a reasonable tool to mitigate risks” given the lack of proven benefit and the high potential for abuse.

Outlook

The open letter to the European Council underscores a growing unease among privacy experts about the direction of the legislation. While the shift to voluntary scanning may have eased political tensions, the academics argue that the core concerns—overreach, inaccuracy, and privacy intrusion—remain unresolved, casting doubt on the bill’s effectiveness in safeguarding children without infringing on fundamental digital rights.

Source: techradar.com