The European Union likes to think of itself as the world’s responsible adult when it comes to privacy. While others run around hoovering up personal data like loose change, the EU clears its throat, produces the GDPR, and reminds everyone—sternly—that private life is a fundamental right. All of this is true. And admirable.
Which makes it slightly awkward that, at the same time, the EU has been enthusiastically debating proposals that would require automated scanning of private communications—messages, photos, encrypted chats—to detect illegal material. This is often referred to, with touching understatement, as Chat Control.
The tension here is not a minor drafting problem. It is structural.
Why this matters (and not just to lawyers)
No one seriously disputes that protecting children, preventing terrorism, and improving public safety are legitimate goals. The problem is not the destination; it’s the route being proposed to get there.
The EU has built an entire legal framework on a few clear ideas: private communications are private by default; surveillance must be targeted and based on suspicion; mass, suspicionless monitoring is not allowed; and data collection should be minimal and purpose-limited.
Automated scanning of everyone’s private messages conflicts with these principles almost immediately, like a dog that has learned the rules of the house and then urinates on them anyway.
This conflict has not gone unnoticed. It has been pointed out, politely but firmly, by the European Data Protection Supervisor, the European Data Protection Board, national regulators, and a small army of constitutional lawyers who tend to know where the bodies are buried, legally speaking.
So how does the EU manage to prohibit mass surveillance while also proposing it?
The answer is not denial. It is reframing. First, exceptions by legislation.
Instead of openly breaking privacy law, the EU proposes a special law that overrides it “in this specific case.” Privacy remains protected, except where it isn’t. This is known in legal circles as lex specialis, and in ordinary life as “yes, but.”
This means that rights become negotiable.
Fundamental rights can be restricted if the restriction is lawful, necessary, and proportionate. This sounds sensible until you realise it turns rights into adjustable settings rather than firm boundaries. The question quietly shifts from “Is this allowed?” to “Can we justify it?” Once you are balancing rights, someone is always standing on the scale.
Then there is the outsourcing of surveillance.
Rather than the state scanning messages directly, private platforms are required to do it and send reports onward. Formally, this is not state surveillance. Functionally, it is hard to see the difference. The state gets the results without getting its hands dirty.
And finally, incrementalism. Temporary measures become voluntary schemes. Voluntary schemes harden into obligations. Each step seems modest. By the time anyone realises how far things have gone, the system is already built and humming quietly in the background.
The television series Person of Interest is often cited in these debates, not because it predicted evil AI overlords, but because it got something subtler right. The problem was never that the system was intelligent. It was that it was authorised.
The machine didn’t decide to watch everyone. Humans did. The danger came not from rogue autonomy, but from perfectly obedient systems embedded in institutions that prioritised security outcomes over civil constraints.
That is the real parallel here, and it does not require science fiction.
The EU is not secretly villainous. AI is not plotting anything. No one needs bad intentions for this to go wrong. All that is required is surveillance made legal by exception, privacy made conditional, and AI used to scale what humans could never do manually. Oversight then becomes ceremonial. Rights remain on paper. The system works exactly as designed.
Why I raised this at all
Most debates about AI focus on hypothetical future dangers while overlooking present, structural ones.
You do not need sentient machines to undermine rights.
You do not need rogue algorithms.
You only need aligned technology embedded in misaligned governance.
That is not dystopian speculation. It is an institutional reality.
And it deserves scrutiny, preferably before the exception becomes the rule.
