Invisible Censorship: How Democracies Shape Online Speech Without Saying They Do

26 March 2026  |  Internet Freedom, Digital Governance

I attended a seminar recently on internet freedom — censorship, online safety, access to news, and the relationship between technology, politics, and national identity. It covered serious ground: internet governance, government censorship, legal frameworks, firewalls and VPNs, and case studies spanning China, Iran, and the United States. It was sharp and well‑argued. But it left me with a question that wasn't quite asked — and I haven't been able to shake it since.

Visible Walls vs. Invisible Mazes

Every serious conversation about internet censorship follows the same geography. China's Great Firewall. Iran's network shutdowns. Saudi Arabia's content controls. The repression is real and documented. But we have built our entire vocabulary of censorship around the version designed to be visible. That may be leaving us blind to the version that isn't.

In democracies, no content is technically banned. No law is explicitly broken. The mechanism is subtler — and it operates on several fronts simultaneously.

Platform Compliance With Government Pressure

The first front is platform compliance with government requests. According to internal Meta data leaked by whistleblowers and reported by Drop Site News in April 2025, Meta complied with 94% of takedown requests issued by the Israeli government since October 7, 2023, removing more than 90,000 posts, each typically within about 30 seconds and without human review. Only 1.3% of these requests targeted Israeli users, making Israel an outlier: governments typically focus censorship on their own citizens. The requests overwhelmingly targeted users in Arab and Muslim‑majority countries criticising Israeli policy.

Meta disputes this characterisation, but the dispute itself is revealing: leaked internal data, a 94% compliance rate, and a corporate denial — with no independent regulator positioned to adjudicate between them.

Surveillance That Silences Speech Before It Exists

The second front operates without any content being removed at all. NSO Group’s Pegasus spyware has been used to facilitate human rights violations on a global scale, according to an investigation by more than 80 journalists from 17 media organisations. Researchers identified 50,000 phone numbers of potential surveillance targets, including activists and journalists. Pegasus was sold to at least 14 EU countries, including Hungary, Poland, and Spain.

This is not censorship in the traditional sense. When journalists know their phones can be compromised, sources go silent. Stories are never written. Nothing is removed — because nothing is published.

Regulation That Incentivises Over‑Removal

The third front is regulatory design. Germany’s NetzDG law incentivises platforms to overpolice speech: companies prefer removing borderline content to risking a fifty‑million‑euro fine. Empirical research examining deleted comments from major German platforms found that 87.5% to 99.7% of removed content was legally permissible. The pattern is clear: large financial penalties combined with vague legal definitions lead to over‑removal as the most rational response.

No explicit order is given. No law is technically broken. The speech disappears anyway.

The Subtle Erosion of Free Expression

The principle of regulating harmful content is not in question — platforms should be accountable. But when government takedown requests are executed in 30 seconds without human review, when journalists across 14 EU countries face potential phone compromise, and when studies repeatedly show that the overwhelming majority of removed content is legal, then the architecture of free expression is being quietly redirected, not protected.

China builds walls. The West builds mazes. The destination is the same — you just don't realise you never arrived.

A wall announces itself. A maze lets you keep walking, keep posting, keep feeling free — while quietly ensuring your voice never reaches the room it was meant for. The wall creates martyrs. The maze creates the illusion of freedom — a far more durable, and far more difficult, form of control.

The Question We Are Failing to Ask

So the question I left that seminar asking is this: are we designing our frameworks for internet freedom to catch what we actually want to catch? Or are we measuring censorship in ways calibrated to find it in Beijing and Riyadh, while overlooking its quieter operation in London, Washington, and Brussels?

The most effective cage is the one the occupant believes is not there. And if that is what is being built — in democracies, under the banner of safety — then the question is not whether censorship exists. It is whether we are willing to recognise it for what it is, wherever it appears.
