Regulating without understanding: why Europe prefers compliance to proof


The CNIL, an administrative tiger with foam teeth


The CNIL likes to present itself as the French bulwark against digital abuses. On paper, it's appealing: an independent authority, guardian of freedoms, capable of calling both the State and companies to order. In reality, it's an administrative tiger with foam teeth. Not for lack of individual goodwill, but for structural reasons. The CNIL was born in a world where computing was centralized, slow, and above all comprehensible to generalist lawyers. Contemporary digital power, by contrast, is diffuse, transnational, and deeply technical. The result: the CNIL reasons in terms of compliance where the problem is systemic. It checks whether boxes are ticked, whether legal notices are in place, whether forms comply with the GDPR. Meanwhile, the architectures themselves – massive collection, algorithmic inference, radical information asymmetries – remain intact. The CNIL sanctions after the fact, with fines that make headlines for two days before being absorbed as operating expenses. It dismantles nothing, it opens no hoods, it never publicly tears a system apart and says: "here is precisely how you surveil us, line of code by line of code." It has neither the culture, nor the means, nor, above all, the real political mandate.

Facing it, the EFF and the Chaos Computer Club play in another category. The EFF operates on the legal and political front: it litigates, defends, creates jurisprudence, and transforms digital freedoms into enforceable rights. It acts in the arena where the rules are written, knowing full well that the law is slow, but that when it decides, it decides durably. The CCC operates upstream: it proves. It shows that the emperor has no clothes, that the voting machine is broken, that biometrics is a dangerous farce, that proclaimed security is marketing fiction. Where the CNIL asks for guarantees, the CCC pulls out an oscilloscope, a memory dump, and a public demonstration. The difference is brutal: the CNIL moralizes; the CCC falsifies official discourse through proof. And an institution that can neither audit deeply nor make visible the mechanisms of technical domination is condemned to polite impotence.

Bug bounties fit exactly into this same logic of impotence disguised as modernity. On paper, it's brilliant: companies pay those who find flaws. In practice, it's a cynical privatization of technical criticism. You are authorized to search, but only where you are told to search, only in what has been chosen to be made visible, and above all provided you keep your mouth shut once paid. The bug bounty is not a security tool; it's a tool of narrative control. It turns the researcher into a silent service provider and the flaw into an isolated incident, and it forecloses any public debate about architectural choices. Finding a critical vulnerability in a surveillance or payment system never opens the political question of whether that system should exist in that form at all. The flaw is patched, a few thousand euros change hands, and everything continues as before. The CCC, by contrast, considers that certain flaws are not bugs but symptoms. And a symptom is shown, discussed, politicized. That is why it is hated by legal departments and adored by those who still believe technology is not neutral.

As for the European Union, it watches all this with a mix of fascination and fear. It admires the CCC for its ability to anticipate drift, to understand before everyone else what is going to go wrong. But it is incapable of embracing its method. The EU prefers to legislate after the fact, producing massive, verbose regulations, often intelligent in intention, but built on an abstraction of technical reality. The DMA, the DSA, the AI Act: necessary texts, sometimes courageous, but always one war behind. Europe tries to imitate the CCC through norms, without ever accepting the central idea: that we need actors capable of publicly breaking technological toys so that politics can regain control. But breaking a toy means accepting conflict, diplomatic embarrassment, industry anger, and sometimes even the temporary ridicule of one's own institutions. The EU wants virtue without sweat, regulation without confrontation, sovereignty without the courage to watch its own systems collapse under a technical demonstration.

Fundamentally, this is where everything plays out. The CNIL reassures, bug bounties anesthetize, the EU regulates. The CCC disturbs. And as long as European institutions prefer procedural comfort to the test of proof, they will keep chasing technologies they only understand once it is too late. This is not a problem of individual competence; it is a problem of political backbone. And that, no regulation will correct.