Cory Doctorow: Hard (Sovereignty) Cases Make Bad (Internet) Law


Let’s start with two obvious facts:

  1. The internet is a communications medium, that
  2. crosses international borders.

That means that every single policy question related to the internet will have:

  1. A free expression dimension, and
  2. A national sovereignty dimension.

With that out of the way….

Late last August, Pavel Durov – the billionaire owner of the Telegram app – was arrested by French authorities after he landed his private jet at a small airport on the outskirts of Paris. Durov – a French citizen – was taken into custody over complicated issues related to Telegram’s failure to comply with rules requiring companies doing business in France to nominate official, in-country representatives who would receive and act upon lawful orders, such as orders to delete unlawful content, or to ban or disclose the identity of users who posted unlawful materials or used Telegram in furtherance of a crime.

Just days later, a judge in Brazil ordered the country’s internet service providers to block Twitter; as with the Telegram affair, the immediate issue behind the block was Twitter’s failure to nominate an official representative within Brazil, and to have that official receive and execute lawful orders related to Twitter’s users and content.

Both of these issues clearly relate to national sovereignty. Countries routinely require firms that do business within their borders to name official business agents who represent the company to the government. You can’t operate a bowling alley or a mail-order vitamin business in a US state without telling the regional government whom to talk to if they have questions about your operation. Why should these gigantic companies, owned by cartoonishly evil billionaires, do any less?

The sovereignty question is only heightened by statements made by the billionaire owners of these businesses that made it clear that they viewed the democratically accountable governments of Brazil and France as little more than a joke. Elon Musk (the owner of Twitter, which he rebranded as “X”) went so far as to use his bully pulpit to harass and threaten the judge who handed down the order, as well as various high-ranking Brazilian officials. Musk has a well-deserved reputation for arrogance and lawlessness, which made it easy to cheer on the Brazilian authorities in the battle. It was especially delicious to watch Musk as he realized that the concept of “limited liability” under Brazilian corporate law is very different from its counterpart in US statutes, such that Brazilian courts could lawfully extract cash penalties from other Musk-controlled businesses, such as Starlink.

It’s a truism that “hard cases make bad law.” It’s tempting to cheer on the downfall of disagreeable, flawed defendants, regardless of the underlying principle or the precedent being set. The clownish billionaires here are poster children for this effect. Take Telegram’s Durov: He boasts of fathering over 100 children, putting the efforts of self-avowed “natalist” Musk – who has fathered (at least) 12 children – in the shade. Musk may trail Durov by 88 children, but he makes up for it with public displays of racism and unhinged conspiratorialism. He’s about what Howard Hughes would have been like – if he’d been an extrovert instead of a urine-hoarding hermit.

The sovereignty of a nation turns on its ability to control its borders and the things that happen within them. Sovereignty allows countries to establish the rule of law, and to use it to protect their people from the capricious and unjust exercise of authority. A sovereign nation can establish a human rights regime that requires public officials to act in the public interest, to subject themselves to scrutiny, and to be publicly accountable for their actions.

A sovereign nation can do these things, but it needn’t. If you care about human rights, then you should think of sovereignty as a means to an end. When countries use their sovereign authority to protect human rights, that’s a good reason to defend that country’s sovereignty. When sovereignty is harnessed to human rights abuses, the rest of us can – and should – work to undermine the sovereign power of that state and defend its people’s rights. The fact that the Saudis have the sovereign power to discriminate against Saudi women doesn’t militate against other countries offering asylum to women’s rights advocates who are targeted by the Saudi state, and when we undermine Saudi sovereignty by sharing the stories of these advocates in the teeth of Saudi bans on doing so, that’s just fine.

Eleven years ago, Edward Snowden leaked a massive trove of documents detailing the NSA’s policy of illegal, indiscriminate, global mass surveillance. The NSA’s mass surveillance depended on the fact that most of the world’s largest tech companies were founded in the USA, and they transported all their users’ data to US servers for processing and storage. These companies routinely colluded with secret, illegal orders from the NSA to grant the agency access to their customers’ records. The NSA also had an illegal program of spying on this corporate data without the companies’ knowledge or consent (often, when the NSA asked a company for data, they were seeking data they’d already obtained through the nonconsensual program, known as “upstream”; the NSA was seeking to keep “upstream” secret from tech executives by creating a plausible story about how they learned the things they knew – in surveillance circles, this is called “parallel construction”).

In the wake of the Snowden revelations, countries around the world passed “data localization” laws banning tech companies from moving data about people within their borders to the USA, explicitly to block the NSA from accessing that data. This is clearly a matter of national sovereignty. These countries had laws governing the processing of their residents’ data, and the US government was shamelessly flouting those rules for virtually every person within every country’s sovereign borders.

So naturally the EU’s then-28 member states banned US tech companies from moving EU residents’ data to the USA. So did Russia. In both cases, these countries were defending their sovereignty.

But that’s not the whole story. Data localization also allows sovereign nations to access and use the data stored within their borders, which has serious implications for free expression. When your data is accessed in accord with the rule of law and a human rights framework, the free expression dimensions of this action are well balanced against the sovereignty question.

Of course, you may differ with this assessment when it’s your data, but when the data is accessed without a rule of law and human rights framework, the free expression question is far more important (even when it’s not your data at stake).

As far as we know, the EU’s data localization laws were not widely abused to access the data of people in the EU (or elsewhere) without due process and respect for human rights. Unsurprisingly, this is not true of Russia’s data localization laws. In Russia, the legal requirement for tech companies to store user data about Russians within Russia was widely exploited by the Russian state to target its political enemies, whose data was accessed as a prelude to identifying dissidents who were then subject to harassment, arrest and torture, as well as likely extrajudicial killings.

In other words: both the EU and the Russian data localization rules were expressions of national sovereignty, both rules implicated free expression, and, in the case of Russia, the free expression issues outweighed the national sovereignty issues.

Today, the tension between sovereignty and human rights is higher than it’s ever been. As I write this, the United Nations is winding up negotiations on a “Cybercrime Treaty” that embodies these tensions. The Cybercrime Treaty binds the nations that sign it to assist one another in fighting internet crimes that cross borders, a goal that – in the age of ransomware and other transborder, internet-enabled crimes – sounds like a good one.

But the Cybercrime Treaty isn’t just about stopping ransomware attacks and other serious crimes. Arguably, the treaty isn’t even primarily about this. The treaty fails to define “cybercrime,” leaving it to each sovereign signatory to define this for themselves. In autocracies, “cybercrime” boils down to “doing something the government doesn’t like…with a computer.” Criticizing the ruling party on a message board can be “cybercrime.” Likewise, using an online dating service to discover a same-sex romantic partner can be a “cybercrime” in a country where homosexuality is banned.

It gets worse: The Cybercrime Treaty doesn’t just require each sovereign signatory to help all the other signatories fight “cybercrime,” however it is defined – it also contains provisions for keeping this help a secret. That means that – say – the Russian government could demand that – say – the French government order a company within its borders to provide all the messages exchanged by Russian dissidents, and keep the fact that they did this a secret, forever.

The internet is a communications medium that crosses borders, so the Cybercrime Treaty was bound to implicate both sovereignty and free expression. But it didn’t have to get it this wrong.

(I hear that the US Trade Representative might have had a change of heart on this one, so by the time you read this, the situation could have improved – or not.)

All internet policy questions have implications for free expression, but when free expression touches a computer, there’s the potential for a completely different set of complexities to enter the picture, thanks to encryption.

Encryption really works. Extremely well. In the instant between your pressing the button to take a picture with your phone and that picture being saved to your phone’s memory, that picture is scrambled so thoroughly that if every hydrogen atom in the universe were converted to a computer that was tasked with doing nothing but guessing the key needed to unscramble the photo, we would run out of universe before we ran out of keys. A world with computer-assisted encryption has a capability that was never seen before in the whole history of human civilization: the ability for groups and individuals to create and share secrets that are technically unbreakable.
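The scale here is worth a back-of-the-envelope check. Here’s a minimal sketch (my own illustration, not from the original; it assumes a 256-bit key, the size used by modern ciphers like AES-256, and grants the attacker an implausibly powerful setup):

```python
# Rough arithmetic: how long would a brute-force search of a 256-bit
# keyspace take, even with absurdly generous attacker assumptions?

keyspace = 2 ** 256          # number of possible 256-bit keys (~1.2e77)
machines = 10 ** 9           # a billion machines working in parallel
guesses_per_sec = 10 ** 12   # each making a trillion guesses per second

seconds = keyspace / (machines * guesses_per_sec)
years = seconds / (365.25 * 24 * 3600)
universe_age_years = 1.38e10  # roughly 13.8 billion years

print(f"years to exhaust keyspace: {years:.2e}")
print(f"multiples of the universe's current age: {years / universe_age_years:.2e}")
```

Even with those assumptions, exhausting the keyspace takes on the order of 10^38 times the current age of the universe – which is the sense in which you “run out of universe before you run out of keys.”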

This has obvious implications for free expression: For one thing, it makes the business of government interception a lot less threatening to free expression. Government interception of your messages in transit, or harvesting after they’ve been received, is a lot less salient if all the government gets for its trouble is unscramblable junk, indistinguishable from random noise.

Governments really don’t like this fact. For most of the second half of the 20th Century, the US government regulated working encryption as a “munition” and denied the public access to powerful scrambling systems. In retrospect, it might seem weird that the US had a law banning the publication of certain kinds of mathematics, but that was truly the case until 1996, when the Electronic Frontier Foundation represented a UC Berkeley grad student named Daniel J. Bernstein and convinced a judge that he had a First Amendment right to publish working encryption code, because “code is speech” (disclosure: I am proud to have worked for EFF for over 20 years now).

EFF’s “code is speech” victory over the NSA and US government didn’t end the “crypto wars.” All over the world and ever since, governments have tried to ban working encryption – even as encryption became more central to the safety and security of internet users (the same encryption that secures your chat messages also secures the firmware updates delivered to your pacemaker as well as your access to your bank account, and your bank’s access to the Federal Reserve).

Time and again, cryptographers have been called upon to explain to policymakers why banning working encryption is a bad idea. Over and over, they’ve insisted that there’s no way to build an encryption scheme that works when criminals try to break it, but fails when governments go after it.

In 2018, Australia passed a law that empowered its government to order tech companies to add “back doors” to their encryption for national security and public safety purposes. When cryptographers tried to explain to then-Prime Minister Malcolm Turnbull that encryption that only works when bad guys attack it violated the laws of mathematics, Turnbull responded that “the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.”

The law passed, but had not been invoked – until now. In early September, Mike Burgess – head of the Australian Security Intelligence Organisation – signaled that he was about to issue his first back-door order.

He’s not alone. Since 2021, the EU has been pursuing a “Chat Control” initiative that would ban working privacy technology in order to facilitate the fight against child sex abuse material (CSAM, more commonly known as “child pornography”). Warnings from cryptographers and security experts that Chat Control will weaken all our security against “bad guys” have thus far failed to move EU policymakers.

France has a long history of trying to ban working encryption; a bill to do this failed narrowly in 2016, and the proposal was reintroduced last year.

Let’s go back to Pavel Durov, the billionaire with 100 children and total control over Telegram, a communications platform with over 900 million users. Telegram is widely described as an “encrypted messaging app,” and indeed, buried deep in Telegram’s settings is a way to turn on encryption for person-to-person chats. Telegram uses its own, idiosyncratic encryption scheme, whose integrity is questioned by the world’s leading cryptographers. “Rolling your own” encryption is considered a cardinal sin in security circles, since the established systems have been so widely scrutinized and had their bugs and defects eliminated. As the cryptographer Bruce Schneier says, “Anyone can design a security system that works so well that they themself can’t think of a way to break it.”

The fact that Telegram’s encryption is clunky and of dubious efficacy is dwarfed by the fact that it works only for one-on-one communications, and the majority of Telegram usage is in group chats, some of them gigantic, as in millions of users. These chats can’t be encrypted, not even with Telegram’s crummy, weird, homebrew encryption.

Question: Did French authorities arrest Durov in a bid to get a backdoor for his encryption?

(Answer: …maybe?)

Like Brazil, France has a legal system with some features that differ sharply from the US system you may be familiar with (that’s sovereignty in action). In particular, France’s legal system is descended from Napoleonic Code, which is why, when the French cops arrest you, they don’t really have to explain why. Part of why the French authorities arrested Durov was his manifest failure to adhere to French law, both by failing to establish a representative in France, and by failing to honor lawful notices related to deleting unlawful content and turning over the identities and private messages of users on presentation of a warrant.

We know that this was part of the French government’s beef with Durov, because they let him go once he promised to do it.

But all those hundreds of millions of users who use Telegram’s massive group-chats? It would be very weird if those users didn’t go on to carry out private discussions with one another. Some of those users are bound to turn on Telegram’s encryption. And some of those users are doubtless of interest to the French authorities.

The same authorities that – remember – tried to ban working cryptography repeatedly, including last year, and who are also key advocates for the EU’s Chat Control policy. It would be very, very weird if these authorities weren’t also seeking access to Telegram messages that are encrypted.

What’s the right way to talk about this? Well, we can say:

  • Durov is a creep; and
  • Telegram should have a mechanism to comply with lawful takedown orders; and
  • those orders should respect human rights and the rule of law; and
  • Telegram should not backdoor its encryption, even if
  • the sovereign French state orders it to do so.
  • Sovereignty, sure, but human rights even moreso.

Here’s the thing about bans on working encryption: They’re a lot of work. To ban working encryption, a country needs a national firewall, a blocklist that every ISP in the country agrees to enforce. But that’s just for starters. As anyone who likes to watch Britcoms or Korean soap operas before their US release windows knows, you can beat this kind of block with a cheap or free VPN. These VPNs are classic “dual use” technology – many companies insist that their employees use a VPN to access the company systems to defend themselves from snooping.

Banning VPNs is a lot harder than simply blocking a website. Now you’re talking about somehow controlling which software we install on our devices. A government might start here by ordering the two big mobile app stores, operated by Google and Apple, to block VPN installation for mobile devices. But people run VPNs on their PCs, too, so now you have to block every website that distributes VPN programs, or code that can be compiled into programs, which means you have to somehow run an advanced society while blocking every software repository, including GitHub and its competitors.

Even if you manage to block this wide swathe of widely used, business-critical websites, you’ve got the problem that millions of people already have VPNs installed, so now you’re stuck searching everyone’s laptops or somehow installing “trusted computing” software on all those PCs to regulate which code they can run. This is real old-lady-who-swallowed-the-fly territory, and it’s a major national undertaking. A few countries have managed it – China and North Korea – while other countries, like Iran, have failed, despite extensive efforts.

Meanwhile, the collateral damage to human rights from this kind of ban is gigantic.

When Brazil banned Twitter, they were undoubtedly defending their sovereignty from an arrogant jerk who had all but begged them to do so. Brazil’s current government has a pretty good rule-of-law/human rights track record, and their demand on Twitter amounted to, “Obey lawful court orders.”

But even if you agree that these orders were purely a matter of national sovereignty, the resulting block on Twitter has gigantic free expression issues. The obvious free expression dimension here is that there are millions of Brazilians who use Twitter for purposes unrelated to the unlawful communications that Musk refused to block. These are the dolphins in the blocking order’s tuna net.

But that’s just the first-order effects of the block. In addition to ordering Brazilian ISPs to block Twitter, the court threatened civil prosecutions for users who circumvented the block with VPNs. The court also ordered Google and Apple to remove VPNs from their app stores.

Before things could continue to snowball, the court backed off of this order, but Brazilian residents still face penalties if they use a VPN to access Twitter, even if they do so to carry on discussions and access information that is not banned in Brazil.

Again, it’s possible to make sense of all this:

  • Musk is a creep, and
  • Twitter should have a mechanism to comply with lawful takedown orders; and
  • those orders should respect human rights and the rule of law; and
  • banning Twitter is bad for the free speech rights of Twitter users in Brazil; and
  • banning VPNs is bad for all Brazilian internet users; and
  • it’s hard to see how a Twitter ban will be effective without bans on VPNs.

Again, we can recognize the legitimate exercise of sovereignty without using that as a pretense to ignore when sovereign power is used to undermine free expression, especially when that use is likely to kick off a cascade of ever-more-extreme measures that are progressively worse for free expression.

Sovereignty, sure, but human rights even moreso.


Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.


All opinions expressed by commentators are solely their own and do not reflect the opinions of Locus.

This article and more like it in the November 2024 issue of Locus.


©Locus Magazine. Copyrighted material may not be republished without permission of LSFF.
