Once, the mainstream view was that worrying about tech policy was faintly ridiculous, a kind of masturbatory science fictional exercise in which your hyperactive imagination led you to have vivid delusions about the supposed significance of the rules we laid down for the internet and the computers we connect to it.
Weirdly, worrying about this stuff made you a “techno-utopian,” though it’s a strange type of utopian who spends a lot of time fretting that discriminatory network routing, proprietary programs/file formats, and law enforcement backdoors will usher in an era of networked totalitarianism. Cyber-policy wonks may not have consensus on what a successful technology future looks like, but since the earliest days they’ve shared a broad vision of what failure looks like: tech-supercharged autocracy, in which corrupt, spying states deputize massive monopolistic corporations to micro-manage the whole world’s population, at scale.
Which brings me to the techlash: the post-Brexit, post-Trump, post-Equifax turning point when a lot of people suddenly started to pay attention to the rules we set for technology users, companies, and practitioners.
I’m genuinely delighted that this moment has arrived. Tech policy is like climate change: every year we fail to fix it is a year that we accumulate more bad tech debt (insecure systems full of sensitive data and attached to machines, sensors and actuators that can harm or kill us). We are in a race between the point of no return, when it’s too late to fix things, and the point of “peak indifference,” when the number of people who care starts to rise of its own accord, thanks to the gaudy disasters detonating all around us.
But it’s not enough to do something: we have to do something good. And we’re getting it really wrong.
Here’s the most utopian thing I believe about technology: the more you learn about how to control the technology in your life, the better it will serve you and the harder it will be for others to turn it against you. The most democratic future is one in which every person has ready access to the training and tools they need to reconfigure all the technology in their lives to suit their needs. Even if you personally never avail yourself of that training or those tools, you will still benefit from this system, because you will be able to choose among millions of other people who do take the time to gain the expertise necessary to customize your world to suit you.
This belief has underpinned the entire technology liberation movement since its inception: it’s the belief that animates the free/open-source software movement, the open web/Net Neutrality movement, code camps and STEM education, accessibility, Wikipedia, and the decentralization movement. The belief is that letting more people participate in shaping the technology around them, and, in particular, letting people decide how they, personally, will use technology, is the optimal state. People might make bad choices, but so might companies – and if companies are making good choices, then people will leave things set at the corporate default.
And that’s where we’re getting it wrong, because the people who ignored decades of dire warnings about interoperability, competition, decentralization, monopolism, lock-in, and the host of evils that move power out of the hands of users and into the hands of powerful corporations and governments are now treating monopolism as an immutable, if unfortunate, fact of life.
We “techno-utopians” once dreamed of a world where you got to be the boss of the devices and code that you interacted with – and railed against the strictures of control-freak operating systems, devices, and networks, to say nothing of the governments who turned a blind eye to their bad conduct (or worse, cheered them on).
The reinforcements who’ve shown up worried about Cambridge Analytica, the destruction of the newspaper business, and online human trafficking don’t care about monopolism, though.
They don’t see that it’s the reach of Facebook that made Cambridge Analytica possible; that it’s the size of Facebook and Google that has let them corner the market for online advertising (and its handmaiden, corporate surveillance); that it’s the riches of Apple and Twitter that let them steer tax policy and liability rules in their favor.
No. The priority today is making Big Tech behave itself. Laws like SESTA/FOSTA (the 2018 US law notionally concerned with fighting sex trafficking) and the proposed new EU Copyright Directive (which would require anyone providing a forum for public discourse to invest tens of millions of dollars in copyright filters that would block anything potentially infringing from being published) are the kinds of rules that Big Tech’s winners can comply with, but their nascent future competitors can’t possibly follow. Google and Twitter and Facebook can find the hundreds of millions necessary to comply with EU copyright rules, but the companies that might someday knock them off their perch don’t have that kind of money.
The priorities of the techlash are handing Big Tech’s giants a potentially eternal monopoly on their turf. That’s not so surprising: a monopolist’s first preference is for no regulation, and its close second preference is for lots of regulation, especially the kind of regulation that no competitor could possibly comply with. Today, Big Tech spends hundreds of millions of dollars buying and crushing potential competitors: better for them to spend the money on regulatory compliance and have the state do the work of snuffing out those future rivals in their cradles.
The vision of the “techno-utopia” is a democracy: a world where anyone who wants to can participate in shaping the future, retooling, reconfiguring, and remapping the systems around them to suit their needs.
The vision of the techlash is a constitutional monarchy. We start by recognizing the divine right of Google, Amazon, Facebook, Apple, and the other giants to rule our technology, then we gather the aristocracy – technocrats from government regulatory bodies – to place modest limits on the power of the eternal monarchs.
Constitutional monarchies are bullshit. Big Tech does not have the divine right to rule. They haven’t conquered the world by being better than everyone else: they’ve conquered it by being rapacious beyond measure, while the regulators charged with taming them turned a blind eye as they used anticompetitive sleaze – predatory pricing, predatory acquisitions, tax dodging, liability shifting, and more – to grow to unimaginable size.
When the techlash succeeds in wringing an expensive concession from Big Tech, there is celebration: the giant has been tamed! But if you think Amazon and co. are ungovernable in 2018, imagine what they’ll be like in 2028, after a decade of punishing rules that clear the field of any pretenders to their thrones, when their coffers are orders of magnitude larger and their integration with our lives is total. Once Big Tech is too big to fail, it will also be too big to regulate (for one thing, everyone capable of understanding how Big Tech companies work well enough to regulate them will already be working for a Big Tech giant).
Constitutional monarchy is the enemy of democracy. Take Facebook: the company has assembled dossiers on billions of people, which have been used for genocide, stalking, fraud, pogroms, harassment, identity theft, and a host of other evils. In an ideal world, we’d make rules about what data Facebook could gather and what they could do with it – but to make those rules meaningful, we’d also have to safeguard the right of individuals and groups to probe Facebook to see whether it is complying with them.
Here’s a real-world example. Cambridge Analytica exploited a sloppy Facebook policy that let them access the dossiers of anyone listed as a “friend” of any of the users of apps it tricked people into using. Facebook has claimed that it has changed its rules to prevent this from happening again, but hardly a week goes by without another revelation about an exception that Facebook made for a “key partner,” including state-affiliated Russian internet services and state-affiliated Chinese phone manufacturers.
While a rule forcing Facebook not to share this information is important, it’s not enough. We also need to be able to probe Facebook (to test whether our data is being leaked to third parties), to trick Facebook (feeding it false data so we are misclassified and can’t be easily targeted), and to block Facebook (so it can’t collect data on us without our consent).
The problem is that Facebook’s Terms of Service ban all this stuff, and Facebook has sued competitors, like Power Ventures, for providing tools that customize your Facebook experience and let you control how closely Facebook gets to watch you. And what’s worse, a 1986 US law called the Computer Fraud and Abuse Act gives these Terms of Service the power of law, with the potential of criminal prosecution for people who breach them.
In a lawsuit called Sandvig v. Sessions, a coalition of news and research organizations, represented by the ACLU, is suing the US government for the right to violate Facebook’s Terms of Service in order to investigate and document its conduct – to find out whether Facebook is lying (again) when it claims to have reformed its wicked ways.
Late in March of 2018, the judge in Sandvig gave the case leave to proceed. It was a moment that starkly illuminated the difference between techno-utopianism and the techlash. The techno-utopians cheered the judge’s decision: at last, a chance that we might once again be able to independently audit and validate the systems we rely on!
But the techlash saw danger: A judge has given the next Cambridge Analytica the green light to violate Facebook’s Terms of Service with impunity.
This isn’t an entirely stupid response. It’s possible to imagine an outcome to this case that lets the ACLU’s clients proceed with violating Facebook’s Terms of Service in order to verify its conduct without greenlighting future Cambridge Analyticas (“It is illegal to violate Terms of Service for an illegal purpose; but if you commit no offense apart from breaking the Terms, you’re in the clear”).
But if your vision of a better future is one in which Facebook rules as a constitutional monarch and your government’s representatives tell it how to conduct itself, independent validation starts to look like vigilantism.
The thing is, everyone wants to limit the harms from Facebook’s bad conduct, but the techlash version of this is to set technical rules that Facebook will inevitably outmaneuver by finding loopholes, growing larger, and then buying their way out.
The techno-utopian version says that our states should make rules that cut Facebook down to size: force it to divest, limit its acquisitions, tax its worldwide income, and, just as importantly, force it to allow third parties to erode its dominance: by giving users the ability to extract their data and move it to rivals; by creating APIs (standard software interfaces) that let upstarts offer users a way to stay in contact with Facebook friends while boycotting Facebook itself; and by letting individuals, researchers, collectives, and companies violate Facebook’s Terms of Service in order to attain “adversarial interoperability” – plugging their services into Facebook without Facebook’s permission.
Big Tech’s evils are, after all, more about “big” than “tech.” A sociopath running a mom-and-pop shop is an unpleasant fellow who makes life miserable for his customers and employees alike, but no one forces you to work for him or buy his wares – there’s another mom-and-pop across the street that’s much nicer. But a sociopath who runs a company town, who dominates a whole sector, is a different story. They’re the only game in town.
It is profoundly undemocratic that a small cabal of nerds working for tech giants gets to make decisions that adversely affect the lives of billions. It’s even more undemocratic to ban anyone from altering that code to protect themselves from its harms. Constitutional monarchies are bullshit. The democratic alternative is to give people control over their technological lives – to seize the means of computation and put it into the hands of everyone who wants it.
Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.
This article and more like it appeared in the September 2018 issue of Locus.