Cory Doctorow: Let’s Get Better at Demanding Better from Tech

Cory Doctorow

At long last, the techlash has arrived, and not a minute too soon. I have been involved in the tech industry since I got my first programming job in 1988. I’ve been a sysadmin, a CIO, a trainer, a software company founder, and an activist. I’ve argued against terrible laws and argued for good ones. I’ve dreamed of the promise of tech and been haunted by its peril. Depending on who you ask, I was either a starry-eyed idealist, willing to give the clay-footed giants of technology a pass on their worst excesses, or I was a naysaying sourpuss, forever letting the perfect be the enemy of the good when it came to the minor bugs in the otherwise wonderful technological future we were living in.

This is a false binary: you don’t have to be “pro-tech” or “anti-tech.” Indeed, it’s hard to imagine how someone could realistically be said to be “anti-tech” – your future is going to have more technology in it, so the question isn’t, “Should we use technology?” but rather, “Which technology should we use?”

That’s the discussion I’ve been waiting for us all to have for decades now: which tech should we use? And science fiction has a signature move that will help us to make this a productive discourse: the ability to separate a technology from its social and economic context.

That’s a job science fiction writers need to do, because it’s one that technologists generally refuse to undertake. To hear Facebook tell it, staying in touch with your friends is impossible unless you give in to continuous, covert surveillance of everything you do online. Ask Apple and they’ll tell you that having a functional phone is inseparable from letting a distant, multibillion-dollar corporation decide who can repair it and whose software you’re allowed to use. Ask Google and they’ll tell you that providing a critical search-interface to the web can’t be done without (again) spying on everything you do.

The problem with the anti-tech/pro-tech false dichotomy is that it insists that the anti-tech side argue that you don’t like talking with your friends, or using your phone, or searching the web (or at least, that you could live without these things). It leaves the pro-tech side arguing that the harms from not being able to audit your mobile devices, or maintain your privacy, are ultimately unimportant enough that they’re a reasonable price to pay.

But the science fiction writer gets to ask counterfactuals: how can we maintain our social lives or search the web without spying? What kinds of devices would let us communicate on the go without taking away our rights?

These questions need to be asked because the rapid technological changes in our lifetime have been accompanied by rapid economic changes. The rise of neoliberal capitalism – less regulated, more global, driven by the proposition that firms have a legal duty to maximize profit at the expense of all other considerations – coincided with the rise and rise of tech, which rode successive financial bubbles inflated by the money sloshing around the venture capital and private equity worlds, and fueled by the unimaginable riches of the capital markets and by money-launderers desperate to use any vehicle, including cryptocurrency, to extricate their funds from basket-case dictatorships or just from the local tax-collector.

The question of what postcapitalist tech, or noncapitalist tech, or even mixed-market tech would look like is up for grabs. It’s easy to see the self-serving logic of insisting that such a thing could not exist: if you’ve just raised millions with the hope of getting rich yourself, insisting that your technology’s apparent avarice is an unfortunate necessity, rather than a choice you’ve made, can buy you a modicum of respectability.

Technology is innate to the human condition, though. People have made and adopted and used technology since the time of early hominids, under every economic system. Wheels and pulleys, cars and rockets, pacemakers and vaccines, have all been manufactured under a variety of economic and social arrangements, and there is no credibility to the position that tech is neoliberalism’s handmaiden, no matter whether it’s advanced by the left or the right.

A critique of technology that focuses on its market conditions, rather than its code, yields up some interesting alternate narratives. It has become fashionable, for example, to say that advertising was the original sin of online publication. Once the norm emerged that creative work would be free and paid for through attention – that is, by showing ads – the wheels were set in motion, leading to clickbait, political polarization, and invasive, surveillant networks: “If you’re not paying for the product, you’re the product.”

But if we understand the contours of the advertising marketplace as being driven by market conditions, not “attention economics,” a different story emerges. Market conditions have driven incredible consolidation in every sector of the economy, meaning that fewer and fewer advertisers call the shots, and meaning that more and more of the money flows through fewer and fewer payment processors. Compound that with lax anti-trust enforcement, and you have companies that are poised to put pressure on publishers and control who sees which information.

In 2018, companies from John Deere to GM to Johnson & Johnson use digital locks and abusive license agreements to force you to submit to surveillance and control how you use their products. It’s true that if you don’t pay for the product, you’re the product – but if you’re a farmer who’s just shelled out $500,000 for a new tractor, you’re still the product.

The “original sin of advertising” story says that if only microtransactions had been technologically viable and commercially attractive, we could have had an attention-respecting, artist-compensating online world, but in a world of mass inequality, financializing culture and discourse means excluding huge swaths of the population from the modern public sphere. If the Supreme Court’s Citizens United decision has you convinced that money has had a corrupting influence on who gets to speak, imagine how corrupting the situation would be if you also had to pay to listen.

Viewing the internet age through an economic lens means giving up on tidy stories like “Craigslist killed newspapers and democracy died.” Instead, we have to tell complicated stories like, “Reagan deregulated business and defanged anti-trust, and so newspapers were snapped up by private equity funds that slashed their newsrooms, centralized their ad sales, and weakened their product. When Craigslist came along, these businesses had been looted of all the cash they could have used to figure out their digital futures, and they started to die.” The news business had weathered earlier shocks, from the telegraph to television to the Great Depression. One difference this time around was that it was operating in a deregulated environment without historical parallel, in which a single group of investors could own the majority of media in a market, or control whole swathes of the industry nationwide.

I’ve argued before that the reason we’re still talking about decades-old SF movies like The Matrix and The Terminator – the reason smart people keep issuing foolish warnings about our primitive AIs making great leaps and becoming our overlords – is that these AI-apocalypses resonate with our current corporate situation. Corporations – artificial persons under the law – are colony life-forms that use us like gut-flora, maneuvering us to help them thrive and reproduce, jettisoning us or crushing us if we cease to serve their needs.

There is a key difference between the actual gut-flora that’s filling your non-metaphorical intestine right now and the metaphorical gut-flora that we humans constitute in the bowels of the Fortune 100: gut-flora can’t be persuaded by moral argument, but people can.

The long-delayed techlash has an unfortunate tendency to scapegoat early tech pioneers who promoted the idea that technology could make our lives better. These people – people I was fortunate enough to grow up among – are said to have been blind to the potential of technology to harm our privacy, our discourse, and our human rights.

The reality is that these early “techno-utopians” were keenly aware of these risks. They founded organizations like the Electronic Frontier Foundation and the Free Software Foundation not because they were convinced that everything was going to be great, but because they were worried that everything could be terrible – and because they saw the potential for things to be better.

The motto of these pioneers wasn’t, “This is going to be so great.” It was, “This could be great – if we don’t screw it up.”

The people of tech – the people without whom Google and Facebook and Apple and Amazon couldn’t keep the lights on – are not gut flora. They’re human beings with agency and willpower, and they are subject to moral suasion. They are capable of building a technological future that gives us the things we love about our technology, without inflicting the harms of these systems upon us.

What’s more, they also labor under these harms. The humans behind the corporate veil are capable of compassion and solidarity; they can be reached and moved and enlisted.

It may sound improbable, but revolutions always rely on fifth columnists who change sides. In China Miéville’s October, a masterful, novelistic history of the Russian Revolution, he recounts how the Cossacks, the Czar’s most brutal shock-troops, took the revolutionaries’ side, refusing to mount cavalry charges on protesters and keeping their horses perfectly stationary while protesters openly crawled between the mounts’ legs, technically complying with their officers’ orders to “hold the street and do not move from your positions.”

Our technology can make our lives better, can give us more control, can give us more privacy – but only if we force it to live up to its promise. Any path to that better future will involve technologists, because no group of people on earth is better equipped to understand how important it is to get there.


Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.


This article and more like it in the March 2018 issue of Locus.
