"Are you optimistic or pessimistic about the future?" It's a question I get asked so often that I have a little canned response I can rattle off without thinking: "In order to be an activist, you have to be both: pessimistic enough to believe that things will get worse if left unchecked, optimistic enough to believe that if you take action, the worst can be prevented."
But there's more to it than that. I've been called a techno-utopian. I don't know about that, but I'll at least cop to "techno-optimist." Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.
To understand techno-optimism, it’s useful to look at the free software movement, whose ideology and activism gave rise to the GNU/Linux operating system, the Android mobile operating system, the Firefox and Chrome browsers, the BSD Unix that lives underneath Mac OS X, the Apache web-server and many other web- and e-mail-servers and innumerable other technologies. Free software is technology that is intended to be understood, modified, improved, and distributed by its users. There are many motivations for contributing to free/open software, but the movement’s roots are in this two-sided optimism/pessimism: pessimistic enough to believe that closed, proprietary technology will win the approval of users who don’t appreciate the dangers down the line (such as lock-in, loss of privacy, and losing work when proprietary technologies are orphaned); optimistic enough to believe that a core of programmers and users can both create polished alternatives and win over support for them by demonstrating their superiority and by helping people understand the risks of closed systems.
While some free software activists might dream of a world without proprietary technology, most pursue free software's ideology with a more practical goal. Like good technologists, they view proprietary technology as a bug, and bugs can't necessarily be eliminated. Since it's just not possible to squash every bug, programmers track, isolate, and minimize bugs instead. Take the Ubuntu operating system, a very popular flavor of GNU/Linux. The first bug in its bug-tracker is this:
"Bug Description: Microsoft has a majority market share in the new desktop PC marketplace. This is a bug, which Ubuntu is designed to fix.
"Non-free software is holding back innovation in the IT industry, restricting access to IT to a small part of the world's population and limiting the ability of software developers to reach their full potential, globally. This bug is widely evident in the PC industry."
This bug has been "open" (that is, still not satisfactorily resolved) since 2004, and I'd be surprised to see it closed in the near term. Nevertheless, each revision of Ubuntu has worked explicitly to minimize the harm arising from the bug, by providing an operating system that users can easily switch to from Microsoft's products, with similar keyboard shortcuts and built-in programs but none of the lock-in or restrictions. Ubuntu's Bug #1 will not be solved by a product, but by a process.
This programmerly mindset is the key to understanding the pessimism/optimism duality. As a techno-optimist, I was heartened to see the role that networked technologies played in aiding activists in Iran, Egypt, Libya, Bahrain and other Middle Eastern autocracies to coordinate with one another. But as a techno-pessimist, I was horrified to see activists making use of unsecured, unfit systems like Facebook, which make it trivial for authorities to snoop on and unpick the structure of activist organizations.
This isn't new. The convenience of privacy-unfriendly social-network technologies from Friendster to Facebook has made them tempting platforms for organizing activist causes. Those of us who care about the underlying tools used in causes have railed against their use all along, with only moderate success. But our Bug #1 is still open – activists, even technologically savvy ones who should know better, still reach for proprietary, unencrypted, non-private technology, citing the difficulty of using the alternatives.
They've got a point: right now, it's harder to organize a cause without using surveillance-friendly technology than it is to create another Facebook group. It falls to techno-optimists to do two things: first, to improve the alternatives; and second, to better articulate the risks of using unsuitable tools in hostile environments. There are high-risk contexts – repressive, bloodthirsty regimes – in which it is literally better to do nothing than to put activists at risk by using tools that make it easy for the secret police to do their awful work.
Herein lies the difference between a "technology activist" and an "activist who uses technology" – the former prioritizes tools that are safe for their users; the latter prioritizes tools that accomplish some activist goal. The trick for technology activists is to help activists who use technology to appreciate the hidden risks and help them find or make better tools. That is, to be pessimists and optimists: without expert collaboration, activists might put themselves at risk with poor technology choices; with collaboration, activists can use technology to outmaneuver autocrats, totalitarians, and thugs.
Autocrats’ use of technology against the Middle Eastern uprisings has been a wake-up call to a large group of technology activists and activists who use technology. As I write this, the net is alive with privacy-conscious activists building organizing tools that preserve anonymity, that fill the gap when governments pull the plug on the net, that prevent eavesdropping and fight disinformation. The best of these technologies will be open and free, such that flaws in their methodologies can be identified and repaired early through broad scrutiny. In the meantime, we techno-optimists will go on fighting against Bug #1, asking our colleagues to look past the immediate convenience of Facebook, and at the long-term risks of putting our freedom in the hands of private concerns who’ve never promised to preserve it, and whom we shouldn’t believe even if they do.
Cory Doctorow is the author of Walkaway, Little Brother and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.
From the May 2011 issue of Locus Magazine