Cory Doctorow: The Internet Will Always Suck
Technologist Anil Dash has a law. ‘‘Three things never work: Voice chat, printers, and projectors.’’ It’s funny because it’s true. We’ve all struggled with getting a printer to work; we’ve all watched a presenter and an AV tech sweat over a projector in a room full of awkwardly shifting audience members; we’ve all noted the perverse tendency of voice-over-IP calls to turn into slurred, flanged Dalek-speak just as the other person is getting to the point.
But why? For the same reason that the Internet will always suck: because we always use our vital technologies at the edge of their capabilities.
Take printers. My first printer was a teletype terminal, back in 1977. I had no screen; the printer was the only way that the computer I was using – a mainframe at a university, connected by a primitive modem called an acoustic coupler – could communicate with me. I’d type on a keyboard and the printer’s all upper-case daisy-wheel would slam itself against the roll of paper on the platen, through the thin, ink-saturated ribbon. Letters appeared, crisp and black and neat, on the paper, except when they didn’t – often the paper would jam, or the ribbon would fade, or some other electromechanical misfortune would manifest and all computing would halt.
Not long before this, teletype terminals were only to be found in computer labs, where skilled technicians would service and tune them. Moving them was a major undertaking. The portable teletype terminal was a major innovation, as it allowed computing to take place outside of the lab, from any location where an acoustic coupler could be mated with a telephone handset. The advantages of this were nothing short of spectacular: first for the scientific and commercial users of computers, who could do data-entry and lookup from remote locations, and then for people like me, a six-year-old kid in suburban Toronto, kindling a life-long love-affair with technology.
Portable teletypes were a dumb idea. The machines were balky, full of moving parts, and needed constant service. Locating a teletype far from its maintenance staff was, to say the least, very optimistic.
But teletypes improved, became more robust. The common mechanical and electrical failure points were replaced with sturdier components, and mass manufacturing drove prices down. As the price of teletypes plunged, the number of potential users for them grew, and the punishment they were expected to absorb also increased. Teletypes improved, but each improvement brought new demands, forming an equilibrium poised on the knife-edge of uselessness.
By 1979, we had a dot-matrix printer, another migrant from the lab and industry into the home. Balky, nearly useless, prone to jamming, but it could draw any shape you could create on the computer, not just the upper-case roman letters and a constrained number/punctuation set. It was just useful enough not to be totally useless. New generations of dot-matrix printers emerged and as they did, new applications appeared that pushed them right to their limits.
Years later, I found myself working in prepress network administration and systems integration. The clients I serviced had spent hundreds of thousands of dollars on systems that replaced systems that had recently cost millions. The outgoing technology had worked reliably – it had been in existence long enough to be perfected, and for a praxis of maintenance and operation to mature in a cohort of skilled technicians. The stuff I was installing mostly didn’t work – it was nearly, but not quite, useless. We were pushing minicomputers to do the work of mainframes, consumer laser printers to do the work of industrial behemoths, operating systems from startups to manage operations that were once run on big iron whose millions of lines of code had been hand-wrought by IBM’s greatest software minds.
Within a few years, minicomputing and PCs had taken over prepress, growing more reliable and mature, and the instant they did, the customers of prepress bureaux – ad agencies, design shops – started to bring those systems in house, doing the work for themselves, tended by semi-skilled technicians who knew even less than I had.
This is the path and destiny of technology: its users and applications are constrained by its cost and complexity. Cost and complexity are pushed relentlessly downward, and as they fall, there is always a new group of users whose purposes had always been too marginal, trivial, or weird to warrant the expense and difficulty of using the tech – until now.
This is more true of communications technology than any other. Printing, voice, and projectors are only important insofar as they allow individuals and groups to connect.
The Internet is the nervous system that ties all these things together, the network-of-networks, designed to allow anyone to talk to anyone, using any protocol, without permission from anyone else.
Every time the Internet gets cheaper, or more pervasive, or faster, the applications that it is expected to bear increase in intensity, precarity and importance. As with printers – as with every technology – users and businesses push each innovation to the brink of uselessness, not because they want useless technology, but because something is usually better than nothing.
Why do people use crappy VoIP connections? Because in a world where telephone carriers still treat ‘‘long distance’’ as a premium product to be charged for by the second, the alternative for many users is no connection at all. Why do users try to download giant media files over cellular network connections on moving trains? Because the alternative isn’t waiting until you get to the office – it’s blowing a deadline and tanking the whole project.
The corollary of this: whatever improvements are made to the network will be swallowed by our tolerance for instability as an alternative to nothing at all. When advocates of network quality-of-service guarantees talk about the need to give telesurgeons highly reliable connections to the robots conducting surgery on the other side of the world, the point they miss is that as soon as telesurgery is a possibility, there will be ‘‘special circumstances’’ that require telesurgeons to conduct operations even when the reliable reserved lines aren’t available. If a child is pulled from the rubble of a quake in some rich, mediagenic city and the only orthopedic surgeon who can save him is on the other side of the world, she will inevitably end up operating Dr Robot over whatever crappy network connection the rescue crews can jury-rig in the wreckage.
The corollary of this: always assume that your users are in a zone of patchy coverage, far from technical assistance, working with larger files than they should, under tighter deadlines than is sane, without a place to recharge their battery. Don’t make your users load three screens to approve a process, and if you do, make sure that if one of those screens times out and has to be reloaded, it doesn’t start the process over. Assume every download will fail and need to be recovered midstream. Assume their IP addresses will change midstream as they hunt for a wifi network with three bars.
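For the programmers in the audience, the ‘‘assume every download will fail’’ rule can be sketched in a few lines. This is a minimal illustration, not anyone’s production code: it assumes a server that honors HTTP Range requests, and the function name and URLs are made up for the example. If a transfer dies mid-stream, calling it again picks up from the bytes already on disk instead of starting over.

```python
# Minimal sketch of a resumable download using only the Python
# standard library. Assumes the server supports HTTP Range requests;
# resume_download is an illustrative name, not a real API.
import os
import urllib.request


def resume_download(url, dest):
    """Download url to dest, resuming from any partial file on disk."""
    # How many bytes did a previous, interrupted attempt leave behind?
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url)
    if offset:
        # Ask the server for only the bytes we are missing.
        req.add_header("Range", f"bytes={offset}-")
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the Range header.
        # Anything else means it sent the whole file, so start over.
        if resp.status != 206:
            offset = 0
        with open(dest, "r+b" if os.path.exists(dest) else "wb") as out:
            out.seek(offset)
            out.truncate()
            while True:
                chunk = resp.read(65536)  # small chunks survive flaky links
                if not chunk:
                    break
                out.write(chunk)
```

Each interrupted attempt leaves a partial file that the next attempt builds on – which is exactly the behavior a user on a moving train, hunting for three bars of signal, needs.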
Assume that the Internet will always suck – because that’s the way we prefer it.
Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.
From the November 2015 issue of Locus Magazine