Cory Doctorow: What’s Inside the Box

Computer science has long wrestled with the question of how anyone can know what a computer or the programs running on it are doing. The Halting Problem, described by Alan Turing in 1936, tells us that in the general case it's impossible to predict whether a program will even finish, short of running it and waiting. A related problem is knowing whether a program harbors defects or errors. This is Gödel's territory: his ‘‘incompleteness’’ results tell us that it's hard-to-impossible to prove that a program is free from bugs.
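
To make the undecidability point concrete, here is a minimal Python sketch of Turing's argument, added for illustration (the names are mine): if a perfect halts() oracle existed, we could write a program that defeats it, so no such oracle can exist. The oracle is deliberately left unimplemented, because in general it cannot be implemented.

    # Hypothetical oracle: halts(program, argument) would return True if
    # program(argument) eventually stops. Turing's diagonal argument shows
    # that no such general-purpose oracle can exist.

    def halts(program, argument):
        """Pretend oracle; impossible to implement for arbitrary programs."""
        raise NotImplementedError("no general halting oracle exists")

    def troublemaker(program):
        # Do the opposite of whatever the oracle predicts about a program
        # being run on itself.
        if halts(program, program):
            while True:      # oracle said "halts," so loop forever
                pass
        return "done"        # oracle said "loops forever," so halt at once

    # If halts() were real, halts(troublemaker, troublemaker) could be neither
    # True nor False without contradicting itself; that contradiction is the
    # proof that the general problem is undecidable.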

These sound like abstract problems, but they are vital to every computer user's life. Without knowing what a computer might do in a given circumstance, it's hard to say whether two programs will collide and cause unpredictable things to happen, like crashes, data loss, or data corruption. The inability to prove that programs are bug-free means that clever or lucky adversaries might find new bugs and use them to take over a computer's functionality, which is how we get spyware and all its related infections.

After a bunch of ineffective answers (read: anti-virus programs), the solution that nearly everyone has converged on is curation. Rather than demanding that users evaluate every program to make sure that it is neither incompetent nor malicious, we delegate our trust to certifying authorities who examine software and give the competent and beneficial programs their seal of approval.

But ‘‘curation’’ covers a broad spectrum of activities and practices. At one end, you have Ubuntu, a flavor of GNU/Linux that offers ‘‘repositories’’ of programs that are understood to be well-made and trustworthy. Ubuntu updates its repositories regularly with new versions of programs that correct defects as they are discovered. Theoretically, an Ubuntu repository could remove a program that has been found to be malicious, and while this hasn't happened to date, a recent controversy over the proposed removal of a program due to a patent claim confirmed the possibility. Ubuntu builds its repositories by drawing on still other repositories from other GNU/Linux flavors, making it a curator of other curators, and it doesn't pretend that it's the only game in town. A few seconds' easy work is all it takes to enable other repositories of software maintained by other curators, or to install software directly, without a repository. When you do this, Ubuntu warns you that you'd better be sure you can trust the people whose software you're installing, because they could assume total control over your system.
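
To illustrate what repository-style curation buys the user, here is a minimal Python sketch of the verification step, my own simplification rather than Ubuntu's actual tooling (real apt repositories also cryptographically sign their package indexes). The curator publishes known-good SHA-256 hashes, and the client refuses any download that doesn't match; the manifest entry below is a made-up example.

    import hashlib

    # Hypothetical curator-published manifest: package name -> SHA-256 of the
    # approved build. In a real repository this list would itself be signed.
    TRUSTED_MANIFEST = {
        "example-editor-1.0.deb":
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def verify_package(filename, data):
        """Return True only if the downloaded bytes match the curator's hash."""
        expected = TRUSTED_MANIFEST.get(filename)
        if expected is None:
            return False                    # unknown package: not curated
        return hashlib.sha256(data).hexdigest() == expected

    # A tampered or unknown download is rejected before installation.
    print(verify_package("example-editor-1.0.deb", b"not the approved build"))  # False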

At the other end of the scale, you have devices whose manufacturers have a commercial stake in curation, such as Apple and Nintendo. Apple gets 30% of all sales from its App Store, and then a further 30% of the transactions taking place within those apps. Naturally, Apple goes to great lengths to ensure that you can't install ‘‘third party’’ apps on your device and deprive Apple of its cut. They use cryptographic signatures to endorse official apps, and design their devices to reject unsigned apps or apps with a non-matching signature. Nintendo has a similar business model: they charge money for the signature that informs a 3DS that a given program is authorized to run. Nintendo goes further than Apple with the 3DS, which automatically fetches and installs OS updates, and if it detects any tampering with the existing OS, it permanently disables the device.
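
Here is the signing mechanism in the abstract: a minimal Python sketch using the third-party cryptography library, illustrating the general technique rather than Apple's or Nintendo's actual schemes. The device ships with only the vendor's public key and refuses to install anything the vendor's private key didn't sign, and that same check is what locks out ‘‘third party’’ software.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The vendor keeps the private key; the device ships with the public key only.
    vendor_private_key = Ed25519PrivateKey.generate()
    DEVICE_TRUSTED_KEY = vendor_private_key.public_key()

    def vendor_sign(app_bytes):
        """What the curator does: endorse an approved app."""
        return vendor_private_key.sign(app_bytes)

    def device_will_install(app_bytes, signature):
        """What the device does: refuse anything without a matching endorsement."""
        try:
            DEVICE_TRUSTED_KEY.verify(signature, app_bytes)
            return True
        except InvalidSignature:
            return False

    approved_app = b"official app binary"
    sig = vendor_sign(approved_app)
    print(device_will_install(approved_app, sig))               # True
    print(device_will_install(b"third-party app binary", sig))  # False: not endorsed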

Both Apple and Nintendo argue that their curatorial process also intercepts low-quality and malicious software, preventing it from entering the platform. I don’t doubt their sincerity. But their curatorial model treats owners as adversaries and executes countermeasures to prevent people from knowing what the programs on their devices are doing. It has to, because a user who can fully inspect the operating system and its processes, and terminate or modify processes at will, can subvert the curation and undermine the manufacturer’s profits by choosing to buy software elsewhere.

This means that when bad things do slip through the curatorial process, the owner of the device has a harder time discovering it. A recent example of this was the scandal of CarrierIQ, a company that makes spyware for mobile phones. CarrierIQ’s software can capture information from the phone’s GPS, look at its files, inspect your text messages and e-mails, and watch you enter your passwords. Carriers had been knowingly and covertly installing this software, shipping an estimated 150,000,000 infected handsets.

The carriers are cagey about why they did this. CarrierIQ characterizes its software as a quality assurance tool that helped carriers gather aggregate statistics useful for planning and tuning their infrastructure. Some industry insiders speculate that CarrierIQ was also used to prevent ‘‘tethering’’ (using your mobile phone as a data modem for your laptop or other device). CarrierIQ ran covertly on both iPhones and Android phones, but CarrierIQ was discovered on Android first.

Android is closer to Ubuntu in its curatorial model than it is to Apple's iOS. Out of the box, Android devices trust software from Google's Marketplace, but a checkbox lets users choose to trust other marketplaces, too (Amazon operates an alternative Android marketplace). As a result, Android's design doesn't include the same anti-user countermeasures that are needed to defend a kind of curation that treats users adversarially. Trevor Eckhart, the researcher who discovered CarrierIQ in the wild, was able to hunt for suspicious activity in Android because Android doesn't depend on keeping its workings a secret. A widely distributed, free, and diverse set of tools exists for inspecting and modifying Android devices. Armed with the knowledge of CarrierIQ's existence, researchers were able to go on to discover it on Apple's devices, too.

It all comes down to whom users can trust, what they trust them to do, and how they verify the ongoing trustworthiness of those parties. You might trust Apple to ensure high quality, but do you trust them to police carriers to prevent the insertion of CarrierIQ before the phone gets to you? You might trust Nintendo to make a great 3DS, but do you trust them to ensure the integrity of their updating mechanism? After all, a system that updates itself without the user's permission is a particularly tasty target: if you can trick the device into thinking it has received an official update, it will install it, and nothing the user does can prevent that.

Let's get a sense of what trust means in these contexts. In 2010, a scandal erupted in Lower Merion, PA, an affluent Philadelphia suburb. The Lower Merion School District had instituted a mandatory laptop program that required students to use school-supplied laptops for all their schoolwork. These computers came loaded with secret software that could covertly operate the laptops' cameras (when the software ran, the cameras' green activity lights stayed dark), capture screengrabs, and read the files on the computers' drives. The software was nominally in place to track down stolen laptops.

But a student discovered the software's existence when he was called into his principal's office and accused of taking narcotics. He denied the charge and was presented with a photo taken the night before in his bedroom, showing him taking what appeared to be pills. The student explained that these were Mike and Ike candies, and asked how the hell the principal had come to have a picture of him in his bedroom at night. It emerged that this student had been photographed thousands of times by his laptop's camera without his knowledge: awake and asleep, clothed and undressed.

Lower Merion was just the first salvo. Today, a thriving ‘‘lawful interception’’ industry manufactures ‘‘network appliances’’ for subverting devices for the purposes of government and law-enforcement surveillance. In 2011, WikiLeaks activist Jacob Appelbaum attended the ‘‘wiretappers' ball,’’ a lawful interception trade show in Washington, DC. He left with an armload of product literature from unwitting vendors, and a disturbing account of how lawful interception had become a hotbed of techniques for subverting users' devices to spy on them. For example, one program sent fake (but seemingly authentic) iTunes updates to target computers, then took over the PCs and gave the program's operators control of their cameras, microphones, keyboards, screens, and drives. Other versions disguised themselves as updates to Android or iPhone apps. The mobile versions could also access GPS data, record SMS conversations, and listen in on phone calls.

Around the same time, a scandal broke out in Germany over the ‘‘Staatstrojaner’’ (‘‘state-Trojan’’), a family of lawful interception programs discovered on citizens’ computers, placed there by German government agencies. These programs gathered enormous amounts of data from the computers they infected, and they were themselves very insecure. Technology activists in the Chaos Computer Club showed that it was trivial to assume control over Staatstrojaner-infected machines. Once the government infected you, they left you vulnerable to additional opportunistic infections from criminals and freelance snoops.

This is an important point: a facility that relies on a vulnerability in your computer is equally available to law enforcement and to criminals. A computer that is easy to wiretap is easy for everyone to wiretap, not just cops with wiretapping warrants (assuming you live in a country that still requires warrants for wiretapping).

2011 was also the year that widespread attention came to UEFI, the Unified Extensible Firmware Interface. UEFI is firmware that will be included in future PCs, and it can check an operating system for a valid curatorial signature before the OS is allowed to run. In theory, this can be used to ensure that only uninfected operating systems run (an infected system will no longer match its signature).
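
A minimal Python sketch of that boot-time check, assuming a simplified model (the real UEFI ‘‘Secure Boot’’ mechanism uses certificates and a different signature format): the firmware keeps a list of trusted curator keys and hands control only to an operating-system image whose signature one of those keys verifies. Whoever gets to edit that trust list holds the power the rest of this column is about.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Illustrative only: two hypothetical OS curators, each with a signing key.
    curator_a = Ed25519PrivateKey.generate()
    curator_b = Ed25519PrivateKey.generate()

    # The firmware's trust list. The policy question is who may edit it:
    # the user, the manufacturer, or a government.
    firmware_trusted_keys = [curator_a.public_key(), curator_b.public_key()]

    def firmware_will_boot(os_image, signature):
        """Boot only if a key on the firmware's trust list signed this image."""
        for key in firmware_trusted_keys:
            try:
                key.verify(signature, os_image)
                return True
            except InvalidSignature:
                continue
        return False

    image = b"pristine kernel image"
    print(firmware_will_boot(image, curator_a.sign(image)))                     # True
    print(firmware_will_boot(b"infected kernel image", curator_a.sign(image)))  # False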

UEFI has additional potential, though. Depending on whether users are allowed to override UEFI's judgment, it could also be used to prevent users from running operating systems that they trust more than the signed ones. For example, you might live in Iran, where the police could use UEFI to ensure that only versions of Windows with a built-in wiretapping facility can run on officially imported PCs. Why not? If Germany's government can justify installing Staatstrojaners on suspects' PCs, would Iran really hesitate to ensure that it could conduct Staatstrojaner-grade surveillance on anyone, without the inconvenience of installing the Staatstrojaner in the first place? German Staatstrojaner investigators believe the software was installed during customs inspections, while computers were out of their owners' sight. Much easier to just ban the sale of computers that don't allow surveillance out of the box.

At the same time, a UEFI-like facility in computers would be a tremendous boon to anyone who wants to stay safe. If you get to decide which signatures your UEFI trusts, then you can, at least, know that your Ubuntu computer is running software that matches the signatures from the official Ubuntu repositories, that your iTunes update was a real iTunes update, and that your Android phone is installing a real security patch. Provided you trust the vendors not to give in to law enforcement pressure (the Iran problem) or lose control of their own updating process (the CarrierIQ problem), you can be sure that you're only getting software from curators you trust.

As an aside, other mechanisms might be used to detect subversion, loss of control, or pressure. You might use a verification process that periodically checks whether your software matches the software that your friends and random strangers are also running, which raises the bar on subversion: now a government or crook has to get a majority of the people you compare notes with to install subverted software, too.
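
A toy Python sketch of that kind of cross-checking, assuming a simple minority test (no deployed system is implied): fingerprint the software you're actually running, compare it with the fingerprints your friends and some strangers report, and raise an alarm if you find yourself in a small minority.

    import hashlib
    from collections import Counter

    def fingerprint(binary):
        """A short fingerprint of the software you're actually running."""
        return hashlib.sha256(binary).hexdigest()

    def looks_subverted(my_binary, peer_fingerprints, minority_threshold=0.2):
        """Flag my copy if only a small minority of peers report the same build."""
        mine = fingerprint(my_binary)
        everyone = peer_fingerprints + [mine]
        share = Counter(everyone)[mine] / len(everyone)
        return share < minority_threshold

    # An attacker now has to subvert most of my comparison set, not just me.
    peers = [fingerprint(b"official 1.4.2 build")] * 9
    print(looks_subverted(b"official 1.4.2 build", peers))   # False
    print(looks_subverted(b"tampered 1.4.2 build", peers))   # True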

All this is pretty noncontroversial in security circles. Some may quibble about whether Apple or Nintendo would ever be subverted, but no one would say that no company will ever be subverted. So if we’re to use curation as part of a security strategy, it’s important to consider a corrupt vendor as a potential threat.

The answer that most of the experts I speak to come up with is this:

The owner (or user) of a device should be able to know (or control) which software is running on her devices.

This is really four answers, and I’ll go over them in turn, using three different scenarios: a computer in an Internet cafe, a car, and a cochlear implant. That is, a computer you sit in front of, a computer you put your body into, and a computer you put in your body.

1. Users know.

The user of a device should be able to know which software is running on her devices.

You can’t choose which programs are running on your device, but every device is designed to faithfully report which programs are running on it.

If you walk into an Internet cafe and sit down at a computer, you can’t install a keylogger on it (to capture the passwords of the next customer), but you can also trust that there are no keyloggers in place on the computer.

If you get behind the wheel of a car, you can’t install a hands-free phone app, but you also know whether its GPS is sending your location to law enforcement or carjackers.

If you have a cochlear implant, you can’t buy a competitor’s superior, patented signal processing software (so you’ll have inferior hearing forever, unless you let a surgeon cut you open again and put in a different implant). But you know that griefers or spies aren’t running covert apps that can literally put voices in your head or eavesdrop on every sound that reaches your ear.

2. Owners know.

The owner of a device should be able to know which software is running on her devices.

The owner of an Internet cafe can install keyloggers on his computers, but users can’t. Law enforcement can order that surveillance software be installed on Internet cafe computers, but users can’t tell if it’s running. If you trust that the manufacturer would never authorize wiretapping software (even if tricked by criminals or strong-armed by governments), then you know that the PC is surveillance-free.

Car leasing agencies or companies that provide company cars can secretly listen in on drivers, secretly record GPS data, and remotely shut down cars. Police (or criminals who can take control of police facilities) can turn off your engine while you’re driving. Again, if you trust that your car’s manufacturer can’t be coerced or tricked into making technology that betrays drivers to the advantage of owners, you’re safe from these attacks.

If you lease your cochlear implant, or if you’re a minor or legally incompetent, your guardians or the leasing company can interfere with your hearing, making you hear things that aren’t there or switching off your hearing in some or all instances. Legislative proposals (like the ones that emanated from the Motion Picture Association of America’s Analog Reconversion Working Group) might have the side-effect of causing you to go deaf in the presence of copyrighted music or movie soundtracks.

3. Owners control.

The owner of a device should be able to control which software is running on her devices.

The owner of an Internet cafe can spy on her customers, even if the original PC vendor doesn’t make an app for this, and can be forced to do so by law enforcement. Users can’t install keyloggers.

Car leasing agencies and companies with company cars can spy on and disable cars, even if the manufacturers don’t support this. Drivers can’t know what locks, spyware, and kill-switches are in the cars they drive.

You can choose to run competitors' software on your cochlear implant, no matter who made it, without additional surgery. If you're a minor, your parents can still put voices in your head. If you don't trust your cochlear implant vendor to resist law enforcement or keep out criminals, you can choose one you do trust.

4. Users control.

The user of a device should be able to control which software is running on her devices.

An Internet cafe’s owner can’t spy on his customers, and customers can’t spy on each other.

Car leasing agencies and employers can't spy on drivers. Drivers don't have to trust that manufacturers will resist governments or crooks. Governments can't rely on the car's locked-down software to enforce emissions rules, braking characteristics, or speed governors.

Everyone gets to choose which vendors supply software for their implants. Kids can override parents’ choices about what they can and can’t hear (and so can crazy people or people who lease their implants).

 

Of these four scenarios, I favor the last one: ‘‘The user of a device should be able to control which software is running on her devices.’’ It presents some regulatory challenges, but I suspect that these will be present no matter what, because crooks and tinkerers will always subvert locks on their devices. That is, if you believe that your self-driving cars are safe because no one will ever figure out how to install crashware on them, you're going to have problems. Security that tries to control what people do with technology while it is in their possession, and that fails catastrophically if even a few people break it, is doomed to fail.

However, the majority of experts favor number three: ‘‘The owner of a device should be able to control which software is running on her devices.’’ I think the explanation for this is the totalizing nature of property rights in our society. At a fundamental level, we have absorbed William Blackstone's notion of property as ‘‘that sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.’’

Especially when it comes to corporate equipment. It's a hard-to-impossible sell to convince people that employees, not employers, should be able to decide what software runs on their computers and other devices. In their hearts, most employers are Taylorists, devotees of Frederick Winslow Taylor's ‘‘Scientific Management,’’ which holds that a corporation's constituent human parts should have their jobs defined by experts who determine the one empirically best way to do something, and that the employees should then just do it that way.

There’s a lot of lip-service paid to the idea that businesses should ‘‘hire smart people and let them solve problems intelligently.’’ Silicon Valley offices are famed for their commitment to freewheeling, autonomy-friendly workplaces, but IT departments have always been hives of Taylorism.

The mainframe era was dominated by IT workers who got to choose exactly which screens employees were allowed to see. If your boss thought you should only be able to compare column x against column y, that was it. Employees who literally smuggled in Apple ][+s running VisiCalc to give themselves the freedom to compute in other ways were summarily fired.

Until they weren't. Until the PC guerrillas were recognized by their employers as having personal expertise that was being squandered through overtight strictures on how they did their jobs: the IT equivalent of the auto plants that adopted Japanese-style management, where teams were told to get the job done and then left alone.

But guerrillas have a way of winning and becoming the secret police. The PC-dominated workplace has a new IT priesthood that yearns to use UEFI as the ultimate weapon in their war against fellow employees who think they can do their jobs better with software of their own choosing.

But scenario four (users control) offers a different, more flexible, more nuanced approach to solving problems collaboratively with employees. A world in which employees can choose their own tools, but can know (within the limits of computer science) exactly what those tools are doing, is one where users can be told what they must not do (‘‘don't run a program that opens holes in the corporate firewall or gets access to your whole drive’’) and where they can know whether they're doing as they've been told.
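
As a sketch of what ‘‘knowing whether you're doing as you've been told’’ might look like, here is a short Python example using the third-party psutil library; the forbidden names are hypothetical and this is an illustration, not an existing corporate tool. The point is that the employee can run the audit herself, against a policy she can read, instead of having the machine silently locked down.

    import psutil  # third-party: pip install psutil

    # A human-readable policy the employee can inspect. These names are
    # hypothetical examples, not a real blocklist.
    FORBIDDEN_PROCESS_NAMES = {"tunnel-everything", "grab-whole-drive"}

    def audit_my_own_machine():
        """List any running process that the stated policy says I must not run."""
        violations = []
        for proc in psutil.process_iter(["pid", "name"]):
            name = (proc.info.get("name") or "").lower()
            if name in FORBIDDEN_PROCESS_NAMES:
                violations.append((proc.info["pid"], name))
        return violations

    if __name__ == "__main__":
        hits = audit_my_own_machine()
        if hits:
            print("Policy violations:", hits)
        else:
            print("Nothing on the forbidden list is running.")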

That only works if you trust your employees, which is a bar that many corporate cultures find hard to clear. Some dark part of every corporate id yearns to reduce every process to a pictographic McDonald's cash register that can be operated by interchangeable commodity laborers who can be moved around, fired, and hired with impunity. The mature and humane face of management will always tell you that ‘‘human resources are our greatest asset,’’ but it's an imperfect veneer over the overwhelming urge to control.


Cory Doctorow is the author of Walkaway, Little Brother and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.


From the March 2012 issue of Locus Magazine

26 thoughts on “Cory Doctorow: What’s Inside the Box”

  • March 2, 2012 at 4:03 pm

    Where’s the +1 button? I want to +1 this post!!

  • March 3, 2012 at 1:55 am

    Interesting ideas. How would your option 4 look if I were to have systems that ran multiple VMs concurrently, with each VM rigged to the relevant “spy” agency? This does presuppose that the host environment, whether started via UEFI or not, is under my control entirely. And taking that thought to its conclusion, whether VMs are run or not, if I have control over the base system, it does not matter what I agree (or am forced) to run in the VM.

  • March 3, 2012 at 1:58 am

    I like number 4 as well, but the biggest challenge I see to implementing it is identified when you say “where they can know if they're doing what they're told”. This goes right back to your original points on “hard” computing problems: the user still needs to know (and now I'm thinking of non-specialists as users) whether his use of the computer (i.e. the programs he invokes) conforms to the guidelines he was given.

    I think that implementing #4 needs to be viewed as a challenge for the computer science community. The “hard” computer-science problems need to be resolved to the point that the computer can give the user a description (in human terms) of what will happen when it is used in a particular way.

  • March 3, 2012 at 3:04 am

    You're developing some interesting ideas here, Cory, but your fourfold analysis of the three examples needs more thought. Are you describing different possible factual states for computers in society (the result of technical decisions) or making normative observations about social practices in relation to computers (whom do we trust and whom should we trust, who controls and who should control)?

  • March 3, 2012 at 4:27 am

    I believe the CCC used the name «Bundestrojaner» (federal Trojan) rather than the «Staatstrojaner» mentioned in this article.

    Otherwise I’ve been looking for an article such as this for a long time now. Superb work.

  • March 3, 2012 at 2:49 pm

    Very good article.
    I do see an issue with number four, however. In many cases, it is absolutely critical that owners (rather than users) have the ultimate authority over the device. For example, in your leased-car scenario, many car dealerships and rental companies retain the ability to track cars with GPS, because people steal leased and rental cars all the time. If we give the “user” (even if it's someone with very little time or money tied up in the computer) ultimate authority, they could cause problems for the actual owner (like disabling the GPS and stealing the car).

    If the person who leased the car decides to purchase it, stealing is obviously no longer an issue, and that person should then be given the ultimate authority over that device.

    As for the cochlear implant: what kind of idiot would lease a subdermal implant? If a computer goes inside your body, you'd better be damned sure you own it (and have control over it).

  • March 3, 2012 at 11:59 pm

    Great article. I don’t have any thoughts on which of the four options I prefer, but I couldn’t stop thinking about a talk I heard from Karen Sandler at linux.conf.au 2012. She discussed the legal and ethical problems of not knowing what software is running on a defibrillator which is implanted in her body and connected to her heart. She is very open about the whole thing, and I strongly encourage anybody interested in this post to look up and watch her talk.

  • March 4, 2012 at 4:10 am

    @Will,
    … but vehicles were leased for many years before GPS tracking was commonly available, so the leasing business model clearly must have a process that reduces the likelihood of theft regardless of the presence or absence of a GPS tracker. Perhaps the use of a GPS tracking system can increase the profitability of the business (e.g. reduced insurance rates for the lessor), but that could easily be addressed by simply increasing what I pay should I choose to disable the GPS.

    One of the reasons I like number 4 is because it seems to return ethical behaviour to a decision made by human beings as opposed to something imposed by a machine. I think society (at least the North American one I live in) has been going down the “slippery slope” of using technology to try and make unethical actions impossible. There still need to be consequences for misbehaviour, but I’m not sure the ever-vigilant “big brother” approach is the right one. [I know it bothers me 🙂 ]

    But, as I said in my earlier comment, in order for the human to make the “ethical” decision she needs to be informed of the consequences of her actions. That is the difficult problem … when even the creators of the devices and programs that we use cannot predict all of the consequences of their use, how can we expect a non-expert user to make an informed choice? I think we default to #3 because we are unable (or unwilling?) to make computing transparent enough that the typical user can predict with some confidence the results of the choices they make in its use.

    be seeing you … Don

  • March 5, 2012 at 11:01 am

    I do not see option 4 as practicable. The best option is a combination of 3 and 1; the owner has ultimate control but the user is to be informed.

  • March 6, 2012 at 12:19 pm

    Excellent work, as always, Cory!

    After 25+ years working in the IT industry, I feel pretty comfortable saying that no matter what processes or programs are instituted by any authority, there will be someone who finds a way to circumvent them. I believe we will continue to see more of the same with regard to the issues of systems control. There will always be workarounds and hacks available to those who seek them out. The collective brain trust that is the Internet will always outweigh that of any one company or organization. As in the past, the ‘priests’ will have access to the arcane knowledge to get around a particular obstacle and the common user will be stuck with what they are given.

    My biggest concerns are the code being written and released by government agencies that we, as citizens, have no idea exists, and the re-purposing of this code by griefers (cool word, Cory). This was made clear in a recent 60 Minutes television program about code used to attack the Iranian nuclear effort. It apparently worked well, but now it is in the wild and can be used to manipulate any system that uses a particular Siemens microcontroller. Look out, power plants, sewage treatment facilities and… you get the idea!

  • March 7, 2012 at 9:11 am

    Cory, I applaud the use of some of Computer Science’s core theories (Halting Problem and Incompleteness) to set the context of your discussion; I can’t help but nit-pick though.

    The halting problem says that there is no general procedure to determine, given a computer program P and an input I, whether P run on I will eventually terminate. The incompleteness theorems say that there are true mathematical statements that cannot be proven within a given formal system.

    It actually *is* possible to prove that a computer program will produce the correct output, although this is infeasible in all but the most trivial cases.
