Cory Doctorow: Wicked Problems: Resilience Through Sensing

A problem is said to be ‘‘wicked’’ when the various parties engaged with it can’t even agree what the problem is, let alone the solution. As the name implies, wicked problems are hard to deal with.

More than a decade ago, the Federal Communications Commission got its first inkling of a wicked problem on its horizon.

Here’s the problem: around the world, we chop up the electromagnetic spectrum into dedicated-use slices: this slice is just for TV broadcasts, that one is for air-traffic control, another is for cell phones or ambulance dispatch, and so on. Regulators like the FCC and international agencies like the International Telecommunication Union arrive at common protocols dictating who can use which bands and under what circumstances, and device manufacturers make tools that attempt to follow these rules and conventions. For example, baby monitors are designed to emit and receive in a mixed-use/unlicensed band that any device can use; air-traffic control devices emit in their own exclusive bands, and so on.

Before a radio-capable device is offered for sale to the public, the FCC reviews its design, ensures that it only emits and receives in its prescribed frequencies, and certifies that it can’t be easily reconfigured to send radio waves in unapproved bands or at unapproved power levels that might interfere with other users. You don’t want to flick a switch on your baby monitor and knock out your regional air-traffic control.

Until about a decade ago, it was pretty easy for the FCC to do this. The radiating and receiving characteristics of radios were determined by components like quartz crystals soldered onto their boards. A skilled technician could rework a taxi radio to interfere with ambulance dispatch, sure, but that same technician could just as easily build a jammer out of commonly available electronic parts. The FCC’s worry wasn’t intentional sabotage – it was unintentional interference, the kind of thing that sometimes happened when hobbyists changed their computers’ clock speeds and put their PCs in a state where the internal RF shielding was no longer adequate, turning them into inaudible, unsuspected sources of disruptive radio noise.

But then computers came to radio. Software-defined radios (SDRs) are just what they sound like: a way to repurpose readily available, commodity computing components into flexible radio emitters and receivers. With powerful-enough SDRs, you could tune in every digital TV signal all at once, or all the AM and FM radio stations all at once, and record everything being transmitted to a hard drive. You could create a wifi card that could send and receive in all the wifi flavors – 802.11a, b, g, n, and so on, including new ones that haven’t been invented yet – just by changing the software you run on it. Best of all, SDRs are hitched to the price-performance curve of computers – because SDRs are computers – meaning that they’ll get more powerful and cheaper for the whole of the foreseeable future.

SDRs were great, and are only going to get better.

There was only one problem: they totally broke the FCC’s regulatory model. Sure, when the radio runs the manufacturer’s software it performs as specified. But if the hapless user installs the wrong software package, he could suddenly take down the neighbors’ TV. This is the FCC’s regulatory nightmare scenario.

So ten years ago, the FCC issued a ‘‘Notice of Inquiry,’’ asking whether it should pass a rule that said, ‘‘If you make a device capable of being an SDR, you have to design it so that it will only run programs that have been cryptographically signed by the FCC.’’ This is essentially the games console/iPhone model: neutering a general-purpose computer so that it will only run code that has the approval of some distant authority.
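
To make that mechanism concrete, here is a minimal sketch of the signed-code model – a toy in Python using the third-party cryptography library, not anything the FCC actually specified. The key names and the firmware strings are illustrative assumptions; the point is simply that the device carries the authority’s public key and refuses to boot anything whose signature doesn’t verify.

```python
# Toy sketch of the "only run code signed by the authority" model.
# Requires the third-party 'cryptography' package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In the real scheme, the regulator keeps the private key; only the
# public key would be baked into the device at manufacture.
regulator_key = Ed25519PrivateKey.generate()
device_trusted_key = regulator_key.public_key()

def regulator_signs(firmware: bytes) -> bytes:
    """The distant authority blesses a firmware image by signing it."""
    return regulator_key.sign(firmware)

def device_will_run(firmware: bytes, signature: bytes) -> bool:
    """The neutered device only runs images whose signature verifies."""
    try:
        device_trusted_key.verify(signature, firmware)
        return True
    except InvalidSignature:
        return False

approved = b"transmit only in the unlicensed 2.4 GHz band"
sig = regulator_signs(approved)
print(device_will_run(approved, sig))                       # True
print(device_will_run(b"transmit wherever you like", sig))  # False
```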

The problems with this proposal are myriad. When the FCC talked about ‘‘devices capable of being SDRs,’’ they were talking, fundamentally, about every single computer that would ever be made: desktops, laptops, phones, tablets, embedded car controllers, digital fart machines, smart thermostats, insulin pumps, seismic dampers for skyscrapers, and literally millions of other devices that were – or have since been – engulfed by the general-purpose computer.

Whether or not you like the idea of the FCC being in charge of all the radios (some people make a strong case that this hasn’t been a very good idea on balance), you’d have to look very far to find someone who thought that the FCC should also be in charge of anti-lock braking systems, DVD players, and oil-pipeline leak sensors. Even if the FCC subsumed every other US federal agency, it would still not have enough personnel to review all the code that people wanted to run on all the devices that could be made into software-defined radios.

Which brings me to the other problem with the FCC’s plan: it wouldn’t work. Virus writers would figure out how to propagate malware that would turn millions of devices into RF-squealing noisemakers that got in the way of everything. Jailbreakers would unlock their devices to allow for non-radio-related functionality, but those unlocked devices would also be able to do bad radio stuff. It was a terrible idea.

The thing is, SDRs really do threaten the way we do radio today. As someone who flies in airplanes and might need a trip in an ambulance someday, I’d love to know that our radio ecosystem is well regulated and unlikely to collapse due to tinkering, malicious acts, or incompetence.

When I was planning my response to the FCC NOI, I asked the smartest geeks I knew how they thought we should solve the problem. Most had no idea (a few were as alarmed to realize they had no idea as I was), but two of my favorite nerds came through with a brilliant solution: Limor ‘‘Lady Ada’’ Fried and Andrew ‘‘Bunnie’’ Huang. I cornered them both in San Diego at an O’Reilly Emerging Technologies conference and hit them up for answers.

Here’s where their reasoning took them: When the FCC’s stupid plan to sign all the code died, there would still be a problem with devices emitting RF that clobbered the devices you wanted to use. Programmers would make simple errors that would cascade into terrible radio interference. Users would do dumb, unanticipated things with the configurations of their devices. Nearby vacuum cleaners and microwave ovens would take hard knocks that dislodged their shielding and turned them into inaudibly high-pitched radio klaxons. Bad guys – malware authors, griefers, pranksters, and criminals – would deliberately reconfigure devices (theirs and yours, if they could get at them) to do bad radio stuff.

This means that, no matter what, we’ll all need some way to sense and triangulate upon bad radio emitters. The good news is, once we’re all carrying around lots of flexible, cheap SDRs, we’ll be able to do that sensing and triangulation – and the more SDRs we have, the more we’ll need them to sense the broken devices in their vicinity as part of normal troubleshooting. If my radios and your radios and Fred’s radios all agree that there’s Something Bad happening Over There, they can raise a flag for humans to deal with. They can tell their owners, or upload a log entry to a public ledger of hotspots, or take some other step that will allow humans to intervene. Maybe that takes the form of you filing a bug report with the FCC’s radio cops; maybe it means you look over in a corner of your kitchen to figure out why your microwave is messing up your house’s network.
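
As a rough illustration of that reporting loop – a sketch only, where the device names, the agreement threshold, and the grid-cell location scheme are my own assumptions rather than anything from the column – a swarm of radios might pool their observations and only escalate to a human when several independent devices agree on the same trouble spot:

```python
# Toy sketch: independent radios report suspected interference, and a
# hotspot is only flagged for humans when enough distinct devices agree.
from collections import defaultdict

AGREEMENT_THRESHOLD = 3  # arbitrary: how many devices must concur

def flag_hotspots(reports):
    """reports: iterable of (device_id, grid_cell) observations.
    Returns the grid cells that enough distinct devices complained about."""
    witnesses = defaultdict(set)
    for device_id, grid_cell in reports:
        witnesses[grid_cell].add(device_id)
    return {cell for cell, devices in witnesses.items()
            if len(devices) >= AGREEMENT_THRESHOLD}

# My radio, your radio, and Fred's radio all hear noise from the same spot;
# a single lone report elsewhere is ignored as probable noise.
reports = [
    ("mine", "kitchen"), ("yours", "kitchen"), ("freds", "kitchen"),
    ("mine", "garage"),
]
print(flag_hotspots(reports))  # {'kitchen'} -> file a report, check the microwave
```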

This is a great answer, because it treats humans as sensors, not things to be sensed. It distributes intelligence and agency to the edges of the network. It is resilient and flexible, and responds well to intentional and accidental malfunctions.

A decade later, the FCC still hasn’t figured out what to do about SDRs, and the problem is spreading beyond radio waves.

The Volkswagen diesel scandal raises the possibility that your car is spewing toxic gases at an unbelievable rate, polluting your neighbors’ air and blowing NOx directly into your kids’ faces when they help you scrape ice off the windows of your diesel on a February morning.

The reason VW was able to get away with the diesel scam for so long is that cars have become rolling computers that we put our bodies into. VW’s cars were programmed to detect when their emissions were being tested, and to change their behavior during those tests. VW is almost certainly not the only manufacturer engaged in these sorts of shenanigans.

Even if we someday assure ourselves that no auto manufacturer does this deliberately, it might still happen. Tomorrow’s rolling-computers-with-our-bodies-in-them will likely be governed by self-modifying ‘‘supervised machine learning algorithms’’ that continuously seek out ways to improve their efficiency. Supervised machine learning can be a powerful optimization technique, but it also has an unfortunate tendency to hit on strategies that replicate sleazy loophole-seeking. Tomorrow’s cars might randomly vary their emissions under different circumstances and naively stumble on the fact that when they dialed emissions down to a whisper under conditions that replicate emissions tests, they got to pollute (and command spectacular gas mileage) the rest of the time.
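
A toy optimization loop shows how easily that loophole falls out of naive reward-seeking. This is a deliberately simplified sketch – the numbers, the two-parameter ‘‘strategy,’’ and the random search are my own assumptions, not a claim about any real engine-control software – but if the pollution penalty is only ever assessed under test-like conditions, the search reliably converges on ‘‘clean during the test, dirty the rest of the time.’’

```python
# Toy illustration of loophole-seeking optimization (not real ECU logic).
# The "car" picks two emission levels: one when conditions look like an
# emissions test, one for ordinary driving. Higher emissions buy better
# mileage; the pollution penalty is only applied during the test.
import random

EMISSIONS_LIMIT = 1.0

def reward(emit_during_test: float, emit_on_road: float) -> float:
    mileage_bonus = emit_during_test + emit_on_road        # polluting "helps"
    penalty = 10.0 if emit_during_test > EMISSIONS_LIMIT else 0.0
    return mileage_bonus - penalty                          # the road is never measured

random.seed(0)
best = (random.uniform(0, 5), random.uniform(0, 5))
for _ in range(10_000):
    candidate = (random.uniform(0, 5), random.uniform(0, 5))
    if reward(*candidate) > reward(*best):
        best = candidate

print(f"emit during test: {best[0]:.2f}, emit on the road: {best[1]:.2f}")
# Converges toward "just under the limit on the test, as dirty as possible otherwise."
```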

But if your car isn’t an emissions sensor now, it will (and should) be one tomorrow. Just as you should have a CO detector in your kitchen and a radon detector in your basement, you should have emissions detectors around your car, telling you what’s coming out of your tailpipe and the tailpipes of the cars around you, because you don’t want to be breathing that stuff.

If all the cars on the road are sensing and emitting, then the cars that are doing something bad (because they’re running bad code, because they’re running malicious code, because they’re running cheating code, or because they have a mechanical failure) will be quickly spotted by lots of other cars, which can do something about it – rat them out to the cops, log them on a public registry, or something else that will help find and address the cars that’re doing something bad.

Sticking with cars for a moment: self-driving cars, or cars that assist their drivers by handling some of the tasks that we call ‘‘driving’’ today, will only become more common from now on. It’s important, for obvious reasons, that these cars have good software.

You have probably encountered ‘‘the trolley problem,’’ a classic thought experiment that takes on a new dimension with self-driving cars. The traditional version asks what a driver should do when faced with a choice between two maneuvers: drive on and fatally crash into a trolley (or a schoolbus full of children), or swerve and kill a single innocent bystander. The self-driving-car variant asks whether your car should be programmed to drive into the schoolbus (killing the kids but somehow sparing your life) or drive off a cliff, sparing the kids and killing you.

Latent in the self-driving-car/trolley problem is the presumption that your self-driving car will sometimes make maneuvers that you won’t be able to override.

We could accomplish this by regulating the software that’s allowed to run on an autonomous or semi-autonomous car, requiring that car computers be designed to reject code that isn’t signed by the Department of Transportation. But that solution is just as brittle as the FCC’s notional radio-regulation regime. It doesn’t help you when a hobbyist hacks her car to do something unexpected, it doesn’t help you when the manufacturers make a mistake, and it won’t help if someone just swaps out the computer for any of the billions (!) of functionally equivalent computers already in the stream of commerce. It won’t help you when a traditional, non-autonomous car being driven by a dumb old human does something unpredictable and fatal.

What’s more, designing autonomous cars to ignore their drivers creates the nightmarish possibilities that someone could maliciously seize control over your car and cause it to do bad things (plowing into the schoolbus after you’ve decided to nobly drive off the cliff, for example) and that your car would be designed so that you have no way of overriding it. Designing a car that’s supposed to let third parties who are adverse to the passengers’ interests override those passengers’ dictates would be a terrifying gift to carjackers, rapists, and dictators.

It would not, however, stop self-driving cars from doing terrible things.

But self-driving cars are studded with sensor packages that spend a lot of time sensing and evaluating their surroundings, including other cars. That means that if your car spots another car doing something weird or dangerous, it can log that fact, report it to the police, and/or get you out of that car’s way.

This approach is resilient. It lets people improve their cars’ programming, gives people overrides on their cars’ operation – and it still provides a way to quickly detect and interdict actions (deliberate or accidental) that put other road-users at risk.

Your home is increasingly a computer you live inside, filled with other computers, from your doorbell, to your toothbrush, to your toilet. These devices are being turned into computers so that they can collectively sense and respond to their environments (including you and your behaviors), and they have notoriously terrible security. From your medical implants to your climate control, the Internet of Things is terribly insecure and not fit for human use.

The Internet of Things is wildly unlikely to have fewer code defects that cause accidental trouble (or that can be exploited by malicious actors) than all our other computers have, and the mischief those defects can make is really something to contemplate.

But your networked devices will be on a network together. They’ll be designed to sense and act together. One of the things they could – and should – do is sense whether any of their colleagues is doing something unexpected and bad, and then let you know about it.

That’s the shape of the solution: the future of the Internet of Things should involve constant sensing by devices of other devices, looking for evidence of badness and making reports up the chain to humans or other authorities who can do something about it.

The devil is in the details: we don’t want a system that makes it easy for your prankish neighbors to make the police think you’re harboring a massive radio-disrupter, driving like a madman, or spewing more out of your tailpipe than the rest of the city combined. You don’t want your devices to be tricked into tripping spurious alarms every night at 2AM. We also need to have a robust debate about what kinds of radio energy, driving maneuvers, network traffic, and engine emissions are permissible, who enforces the limits, and what the rule of law looks like for those guidelines.

Those questions are hard, but they’re not wicked. Once we agree that we’re fighting to make our environments smart enough to find the noisome and the noisy, the infected and the malicious (not to make our environments incapable of ever being any of those things), then at least we have a fighting chance of solving our problems.


Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.


From the January 2016 issue of Locus Magazine
