The Internet of Things is starting to emerge. You can tell it’s just starting, because we’re still using the ungainly name “Internet of Things.” It’s one of those coinages that tells you that we don’t know what a thing is or what it’s for, like “horseless carriage” or “3D printer.”
But there’s one thing we do know about the IoT: it involves a lot of sensing. The IoT is what happens when our computers shrink down so small that they are woven into the fabric of every gadget, tool and technology in our lives, imbuing physical objects with the power to sense, report on their environments, and act on them, using computer-actuated motors, switches, levers, and pumps.
It’s a kind of Jetsonian vision, where your table notices that you’re hungry and clatters over to you, having conferred with your kitchen and your supermax pedometer to figure out the intersection of what you’re willing to eat, what’s available to eat, and what you should be eating. The startups in IoT-land are thinking about how to make our built environments “smart,” how to integrate your body, and how to combine those two phenomena to turn the world around you into a nearly magical place, a high-tech version of the “Be Our Guest” set-piece from Disney’s Beauty and the Beast.
The IoT’s would-be architects share a common belief: that “people” are just another kind of “thing,” and that you serve people by acting on their behalf, by anticipating them, asking their personal networks for important facts about them, and then adapting the world around them in realtime to provide the magic. The presumption about the thing-ness of humans is particularly visible in the IoT human-control applications: anti-theft systems, school behavior monitors, police bodycams, parolee ankle-cuffs, employee productivity trackers, prison monitoring systems, War on Terror cameras, sniffers, and mass-surveillance snoops. The Internet of Incarcerated Things is in an adversarial relationship with its users – they are its enemies, and it is charged with the task of keeping them from outwitting it.
Whatever your feelings are about the justice of treating employees, school children, travellers, or even prisoners as things to be controlled by semi-autonomous computers, it’s a good bet that you would feel more dignified and secure if you got to boss your Internet of Things around, rather than vice-versa.
Even in the Internet of Allegedly Free Things, humans and computers are adversaries. Medical telemetry and implant companies envision selling shockingly intimate facts about your body’s internal workings to data-mining services and insurers. Car companies see their vehicles as platforms for gathering data on your driving, on traffic patterns, and on the sense-able facts of the streets you pass by, to sell it to, you guessed it, data-mining companies and insurers. John Deere has argued that its tractors are copyrighted works, and that it, not the farmers, owns the soil-density data collected by the torque sensors on the wheels (it sells this data to Monsanto, which charges farmers for the right to know about it).
Today, venture capitalists are uninterested in IoT pitches unless the gadget comes with an “ecosystem” – an app store and a closed channel of add-ons and parts that it can set margins on, to guarantee ongoing revenue streams. These devices are inheriting the worst parts of the inkjet printer and video-game console market, where consumables, software, and replacement parts all come at a high markup set by the original manufacturer, which uses technological countermeasures to keep third parties from invading its territory, which is your wallet.
But imagine a different kind of IoT: an IoT where human beings are first class citizens, ahead of the ‘‘things’’ doing the sensing and the things being sensed.
For example: IoT vendors envision many “location-based” businesses. The devices around you sense when you need a pee, or a coffee, or a new set of tires, and they will advertise those services to you, along with special offers that you’ll gain access to by giving them even more intimate knowledge of your life and times.
People today may be indifferent to surveillance, but very few welcome it. And whatever today’s attitudes are about privacy, the general population will only be more hostile to surveillance tomorrow. We haven’t reached peak surveillance, not by a long shot, but we’ve sure as hell reached peak indifference to surveillance.
Imagine a location service that sold itself on the fact that your personal information was securely contained in its environs, used by you and you alone. You could have devices on your person that used their sensors to know things about you – when you last ate, what your dining preferences are, what your blood-sugar is, and so on, but these devices would have no truck with the cloud, and they would not deliver that information to anyone else for analysis.
Instead, as you passed through space, stores, toilets, restaurants, grocers and other entities would emit information about their offerings. These would be seen and analyzed by your personal network, and it would decide, on your behalf, whether to tell you about them, based on the preferences you’ve set, or that it’s learned from you. The fact that you’ve been shown some piece of information would be between you and the computers you own, and – again – shared with no one.
It’s the opposite of the Facebook model, where Facebook owns all the feeds and decides which ones you’re allowed to see. This is more like the email model, where your systems download all the messages someone wants to send you, then use your own filters and rules to decide which ones to discard and which ones to display.
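The email-model flow described above can be sketched in a few lines of code. This is a toy illustration, not anything the essay's vendors actually ship: the venue names, categories, and the `my_filter` function are all invented for the example. The one thing that matters is where the filtering happens – on hardware you own, against rules only you hold.

```python
# Toy sketch: venues broadcast offers to everyone in range; YOUR device
# filters them locally. The rules never leave your personal network, and
# no one learns which offers you were shown.
from dataclasses import dataclass

@dataclass
class Offer:
    venue: str
    category: str
    text: str

# Preferences live on your device, not in anyone's cloud.
my_rules = {"coffee", "tires"}

def my_filter(offers):
    # Like an email client's rules: everything is delivered,
    # but you decide locally what gets displayed.
    return [o for o in offers if o.category in my_rules]

broadcast = [
    Offer("Main St Cafe", "coffee", "2-for-1 espresso"),
    Offer("MegaMart", "ads", "Loyalty card signup"),
    Offer("Tire Barn", "tires", "Winter tires 20% off"),
]

for offer in my_filter(broadcast):
    print(offer.venue, "-", offer.text)
```

The design choice is the whole point: the broadcast side is promiscuous and dumb, and all the intelligence – and all the revealing state – sits on the listener's side.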
This gets even more crucial in the medical sensing and implant world. Today, med-tech companies talk about the kinds of important facts we’ll be able to learn about rare diseases once we can collect longitudinal, deep, granular data on the biological histories of people who contract them. If you get a weird cancer, the doctor will be able to contact the company that sold you the gadget and trawl through your health history to rewind your body through its whole past, looking for clues about how you ended up with your current problems.
But if there’s one thing we’ve learned about huge repositories of sensitive data, it’s that they leak. From Sony to the Office of Personnel Management to Hacking Team to Ashley Madison, they all leak eventually. Putting a bunch of valuable stuff in one place makes it an irresistible target – and then there’s the obvious question: why would all that data need to be held by the manufacturer of your implant, anyway? Why shouldn’t it live in your implant, or your personal network?
If you actually own your data – if the cloud is nothing but an inert repository of encrypted backups that are indistinguishable from noise without your personal keys – good things start to happen. As privacy concerns mount, the amount of data your med-tech devices can gather will be hampered by the combination of market forces (people being unwilling to share data with their gadgets because they don’t want their blood sugar and sex-lives shared with insurance companies and data-brokers) and liability fears (insurance companies refusing to underwrite policies for companies who’ve voluntarily assumed the liability associated with owning all that potentially compromising data on all those potential plaintiffs in a class-action suit).
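What "indistinguishable from noise without your personal keys" means can be shown with a deliberately minimal example. This is a toy one-time pad, not a recommendation – a real device would use an authenticated cipher such as AES-GCM – but it makes the architecture concrete: the cloud stores only an opaque blob, and the key never leaves your device.

```python
# Toy illustration of client-side encryption: the cloud holds ciphertext
# that is statistically indistinguishable from random noise without the key.
import secrets

def xor_pad(data: bytes, key: bytes) -> bytes:
    # XOR against a pad the same length as the data. XOR is its own
    # inverse, so applying the same key twice recovers the plaintext.
    return bytes(d ^ k for d, k in zip(data, key))

reading = b"glucose 5.4 mmol/L"                # a hypothetical sensor record
key = secrets.token_bytes(len(reading))        # generated and kept on-device
blob = xor_pad(reading, key)                   # this is ALL the cloud sees

assert xor_pad(blob, key) == reading           # only the key holder can read it
```

Under this model a breach of the cloud provider leaks nothing but noise; the plaintext exists only where you – or someone you explicitly authorize – hold the key.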
But once the data is yours and yours alone – once the spyware becomes myware – these problems largely evaporate. The privacy concerns over other people knowing intimate facts about your life don’t come into play when it’s you knowing facts about your own life.
When you get sick, then you choose to let your doctor see your medical history and draw inferences from it – even authorize her to share them with research colleagues trying to cure whatever ails you. You are in charge, not a manufacturer that sees you as a thing first and a person second.
From theme-parks to smart cities to med-tech to workplace efficiency tuning, treating humans not as just another data-point but as beings with native intelligence, personal worth, and dignity opens up whole worlds of transformational, world-changing possibilities.
Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.
From the September 2015 issue of Locus Magazine