As I write this in mid-November 2020, there’s quite a stir over the new version of Apple’s Mac OS, the operating system that runs on its laptops. For more than a year, Apple has engaged in covert, global surveillance of its users through its operating system, which automatically sent Apple information about which apps you were running, and which gave Apple a remote veto over whether a program would launch when you double-clicked it. Most Apple customers don’t know about this, but the kind of Apple user who does know about it is also likely to be the kind of security-conscious person who doesn’t like it and even takes steps to block it.
A confluence of events has tipped this obscure “feature” into global notoriety. First, Apple suffered an outage in the servers that received this information and okayed the launch of its customers’ programs, meaning that Mac OS users couldn’t run the programs they relied on to do their work. Worse, the outage coincided with the release of “Big Sur,” the latest version of Mac OS, which locks out the aftermarket additions that privacy- and security-conscious Apple customers use to block Apple’s OS-level surveillance. In other words, at the very same moment that millions of Apple device owners were discovering why they might want to switch off this hidden “feature,” Apple made it all but impossible to do so.
All this was written up in “Your Computer Isn’t Yours,” (<sneak.berlin/20201112/your-computer-isnt-yours/>) an excellent article by Jeffrey Paul, a Berlin-based technologist. Paul makes the point that the latest Apple hardware will only run the new, more-surveillant version of Mac OS, so, barring a change in Apple’s corporate philosophy, this is the future of Mac OS. Paul also namechecked me at the start of his essay, which means that I got a look at it early and have had occasion to follow along with the commentary it provoked.
Why would Apple bother to do this spying? The best guess (and most charitable explanation) is that Apple is surveilling the whole ecosystem of Mac OS devices so that it can discover malicious software early, and it has arrogated to itself the ability to block certain programs from running on its customers’ computers in order to protect them from bad software. In a world of rampant and ghastly cybersecurity failures, these are both highly desirable features.
But, as Paul points out, Apple has now arrogated to itself the power to know, with a reasonable degree of granularity, which programs its customers are using, and to decide whether customers should be permitted to do so. Nothing in this surveillance system prevents it from being used against legitimate software. Nothing prevents it from being used to extract surveillance data about Apple customers – for example, to determine where you are, or whether there is anyone else there with you running a Mac. The only thing that stops Apple from blocking you from running legitimate apps – or from gathering information about your movements and social activities – is its goodwill and good judgment, and therein lies the problem.
The security researcher (and Hugo Award-nominee) Bruce Schneier has a name for this arrangement: he calls it feudal security. Here in the 21st century, we are beset by all manner of digital bandits, from identity thieves, to stalkers, to corporate and government spies, to harassers. There is no way for us to defend ourselves: even skilled technologists who administer their own networked services are no match for the bandits. To keep bandits out, you have to be perfect and perfectly vigilant, and never make a single mistake. For the bandits to get you, they need merely find a single mistake that you’ve made.
To be safe, then, you have to ally yourself with a warlord. Apple, Google, Facebook, Microsoft, and a few others have built massive fortresses bristling with defenses, whose parapets are stalked by the most ferocious cybermercenaries money can buy, and they will defend you from every attacker – except for their employers. If the warlord turns on you, you’re defenseless.
We see this dynamic playing out with all of our modern warlords. Google is tweaking Chrome, its dominant browser, to block commercial surveillance, but not Google’s own commercial surveillance. Google will do its level best to block scumbag marketers from tracking you on the web, but if a marketer pays Google, and convinces Google’s gatekeepers that it is not a scumbag, Google will allow them to spy on you. If you don’t mind being spied on by Google, and if you trust Google to decide who’s a scumbag and who isn’t, this is great. But if you and Google disagree on what constitutes scumbaggery, you will lose, thanks, in part, to other changes to Chrome that make it much harder to block the ads that Chrome lets through.
Over in Facebook land, this dynamic is a little easier to see. After the Cambridge Analytica scandal, Facebook tightened up who could buy Facebook’s surveillance data about you and what they could do with it. Then, in the runup to the 2020 US elections, Facebook went further, instituting policies intended to prevent paid political disinformation campaigns at a critical juncture.
But Facebook isn’t doing a very good job of defending its users from the bandits – it’s a bad (or possibly inattentive, indifferent, or overstretched) warlord. We know this thanks to Ad Observer and Ad Observatory, a pair of tools from NYU’s engineering school. Ad Observer is a browser plugin that Facebook users run; whenever they encounter an ad, Ad Observer makes a copy of it and sends it to Ad Observatory, an open repository of Facebook ads. Researchers and accountability journalists use Ad Observatory to document all the ways that Facebook is failing to enforce its own policies.
In October, Facebook sent a legal threat to NYU, demanding that Ad Observer and Ad Observatory shut down. Facebook says that it is doing its duty as an honest warlord here, because Ad Observer could (but doesn’t) violate its users’ privacy. As the local warlord, Facebook has a duty to prevent anyone from supplying the people inside its fortress with tools that could expose those under its protection to risk.
While the risk to Facebook users from Ad Observer is wholly hypothetical, Ad Observer poses a concrete risk to Facebook itself, by exposing the company’s failure to live up to both its promises and its legal duties stemming from various settlements over past privacy violations – and the one entity Facebook will never, ever protect you from is Facebook. It has lots of resources at its disposal, too: not just cybermercenaries who could tweak Facebook’s systems to try to block Ad Observer, but also a legion of lawyers who can enlist the crown to destroy Ad Observer on its behalf.
Back to Apple. In 2017, Apple removed all effective privacy tools from the Chinese version of the iPhone/iPad App Store, at the behest of the Chinese government. The Chinese government wanted to spy on Apple customers in China, and so it ordered Apple to facilitate this surveillance. Despite mounting evidence that the Chinese state was using concentration camps, punitive rape, coerced sterilizations, and forced labor in its ethnic cleansing program in Xinjiang province, Apple went along with the order, exposing vulnerable populations to state surveillance that could lead to gross human rights abuses.
The order to compromise its customers’ privacy was a lawful one. Apple has to follow Chinese laws, both because it has a commercial presence in China (the subsidiary arm that sells iPhones, ads, apps, and other products and services in China) and because it relies on Chinese manufacturing to make its products. If Apple chose not to comply with the order, it would either have to risk fines against its Chinese subsidiary and possible criminal proceedings against its Chinese staff, or pull out of China – risking having its digital services blocked by China’s Great Firewall and its Chinese manufacturing subcontractors being ordered to sever their relations with Apple.
In other words, the cost of noncompliance with the order is high, so high that Apple decided that putting its customers at risk was an acceptable alternative.
Therein lies the problem with trusting warlords to keep you safe: they have priorities that aren’t your priorities, and when there’s a life-or-death crisis that requires them to choose between your survival and their own, they will throw you to the bandits.
But that’s not the end of the story.
Of course warlords will only protect us to the extent that doing so doesn’t risk their own extinction. Is there any way that warlords can reduce the likelihood that bandits will achieve the leverage they need to convince the warlords to throw us over the wall to be torn to pieces?
There most assuredly is, and that’s the most frustrating part of the whole business of Apple’s app surveillance/cutoff program.
The mere existence of such a killswitch is a moral hazard. If you can cut off your users’ privacy tools – or the tools they use to escape lock-in and choose competitors – then you invite others to demand that this power be used to their advantage. The fact that Apple devices are designed to prevent users from overriding the company’s veto over their computing makes it inevitable that some government will demand that this veto be exercised in its favor. After all, the Chinese government wasn’t the first state to demand that Apple expose its customers to surveillance – that was the Obama administration, which sought a back-door for Apple’s devices in order to investigate the San Bernardino terrorist attack. Apple resisted the US government’s demands, something it was able to do because the US constitution constrained the government’s ability to compel action. China faces no such constraint.
But even if Apple exited the Chinese market and repatriated its manufacturing to countries with strong rule-of-law protections for human rights, the killswitch would still present a serious risk. Maybe Tim Cook is a good egg who’ll defend the company’s customers from state surveillance and not abuse Apple’s killswitches to shut down competition, but what about all the CEOs that come after Cook?
Schneier calls this “Feudal Security,” but as the medievalist Stephen Morillo wrote to me, the correct term for this is probably “Manorial Security” – while feudalism was based on land-grants to aristocrats who promised armed soldiers in return, manorialism referred to a system in which an elite owned all the property and the rest of the world had to work on that property on terms that the local lord set.
After all, the thing that gives tech companies the power to overrule your choices on your computers and devices is that they’re not really yours. Thanks to onerous licensing terms and bizarre retrofits to copyright and patent law, the only entities who can truly be said to “own” anything are aristocratic corporations, who may have to capitulate to the king, but owe no fealty to us, the peasants.
That’s how Facebook can roll out a new Oculus VR headset that is tied to your Facebook account – if you resign from (or get kicked off of) Facebook, your VR headset turns into a brick. It’s not really yours. Rather, you are a tenant of Facebook’s, which has graciously extended the use of its property to you for the low price of $400.
Feudalism was built atop manorialism, but it added hereditary titles: you couldn’t aspire to become a lord of the manor unless you were to the manor born.
In the early tech bubbles – the PC years, the dotcom boom – there was real manorialist energy, the sense that “two guys (ugh) in a garage” could seize a manor from one of the great hereditary, titled lines, as Microsoft did to IBM. But lurking in the heart of all those mercantilists was a secret aristocrat, someone whose commitment to overturning the old order was highly selective and applied only to themselves.
Having attained walled manors of their own, these merchant-warlords are determined to ensure that no one does to them what they did to their toppled forebears.
That aristocratic urge is why we see lock-in, killswitches, overcollection and overretention of data, and the invocation of state power to silence critics. As with feudal aristocrats, the state is happy to lend these warlords its legitimacy, in exchange for the power to militarize the aristocrats’ holdings.
There is a middle path available to tech giants, something that allows them to maintain their presence in countries with weak human rights frameworks without putting their users at risk: the Ulysses Pact.
A Ulysses Pact is a pre-commitment that you make in a moment of strength: think of throwing away your Oreos when you go on a diet, so that when you’re hungry at 2AM you can’t eat a whole sleeve of them. The Ulysses Pact takes its name from Ulysses’s voyage through the siren seas, where he declined to follow the usual protocol of filling his ears with wax to block out the siren song, and instead instructed his sailors to lash him to the mast, so he could hear the song but could not follow its otherwise irresistible command to throw himself into the sea.
Apple’s App Store and its new Mac OS killswitch are so tempting to governments because there is no way to opt out of them. For its own business reasons (including the power to exclude competitors from its mobile store, or to compel them to cut the company in for a large commission on in-app purchases), Apple takes extreme measures to prevent users from overriding its killswitch. If an app is kicked out of the App Store or blocked by Apple’s anti-malware surveillance, then by design you have no way to put your judgment ahead of Apple’s.
That means that any government that orders Apple to use its killswitches to achieve its goals knows that Apple’s customers will be helpless before such an order.
On the other hand, what if Apple – by design – made it possible for users to override its killswitches? When governments came to Apple demanding that some app be removed from the App Store, Apple could say, “Yes, we will do that, of course, but all the citizens you want to spy on or censor will just download those apps from somewhere else and reinstall them.” At least some of the time, governments will recognize the futility of pursuing this kind of control and leave Apple – and Apple customers – alone. And if they stubbornly press ahead with a symbolic killswitch order, Apple’s customers can go ahead and download the blocked apps from elsewhere, reclaiming the security that Apple had taken away from them.
Apple had plenty of warnings about the likelihood of its App Store killswitches being enlisted by oppressive governments, long before the 2017 Chinese order. Perhaps the company can be excused for not heeding those warnings. After all, they referred to a hypothetical (if likely and foreseeable) outcome.
But the company can’t be excused for its latest killswitch. The risk is not hypothetical. It’s live. There are product managers and executives at Apple who lived through the 2017 Chinese killswitch orders and signed off on the latest killswitch. There are people in concentration camps in Xinjiang province who might have evaded capture if they had working privacy tools.
It’s not just Apple. Ever since the passage of the Patriot Act, tech companies have known that any data they collected on their users could be taken from them by spy agencies, using secret warrants that they couldn’t disclose…ever. But in 2013, the whistleblower Edward Snowden revealed that these secret, invasive warrants were a sham. The US government was spying on everything the tech companies collected, wiretapping the lines between tech companies’ data-centers, and then issuing warrants for the stuff that seemed interesting, so that when that data got used in the real world, tech executives wouldn’t suspect the existence of the secret wiretaps.
The 2013 revelations proved that the US government viewed the tech companies as host organisms to be parasitized at will, a force that would mobilize market investments to erect a vast, expensive surveillance apparatus that the state could then wield at bargain-basement prices.
To engage in data-collection in the wake of 2013 isn’t just an oversight; it’s an act of collaboration with the forces of surveillance. In 2020, Google admitted that it is being required to respond to “reverse search warrants” that reveal the identities of every person who was present at a certain location at a certain time, and “search-term warrants” that reveal the identities of every person who used a specific search-term. These warrants are utterly foreseeable. Google collects this data, so governments will require it to be turned over – and not just the US government, either.
As with Apple, the best way for Google to avoid being ordered to turn over data on its users is to not collect or retain that data in the first place. And, as with Apple, the next best thing is to give users the power to turn off that data-collection and data-retention altogether, something Google’s gotten marginally better at in the past year.
The writing has been on the wall about the relationship of government control and surveillance to private sector control and surveillance since Bill Clinton’s war on cryptography. It became unmissable after the Patriot Act passed Congress in 2001. And now, after 2020, the writing is no longer on the wall – it is in flaming, 20-story-tall letters in the sky overhead.
Cory Doctorow is the author of Walkaway, Little Brother, and Information Doesn’t Want to Be Free (among many others); he is the co-owner of Boing Boing, a special consultant to the Electronic Frontier Foundation, a visiting professor of Computer Science at the Open University and an MIT Media Lab Research Affiliate.
All opinions expressed by commentators are solely their own and do not reflect the opinions of Locus.
This article and more like it appear in the January 2021 issue of Locus.
©Locus Magazine. Copyrighted material may not be republished without permission of LSFF.