Big Tech and privacy: Apple flirts with the “dark side”
- Compared to other Big Tech companies, Apple has been the poster child of privacy protection.
- Unfortunately, a recent announcement has breached that trust.
- How much power do we want Big Tech to have? And what sort of society do we want?
Apple has built a reputation as the “least evil” of the Big Tech giants when it comes to privacy. All of these companies (Google, Facebook, Apple, Microsoft, Amazon) collect our data and essentially spy on us in a multitude of ways, but Apple has long positioned itself as by far the least invasive of our mainstream technology options. That is why its recent dramatic reversal on privacy has caused such an uproar. What is going on, and what should we do about it?
I personally use and appreciate Apple products. Their software and hardware blend together, making smartphones and computers simpler, easier, and more enjoyable to use. More important (to me, anyway), Apple has conspicuously maintained much better privacy policies than other tech giants, which log every place you visit, calculate and sell your religion and politics, and store every single search you have ever made (even the “incognito” ones). Apple’s privacy policies are imperfect but less terrifying. The general consensus is that most Apple products and services spy on you less and send much less of your personal data to third parties. That matters to many users, and it should matter to all of us.
Now, the bad news. All the major cloud storage providers have for some time been quietly scanning everything you upload. That is no conspiracy theory: you agree to it when you accept their privacy policies. They do this for advertising (which is what the phrase “to help us improve our services” in the user agreement really means). They also look for and report illegal activity. A big part of monitoring for criminal activity is looking for “CSAM,” a polite acronym for something horrific. In August, Apple announced that it would go a drastic step further and push a software update that would scan and analyze every image on your iPhone, looking not just for known CSAM but for any image that a computer algorithm judges to be CSAM.
There are two enormous red flags here. First, the software does not operate on Apple’s cloud servers, where you can choose whether to park your data and allow Apple to scan it for various purposes. The scanning happens on the phone itself, checking every picture you have against a database of prohibited images.
Image recognition tech is still bad
Why is this a problem for people who do not keep illegal images on their phones? The second red flag is that the software does not look for a specific file (say, horrifying_image.jpg) while ignoring all of your personal photos. Rather, it uses what Apple calls “NeuralHash,” an algorithm that looks for features and patterns in images rather than exact file matches. You can read Apple’s own description here; a simplified sketch of the general idea follows.
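To make “features and patterns, not exact files” concrete, here is a minimal sketch of a perceptual hash. This is not Apple’s NeuralHash, which is a proprietary neural-network-based hash; it is a much simpler classical cousin called a difference hash (dHash), and the file names and threshold are illustrative assumptions. The point it conveys is the same: the image is reduced to a compact fingerprint of its visual structure, so renaming, resizing, or recompressing a photo does not hide it.

```python
# Illustrative sketch only: a classical "difference hash" (dHash),
# NOT Apple's proprietary NeuralHash. Shown to convey the idea of
# fingerprinting an image's features instead of its exact bytes.
from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint of its brightness gradients."""
    img = (
        Image.open(image_path)
        .convert("L")  # grayscale: throw away color
        .resize((hash_size + 1, hash_size), Image.LANCZOS)  # shrink to 9x8
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # one bit per comparison
    return bits


def looks_like(a: int, b: int, threshold: int = 10) -> bool:
    """Two images 'match' when their fingerprints differ in only a few bits."""
    return bin(a ^ b).count("1") <= threshold
```

A scanner built on this idea compares the fingerprint of each of your photos against a list of fingerprints of known target images. Slightly altered copies still match, which is the whole point of the design, and also why the contents of that target list matter so much.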
Computer image recognition is much better than it once was. Despite the hype of the past few years, however, it remains extremely fallible. There are many ways to baffle computer image recognition with tricks that would not fool a toddler. This fascinating research paper covers just one of them: a 99 percent confident (and correct) identification of a submarine can be turned into a 99 percent confident (but wrong) identification of a bonnet by adding a tiny patch of static noise to one corner of the image.
Other researchers have fooled image recognition by changing a single pixel in an image. Hackers know this too. It is entirely feasible to add an imperceptible pattern to a cute cat picture that triggers algorithms to flag it as something sinister, as the sketch below illustrates.
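Here is a hedged sketch, assuming a stand-in file called cat.jpg, of just how small these changes are. Random noise this faint will not fool anything by itself; real attacks optimize the perturbation against a specific model. The sketch only shows the scale of the alteration involved.

```python
# Hypothetical demonstration, not a working attack: real adversarial
# perturbations are optimized against a target model. This only shows
# how tiny the change needs to be. "cat.jpg" is a stand-in file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("cat.jpg").convert("RGB"), dtype=np.int16)

rng = np.random.default_rng(0)
noise = rng.integers(-2, 3, size=img.shape)  # shifts of at most 2 of 255 levels

tweaked = np.clip(img + noise, 0, 255).astype(np.uint8)
Image.fromarray(tweaked).save("cat_tweaked.png")

# To a human eye the two files are indistinguishable. The research cited
# above shows that a perturbation of this magnitude, chosen carefully
# rather than randomly, can flip a classifier's verdict entirely.
```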
Let’s imagine for a moment that the algorithm never mistakes baby bath pictures or submarines or cats or single pixels for illicit material. Even then, things could get worse. How?
The NeuralHash system matches your photos against a collection of target images, and that collection is kept secret from the public. We do not know whether everything in the database is CSAM or whether it also includes things such as political or religious images, geographic locations, anti-government statements, “misinformation” as defined by whoever has the power to define it, material potentially embarrassing to politicians, or whistleblowing documents against powerful authorities. There are many things that tech companies, federal agencies, and autocratic regimes around the world would love to know whether you have on your phone. The possibilities are chilling.
Reaction to Apple’s plan was immediate and overwhelmingly negative. Outcry ranged from international privacy-rights groups such as the Electronic Frontier Foundation all the way down to everyday people in Mac user forums. Apple initially stood its ground and defended the decision, but its weak justifications and reassurances failed to smother the fire. Just last week, the company relented and announced that it would delay implementing the program. That is a victory for privacy, but it is not the end of the story.
iSpy with my little eye
It would be very easy for Apple to wait out the uproar and then quietly go ahead with the plan a few months from now, and the other tech giants would likely follow suit. Remember that Big Tech already tracks our movements and records our private conversations. If the public does not stay vigilant, Big Tech will keep pushing deeper into what most of us consider our private lives. How much more power over us do we want Big Tech to have? And is this the sort of society that we want?