Data spies: The dark and shady practices of Silicon Valley

This is how data harvesting really works. You're not going to like it.

ROGER McNAMEE: After the internet bubble burst in 2000, there was a period where the venture capital industry retrenched, and a void was left. A group of people saw an opportunity. They're now known as the PayPal Mafia. This was Peter Thiel, Elon Musk, Reid Hoffman. They had two of the most brilliant insights in the history of entrepreneurship. The first was that the internet was about to make a pivot from being a web of pages to a web of people. They called it Web 2.0. This is what created the basis for all of social networking. The second insight was probably even more powerful than that. It turned out that for 50 years before that time, Silicon Valley had basically struggled with the limits of technology. You never had enough processing power, memory, storage, or bandwidth to do what the customer wanted. So every product had to address just a piece of the customer's problem. The notion of making a global product, like Google or Facebook, never occurred to anyone because you never had enough resources to get it done. The PayPal Mafia realized that Moore's law and Metcalfe's law, the two laws that describe processing power and networks, were about to hit crossing points where there would be enough resources to do whatever you wanted to do. And it happened to coincide exactly with their insight about Web 2.0.
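The "crossing point" is easy to picture with a back-of-the-envelope model. The sketch below is purely illustrative: the doubling cadence, the units, and the threshold a "global product" would require are invented numbers, not figures from the talk.

```python
# Toy model of the "crossing point": resources that double on a fixed
# cadence (Moore's law) eventually exceed what a global-scale product
# needs, while network value compounds on top (Metcalfe's law).
# Every constant here is invented purely for illustration.

def moore_capacity(year, base=1.0, start=1971, doubling_years=2):
    """Capacity that doubles every `doubling_years` years."""
    return base * 2 ** ((year - start) / doubling_years)

def metcalfe_value(users):
    """Metcalfe's law: network value grows with the square of its users."""
    return users ** 2

GLOBAL_SCALE_NEED = 2 ** 17  # hypothetical resource requirement

year = 1971
while moore_capacity(year) < GLOBAL_SCALE_NEED:
    year += 1
print(f"Toy crossing point: {year}")  # -> 2005 with these made-up numbers
print(f"1M-user network vs. 1k-user: {metcalfe_value(1_000_000) / metcalfe_value(1_000):,.0f}x the value")
```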

They subscribed to a form of libertarianism that not only praised the individual, but had this notion that you could disrupt, you could change things, and not be responsible for the consequences of your actions, which was incredibly convenient if you're about to go out and create giant global enterprises. The notion that you could just knock things over and it was somebody else's problem allowed you to do pretty much anything. This is the notion of blitzscaling: eliminating friction of all kinds so you can grow as rapidly as possible to global scale. And the problem with eliminating friction is that you eliminate the ability of populations to adjust to change. Things happened so quickly, there was no opportunity for evolution. And these guys went, as you saw, in a decade from nothing to global. And in 15 years, they went from nothing to global domination. But they started with this notion that the product had to be free. You had to get rid of the friction of a purchase price. So if you're going to have a free product, it's got to be supported by advertising. And if you want advertising to be valuable, people have to pay attention. So they got into this notion that they would manipulate attention, first with rewards to create habits. They would give you likes. They would give you notifications to get you to come back. And they would build habits that, for many people, turned into addictions.

But the other thing they did was, when they got you there, they appealed to the lizard brain, the low-level fight-or-flight emotions that cause people to become tribal. They appealed to outrage and fear. Why? Because when you're afraid or when you're outraged, you share the source of that fear and outrage with other people, because if they share that emotion with you, you're going to feel better. And that worked really, really well. It caused people to share a lot of stuff and see lots of ads. If they'd stopped there, we probably would have been OK. But they didn't.

Google had a brilliant insight in 2003. Actually, they had it earlier; they patented it in 2003: this notion of behavioral prediction. What Google discovered was that the data they captured from their users to improve the search engine (and they captured tons of data) mostly had nothing to do with improving the search engine. But they discovered it told them a lot about what people were going to do. It gave them a way to predict behavior. Traditionally, businesses collected data to improve a product or service for the person from whom they collected the data. But behavioral prediction, as practiced first by Google, then by Facebook, and now also by Amazon and, I believe, by Microsoft, is really about taking data from one person and applying it to somebody else. So the person whose data is used gets no benefit. And they could use persuasive technology to increase the probability that the prediction would be accurate, which would make it more valuable.

Let me give you an example. Let's say you're going to buy a car. They look at the 200 steps you took before you bought the car to see what the things are that you do before you buy a car. Maybe 10 or 20 of them are obviously about buying a car, but most of them are about other stuff. If they compare enough people, though, they discover all kinds of patterns in things that don't appear to have anything to do with the car but are actually lead indicators that you're going that way. And then they can make predictions, and price them, based on where you are in those 200 steps. If you've made the first 20, that doesn't mean anything. But if you've made the first 150, the probability that you're going to buy a car is pretty high. If you're on that curve after 180, the odds that you're going to buy a car are really high. And at 190, it's almost a certainty. So the price becomes exceptionally high. If they'd stopped there, that would be great. But they didn't.
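To make the mechanics concrete, here is a minimal sketch of funnel-based behavioral prediction: fit a curve that maps "steps completed" to "probability of buying." Everything in it is an assumption for illustration (the toy training data, the 200-step funnel, the simple logistic model); it is not a description of Google's or anyone's actual system.

```python
import math

# Hypothetical training data: (fraction of the ~200 pre-purchase
# steps a person completed, whether they ended up buying a car).
history = [(0.10, 0), (0.30, 0), (0.50, 0), (0.70, 1), (0.75, 1),
           (0.80, 0), (0.85, 1), (0.90, 1), (0.93, 1), (0.95, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(data, lr=0.5, epochs=5000):
    """Fit p(buy) = sigmoid(w * x + b) with plain stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(w * x + b) - y  # gradient of the log-loss
            w -= lr * err * x
            b -= lr * err
    return w, b

w, b = fit_logistic(history)
for steps in (20, 150, 180, 190):
    p = sigmoid(w * (steps / 200) + b)
    print(f"{steps} of 200 steps -> estimated p(buy) = {p:.2f}")
```

The shape matches McNamee's description: the first 20 steps say almost nothing, while the estimate climbs steeply near the end of the funnel, which is exactly when the prediction becomes most valuable to an advertiser.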

They used filter bubbles, which is to say their ability to nudge people by confirming preexisting beliefs. And they used recommendation engines to increase the probability that their forecasts would be accurate. So when you see a recommendation engine, do you think to yourself, well, this thing is helping me find things I'm going to like? Maybe. But for certain, it's helping you find things that somebody would like to sell you. And now we're starting to get into really problematic areas. When you watch this business model go to its final point, they're tracking everything. So they go out there and they buy all your credit card information. And they go and buy all your location information from the cellular carrier. And if there are health apps they can get data from, they get that. They buy data wherever they can get it. They create this really high-resolution picture of you. But they're also tracking what you do. Maybe you've been on one of those CAPTCHA things that Google uses to identify whether you're a human or not, and they show you: can you pick out the road signs? That has nothing to do with whether you're a human or not. That's to train Google's self-driving car. They've figured out if you're a human based on the way you move your mouse. Guess what? They're keeping a file of all your mouse movements over time. And I don't know if they can do this yet, but pretty soon they're going to be able to tell if you have a neurologic problem before you even know it. Your mouse slows down, maybe gets more shaky. Maybe you've got the beginning of Parkinson's. Now, here's the problem. If you were their customer, they would call you up and say, hey, you've got to go to the hospital. But you're just the fuel for their business. The customer is going to be the insurance company that will pay them thousands of dollars to know that you have the beginnings of a neurologic problem, which would allow them either to raise your insurance rates or terminate your coverage entirely.
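McNamee hedges the neurology claim himself ("I don't know if they can do this yet"), but the kind of raw signal he describes is easy to sketch. Below is a toy illustration of turning a stream of cursor positions into speed and shakiness features; the trace, the feature choices, and any diagnostic value are assumptions for illustration only.

```python
import math
import statistics

def mouse_features(samples):
    """Crude motion features from time-ordered (t_seconds, x, y) samples.

    Returns (mean_speed, jitter): average cursor speed in pixels/second,
    and the spread of those speeds as a rough proxy for shakiness.
    """
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return statistics.mean(speeds), statistics.pstdev(speeds)

# Hypothetical trace: a cursor that slows down and wobbles near the target.
trace = [(0.00, 100, 100), (0.05, 140, 102), (0.10, 171, 99),
         (0.15, 180, 110), (0.20, 176, 104), (0.25, 185, 112)]
print(mouse_features(trace))  # (mean speed, jitter) for this toy trace
```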

And I would like to think that we could have a national conversation about: did we sign up for this? The question I want to ask is, why is it legal that there is commerce based on our personal financial transactions: credit cards, mortgages, and things like that? Why is it actually legal for people to trade in that information? Same question for geolocation data from cellular carriers or map products. Why is that legal? Why is it legal to have commerce in health data from apps and devices? It's not legal to have commerce in the same data from hospitals or doctors. These things aren't happening because these companies are bad people. They're happening because there are no rules. Nobody told them to stop. They're really smart. It's obviously a great business. So my basic point here is, I think we should stop, have a serious conversation, and figure out as a country: what are we going to do about this?

  • In this absorbing talk spanning the last 20 years of tech, Roger McNamee starts at the origins of the PayPal Mafia (which included entrepreneurs like Elon Musk, Peter Thiel, and Reid Hoffman) and traces their rise to Silicon Valley's global domination.
  • Online vendors across industries use data to make behavioral predictions for profit – often in unethical or covert ways.
  • Did we sign up for this? Roger McNamee calls for a halt to blind participation and asks for a national debate on whether commerce based on personal data (but not for personal benefit) should be legal.


Are we really addicted to technology?

Fear that new technologies are addictive isn't a modern phenomenon.


This article was originally published on our sister site, Freethink, which has partnered with the Build for Tomorrow podcast to go inside new episodes each month. Subscribe here to learn more about the crazy, curious things from history that shaped us, and how we can shape the future.

In many ways, technology has made our lives better. Through smartphones, apps, and social media platforms, we can now work more efficiently and connect in ways that would have been unimaginable just decades ago.

But as we've grown to rely on technology for many of our professional and personal needs, most of us are asking tough questions about the role technology plays in our lives. Are we becoming so dependent on technology that it's actually harming us?

In the latest episode of Build for Tomorrow, host and Entrepreneur Editor-in-Chief Jason Feifer takes on the thorny question: is technology addictive?

Popularizing medical language

What makes something addictive rather than just engaging? It's a meaningful distinction because if technology is addictive, the next question could be: are the creators of popular digital technologies, like smartphones and social media apps, intentionally creating things that are addictive? If so, should they be held responsible?

To answer those questions, we've first got to agree on a definition of "addiction." As it turns out, that's not quite as easy as it sounds.


"Over the past few decades, a lot of effort has gone into destigmatizing conversations about mental health, which of course is a very good thing," Feifer explains. It also means that medical language has entered into our vernacular —we're now more comfortable using clinical words outside of a specific diagnosis.

"We've all got that one friend who says, 'Oh, I'm a little bit OCD' or that friend who says, 'Oh, this is my big PTSD moment,'" Liam Satchell, a lecturer in psychology at the University of Winchester and guest on the podcast, says. He's concerned about how the word "addiction" gets tossed around by people with no background in mental health. An increased concern surrounding "tech addiction" isn't actually being driven by concern among psychiatric professionals, he says.

"These sorts of concerns about things like internet use or social media use haven't come from the psychiatric community as much," Satchell says. "They've come from people who are interested in technology first."

The casual use of medical language can lead to confusion about what is actually a mental health concern. We need a reliable standard for recognizing, discussing, and ultimately treating psychological conditions.

"If we don't have a good definition of what we're talking about, then we can't properly help people," Satchell says. That's why, according to Satchell, the psychiatric definition of addiction being based around experiencing distress or significant family, social, or occupational disruption needs to be included in any definition of addiction we may use.

Too much reading causes... heat rashes?

But as Feifer points out in his podcast, neither the popularizing of medical language nor the fear that new technologies are addictive is a totally modern phenomenon.

Take, for instance, the concept of "reading mania."

In the 18th century, an author named J. G. Heinzmann claimed that people who read too many novels could experience something called "reading mania." This condition, Heinzmann explained, could cause many symptoms, including: "weakening of the eyes, heat rashes, gout, arthritis, hemorrhoids, asthma, apoplexy, pulmonary disease, indigestion, blocking of the bowels, nervous disorder, migraines, epilepsy, hypochondria, and melancholy."

"That is all very specific! But really, even the term 'reading mania' is medical," Feifer says.

"Manic episodes are not a joke, folks. But this didn't stop people a century later from applying the same term to wristwatches."

Indeed, an 1889 piece in the Newcastle Weekly Courant declared: "The watch mania, as it is called, is certainly excessive; indeed it becomes rabid."

Similar concerns have echoed throughout history about the radio, telephone, TV, and video games.

"It may sound comical in our modern context, but back then, when those new technologies were the latest distraction, they were probably really engaging. People spent too much time doing them," Feifer says. "And what can we say about that now, having seen it play out over and over and over again? We can say it's common. It's a common behavior. Doesn't mean it's the healthiest one. It's just not a medical problem."

Few today would argue that novels are in and of themselves addictive — regardless of how voraciously you may have consumed your last favorite novel. So, what happened? Were these things ever addictive — and if not, what was happening in these moments of concern?


There's a risk of pathologizing normal behavior, says Joel Billieux, professor of clinical psychology and psychological assessment at the University of Lausanne in Switzerland, and guest on the podcast. He's on a mission to understand how we can suss out what is truly addictive behavior versus what is normal behavior that we're calling addictive.

For Billieux and other professionals, this isn't just a rhetorical game. He uses the example of gaming addiction, which has come under increased scrutiny over the past half-decade. The language used around the subject of gaming addiction will determine how behaviors of potential patients are analyzed — and ultimately what treatment is recommended.

"For a lot of people you can realize that the gaming is actually a coping (mechanism for) social anxiety or trauma or depression," says Billieux.

"Those cases, of course, you will not necessarily target gaming per se. You will target what caused depression. And then as a result, If you succeed, gaming will diminish."

In some instances, a person might legitimately be addicted to gaming or technology, and require the corresponding treatment — but that treatment might be the wrong answer for another person.

"None of this is to discount that for some people, technology is a factor in a mental health problem," says Feifer.

"I am also not discounting that individual people can use technology such as smartphones or social media to a degree where it has a genuine negative impact on their lives. But the point here to understand is that people are complicated, our relationship with new technology is complicated, and addiction is complicated — and our efforts to simplify very complex things, and make generalizations across broad portions of the population, can lead to real harm."

Behavioral addiction is a notoriously complex thing for professionals to diagnose — even more so since the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the book professionals use to classify mental disorders, introduced a new idea about addiction in 2013.

"The DSM-5 grouped substance addiction with gambling addiction — this is the first time that substance addiction was directly categorized with any kind of behavioral addiction," Feifer says.

"And then, the DSM-5 went a tiny bit further — and proposed that other potentially addictive behaviors require further study."

This might not sound like that big of a deal to laypeople, but its effect was massive in medicine.

"Researchers started launching studies — not to see if a behavior like social media use can be addictive, but rather, to start with the assumption that social media use is addictive, and then to see how many people have the addiction," says Feifer.

Learned helplessness

The assumption that a lot of us are addicted to technology may itself be harming us by undermining our autonomy and belief that we have agency to create change in our own lives. That's what Nir Eyal, author of the books Hooked and Indistractable, calls 'learned helplessness.'

"The price of living in a world with so many good things in it is that sometimes we have to learn these new skills, these new behaviors to moderate our use," Eyal says. "One surefire way to not do anything is to believe you are powerless. That's what learned helplessness is all about."

So if it's not an addiction that most of us are experiencing when we check our phones 90 times a day or wonder what our followers are saying on Twitter — then what is it?

"A choice, a willful choice, and perhaps some people would not agree or would criticize your choices. But I think we cannot consider that as something that is pathological in the clinical sense," says Billieux.

Of course, for some people technology can be addictive.

"If something is genuinely interfering with your social or occupational life, and you have no ability to control it, then please seek help," says Feifer.

But for the vast majority of people, thinking about our use of technology as a choice — albeit not always a healthy one — can be the first step to overcoming unwanted habits.

For more, be sure to check out the Build for Tomorrow episode here.
