The race to predict our behavior: Tristan Harris testifies before Congress
The design ethicist did not hold back his concerns when talking to political leaders.
- Former Google design ethicist Tristan Harris recently spoke before Congress about the dangers of Big Tech.
- Harris told senators that behavior prediction models are a troublesome part of design in the race for attention.
- He warned that without ethical considerations, the dangers to our privacy will only get worse.
In a strange cultural turn of events, ethics are at the forefront of numerous conversations. For decades they were generally overlooked (outside of family and social circles) or treated like Victorian manners (i.e., "boys will be boys"). To focus too diligently on ethics likely meant you were religious or professorial, both of which were frowned upon in a "freedom-loving" country like America. To garner national attention, an ethical breach had to be severe.
Then the potent combination of the current administration and privacy concerns at technology companies — the two being connected — prompted a large portion of the population to ask: Is this who I really want to be? Is this how I want to be treated? For many, the answers were no and no.
While ethical questions surface in our tweets and posts, we rarely examine the engines driving those narratives. These platforms are not benign sounding boards. For the socially awkward, social media offers cover. We can type out our demons (often) without recourse, unconcerned about what eyes and hearts read our vitriolic rants; the endless trolling takes its toll. These neuroses and insecurities playing out on screens are getting tiresome. Life 1.0 is not a nostalgic fantasy to return to but a reality we need to revive, at least more often than not.
This is why former Google design ethicist Tristan Harris left the tech behemoth to form the Center for Humane Technology. He knows the algorithms are toxic by intention, and therefore by design. So when South Dakota Senator John Thune recently led a hearing on technology companies' "use of algorithms and how it affects what consumers see online," he invited Harris to testify. Considering how lackluster previous congressional hearings on technology have been, the government has plenty of catching up to do.
The ethicist did not hold back. Harris opened by informing the committee that algorithms are purposefully created to keep people hooked; it's an inherent part of the business model.
"It's sad to me because it's happening not by accident but by design, because the business model is to keep people engaged, which in other words, this hearing is about persuasive technology and persuasion is about an invisible asymmetry of power."
Tristan Harris, testimony before the U.S. Senate, June 25, 2019
Harris tells the panel what he learned as a child magician, a topic he explored in a 2016 Medium article. A magician's power requires their audience to buy in; otherwise, tricks would be quickly spotted. Illusions only work when you're not paying attention. Tech platforms utilize a similar asymmetry, such as by hiring PR firms to spin stories of global connection and trust to cover their actual tracks.
As each track leads to profit maximization, companies must become more aggressive in the race for attention. First it was likes and dislikes, making consumers active participants, causing them to feel as if they have personal agency within the platform. They do, to an extent, yet as the algorithms churn along, they learn user behavior, creating "two billion Truman Shows." This process has led to Harris's real concern: artificial intelligence.
AI, Harris continues, has been shown to predict our behavior better than we can ourselves. It can predict our political affiliation with 80 percent accuracy, infer a person's sexual orientation before they are aware of it themselves, and start suggesting strollers before the pregnancy test turns pink.
Prediction is an integral part of our biology. As the neuroscientist Rodolfo Llinas writes, eukaryotes navigate their environment through prediction: this way leads to nutrition (go!), that looks like a predator (swim away!). Humans, like all biological life, predict our way through our environment. The problem is that since the Industrial Revolution we've created relatively safe environments to inhabit. Because we don't have to pay attention to rustling bushes or stalking predators, we offload memory to calendars and GPS devices; we offload agency to the computers in our hands. Our spider senses have diminished.
Even during the technological revolution, we remain Paleolithic animals. Our tribalism is obvious. Harris says that in the race for attention companies exploit our need for social validation, using neuroscientist Paul MacLean's triune brain model to explain the climb down the brainstem to our basest impulses. Combine this with a depleted awareness of our surroundings and influencing our behavior becomes simple. When attention is focused purely on survival — in this age, surviving social media — a form of mass narcissism emerges: everything is about the self. The world shrinks as the ego expands.
We know well the algorithmic issues with YouTube: 70 percent of time spent watching videos is thanks to recommendations. The journalist Eric Schlosser has written about the evolution of pornography: partial nudity became softcore became hardcore became anything imaginable because we kept acclimating to what was once scandalous and wanted more. The YouTube spiral creates a similar dopamine-fueled journey into right- and left-wing politics. Once validated, you'll do anything to keep that feeling alive. Since your environment is the world inside your head, influenced by a screen, hypnotism is child's play to an AI with no sense of altruism or compassion.
How a handful of tech companies control billions of minds every day | Tristan Harris
That feeling of outrage sparked by validation is gold to technology companies. As Harris says, the race to predict your behavior occurs click by click.
"Facebook has something called loyalty prediction, where they can actually predict to an advertiser when you're about to become disloyal to a brand. So if you're a mother and you take Pampers diapers, they can tell Pampers, 'Hey, this user is about to become disloyal to this brand.' So in other words, they can predict things about us that we don't know about our own selves."
Harris isn't the only one concerned about this race. Technology writer Arthur Holland Michel recently discussed an equally (if not more) disturbing trend at Amazon that perfectly encapsulates the junction between prediction and privacy.
"Amazon has a patent for a system to analyze the video footage of private properties collected by its delivery drones and then feed that analysis into its product recommendation algorithm. You order an iPad case, a drone comes to your home and delivers it. While delivering this package the drone's computer vision system picks up that the trees in your backyard look unhealthy, which is fed into the system, and then you get a recommendation for tree fertilizer."
Harris relates this practice by tech companies to a priest selling access to confessional booths, only in this case these platforms have billions of confessions to sell. Once they learn what you're confessing, it's easy to predict what you'll confess to next. With that information, they can sell you before you had any idea you were in the market.
Harris notes that Fred Rogers sat before the very same congressional committee fifty years earlier to warn of the dangers of "animated bombardment." He thinks the world's friendliest neighbor would be horrified by how his warning has played out. Algorithms are influencing our politics, race relations, environmental policies — we can't even celebrate a World Cup victory without politicization (and I'm not referencing equal pay, which is a great use of such a platform).
The nature of identity has long been a philosophical question, but one thing is certain: no one is immune to influence. Ethics were not baked into the system we now rely on. If we don't consciously add them in — and this will have to be where government steps in, as the idea of companies self-regulating is a joke — any level of asymmetric power is possible. We'll stay amazed at the twittering bird flying from the hat, ignorant of the grinning man who pulled it from thin air.
Construction of the $500 billion tech city-state of the future is moving ahead.
- The futuristic megacity Neom is being built in Saudi Arabia.
- The city will be fully automated, leading in health, education and quality of life.
- It will feature an artificial moon, cloud seeding, robotic gladiators and flying taxis.
The Red Sea area where Neom will be built.
Saudi Arabia Plans Futuristic City, "Neom" (Full Promotional Video)
A study of how memory works turns up a surprising finding.
- Researchers have found that some basic words appear to be more memorable than others.
- Some faces are also easier to commit to memory.
- Scientists suggest that these words serve as semantic bridges when the brain is searching for a memory.
Cognitive psychologist Weizhen Xie (Zane) of the NIH's National Institute of Neurological Disorders and Stroke (NINDS) works with people who have intractable epilepsy, a form of the disorder that can't be controlled with medications. During research into the brain activity of patients, he and his colleagues discovered something odd about human memory: It appears that certain basic words are consistently more memorable than other basic words.
The research is published in Nature Human Behaviour.
An odd find
Image source: Tsekhmister/Shutterstock
Xie's team was re-analyzing memory tests of 30 epilepsy patients undertaken by Kareem Zaghloul of NINDS.
"Our goal is to find and eliminate the source of these harmful and debilitating seizures," Zaghloul said. "The monitoring period also provides a rare opportunity to record the neural activity that controls other parts of our lives. With the help of these patient volunteers we have been able to uncover some of the blueprints behind our memories."
Specifically, the participants were shown word pairs, such as "hand" and "apple." To probe how the brain remembers such pairings, participants were given one of the two words after a brief interval and asked to recall the other. Of the 300 words used in the tests, five proved to be five times more likely to be recalled than the rest: pig, tank, doll, pond, and door.
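The paired-recall scoring the study relies on can be illustrated with a toy sketch: tally, for each target word, how often it was successfully recalled across trials. The data below are invented for illustration; the study's actual per-word rates came from the patients' test sessions.

```python
from collections import defaultdict

def recall_rates(trials):
    """Compute per-word recall rates from (target_word, was_recalled) trials."""
    shown = defaultdict(int)
    recalled = defaultdict(int)
    for word, ok in trials:
        shown[word] += 1
        if ok:
            recalled[word] += 1
    return {w: recalled[w] / shown[w] for w in shown}

# Invented trial outcomes, purely for illustration.
trials = [
    ("pig", True), ("pig", True), ("pig", False),
    ("cloud", False), ("cloud", True), ("cloud", False),
]
rates = recall_rates(trials)
print(rates["pig"])    # recalled in 2 of 3 trials
print(rates["cloud"])  # recalled in 1 of 3 trials
```

Aggregating rates this way across hundreds of participants is what lets outlier words like "pig" and "door" stand out from the rest of the 300-word pool.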
The scientists were perplexed that these words were so much more memorable than words like "cat," "street," "stair," "couch," and "cloud."
Intrigued, the researchers looked at a second data source from a word test taken by 2,623 healthy individuals via Amazon's Mechanical Turk and found essentially the same thing.
"We saw that some things — in this case, words — may be inherently easier for our brains to recall than others," Zaghloul said. That the Mechanical Turk results were so similar may "provide the strongest evidence to date that what we discovered about how the brain controls memory in this set of patients may also be true for people outside of the study."
Why understanding memory matters
Image source: Orawan Pattarawimonchai/Shutterstock
"Our memories play a fundamental role in who we are and how our brains work," Xie said. "However, one of the biggest challenges of studying memory is that people often remember the same things in different ways, making it difficult for researchers to compare people's performances on memory tests." He added that the search for some kind of unified theory of memory has been going on for over a century.
The researchers say that if "we can predict what people should remember in advance and understand how our brains do this, then we might be able to develop better ways to evaluate someone's overall brain health."
Image source: joob_in/Shutterstock
Xie's interest in this was piqued during a conversation with Wilma Bainbridge of the University of Chicago at a Christmas party a couple of years ago. Bainbridge was, at the time, wrapping up a study of 1,000 volunteers suggesting that certain faces are universally more memorable than others.
Bainbridge recalls, "Our exciting finding is that there are some images of people or places that are inherently memorable for all people, even though we have each seen different things in our lives. And if image memorability is so powerful, this means we can know in advance what people are likely to remember or forget."
Image source: Anatomography/Wikimedia
At first, the scientists suspected that the memorable words and faces were simply encountered more frequently and were thus easier to recall. They envisioned them as being akin to "highly trafficked spots connected to smaller spots representing the less memorable words." They developed a modeling program based on word frequencies found in books, news articles, and Wikipedia pages. Unfortunately, the model was unable to predict or duplicate the results they saw in their clinical experiments.
Eventually, the researchers came to suspect that the memorability of certain words was linked to how frequently the brain uses them as semantic links between other memories, making them often-visited hubs in individuals' memory networks and therefore places the brain jumps to early and often when retrieving memories. This idea was supported by observed activity in participants' anterior temporal lobe, a language center.
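The hub idea can be sketched as a toy network in which memorability tracks how many connections a word has. The words and links below are invented for illustration; the study inferred its conclusions from recall behavior and neural recordings, not from a hand-built graph like this one.

```python
from collections import defaultdict

# Toy semantic network: each edge links words that act as associative
# bridges between memories. All edges here are invented examples.
edges = [
    ("pig", "farm"), ("pig", "mud"), ("pig", "food"), ("pig", "animal"),
    ("door", "house"), ("door", "key"), ("door", "open"),
    ("cloud", "sky"),
    ("couch", "room"),
]

# Count each word's connections (its degree in the network).
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Words with more connections act as often-visited hubs — the proposed
# reason some words are retrieved early and often.
ranking = sorted(degree, key=degree.get, reverse=True)
print(ranking[:2])  # the most-connected "hub" words
```

In this toy graph, "pig" and "door" emerge as hubs simply because more paths run through them, which is the intuition behind the researchers' hypothesis.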
In epilepsy patients, these words were so frequently recalled that subjects often shouted them out even when they were incorrect responses to word-pair inquiries.
Modern search engines no longer simply look for raw words when resolving an inquiry: They also look for semantic — contextual and meaning — connections so that the results they present may better anticipate what it is you're looking for. Xie suggests something similar may be happening in the brain: "You know when you type words into a search engine, and it shows you a list of highly relevant guesses? It feels like the search engine is reading your mind. Well, our results suggest that the brains of the subjects in this study did something similar when they tried to recall a paired word, and we think that this may happen when we remember many of our past experiences."
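The search-engine analogy can be illustrated with a minimal semantic-matching sketch: represent words as feature vectors and rank candidates by cosine similarity. The vectors here are hand-made toys, not real embeddings or anything used in the study.

```python
import math

# Invented feature vectors, purely for illustration.
vectors = {
    "pond": [0.9, 0.1, 0.0],  # water-related
    "lake": [0.8, 0.2, 0.1],
    "door": [0.0, 0.9, 0.1],  # household
    "tank": [0.5, 0.1, 0.6],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest(query):
    """Return the word most semantically similar to the query."""
    q = vectors[query]
    candidates = (w for w in vectors if w != query)
    return max(candidates, key=lambda w: cosine(q, vectors[w]))

print(nearest("pond"))  # "lake": the semantically closest word
```

Ranking by similarity rather than exact match is the "highly relevant guesses" behavior Xie describes, and the rough analogue of the brain jumping between semantically linked memories.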
He also notes that it may one day be possible to leverage individuals' apparently wired-in knowledge of their language as a fixed point against which to assess the health of their memory and brain.
If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.
- Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
- Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
- One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.