Computer Algorithms Are Killing Jobs and Narrowing Our Personalities
The history of industrialization is the separation of workers from their labor, and it continues today in the digital marketplace where online companies seek to replace human labor with algorithms.
Douglas Rushkoff is the host of the Team Human podcast and a professor of digital economics at CUNY/Queens. He is also the author of a dozen bestselling books on media, technology, and culture, including Present Shock, Program or Be Programmed, Media Virus, and, most recently, Team Human.
Douglas Rushkoff: What people have to remember is that the object of industrialism wasn't to make more stuff better and faster; it was to disconnect labor from the value it created. So if I have a shoe factory, I don't want to hire expensive shoemakers and cobblers. I want to go to the Home Depot parking lot, find a bunch of undocumented workers, and pay them two cents an hour. I'm going to teach each of them something that takes 15 minutes to learn: how to nail one nail into the shoe and then pass it on to the next guy. The person who understands how this all works is actually my enemy.
So fast forward to today: when we implement digital technologies, we try to do it in ways that get rid of people. We don't want employees. If you need human beings, how are you going to scale up? It has to be an algorithm. The easy way to think about it is that most people's first interaction with a computer was probably a telephone answering system. Sure, a company could have a human receptionist sitting there, but she's got a salary, she's got benefits, she's got a health plan. Get rid of her and put in a computer. People who call your company will have to take a little more time to get through all those menus, but you're going to save a lot of money, and it makes you look kind of high tech.
But while you save money, everybody who calls the company now spends more time going through those menus. You've actually created more work rather than less; you've externalized the cost of your human receptionist onto everybody else. So then what do they do? Now they all have to install automated systems too, because they have to externalize the cost onto everybody else. So we all end up spending more time and energy going through those menus than we did when somebody was hired for the job. But we're so biased against hiring, because a company's stock price goes up if it can show it has hired fewer people, that we end up perpetuating the system.
So when we implement digital technologies in order to get people out of the way, out of the company, we end up killing the only expertise we have. If you're using algorithms and big data rather than designers to figure out your next product line, what's your competitive advantage? The other company is using that same data and probably hiring the same big-data analytics firm to figure out the future trend. You've been turned into a commodity. You've got to reverse that in a digital age. What you want is the most qualified people you can find, so that your business can actually differentiate itself from all the other automated, algorithmic, nonsensical platforms out there.
What consumers have to understand is that there's a value proposition in everything they use. They have to be able to ask themselves (and currently they can't): is this platform creating value for me, or am I creating value for it? Or is there an exchange that I'm aware of and okay with? Do I want to run my social life on Facebook? Is this an exchange that I like? Do I like defining myself in that way? Do I like these radio buttons? Do I want to present myself to the world through this platform, and am I okay with everything they know about me? I don't know. Am I okay getting my news and information through a newsfeed that's algorithmically optimized to make me click on things, to narrow down and figure out who I am? Am I okay living on a platform that uses past data about me to advertise and market a future to me that I haven't yet decided to live?
If they know there's a 70 or 80 percent chance that I might go on a diet in a month, what are they going to do? They start filling my newsfeed: hey, you're looking kind of fat. Something is wrong with you. They're going to try to steer me to be more consistent with my profile, to make me a more predictable and cooperative consumer. Maybe that's okay if it really was going to be a diet and they're just encouraging me to go on it. But what about the other 20 percent? What about what I might have done instead? What about the unpredictability that would've made me different from the next guy and let me innovate something, have a new idea, have a more interesting, anomalous, weird personal life? Well, maybe I'm okay surrendering that. Maybe I want to be more like the rest of my statistical profile. But at least I should know. At least I should know that my Google search results are different from yours. Why? Because Google wants me to do something. Google wants me to be a certain way. Google wants to help me be the real me. But how do they know what the real me is? What's the algorithm they're using, and to what end?
Every time you navigate an automated telephone menu — a recorded voice helping you do what a receptionist could help you with in half the time — you are experiencing an early form of digital industrialization. Today, more sophisticated algorithms are replacing human employees, further separating workers from their labor.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some version of it seeps into our reality. This week, Nature published the findings of researchers who managed to restore function to pig brains that were clinically dead. At least, dead by what we once understood that to mean.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy ("What is dead may never die") but largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx, an artificial perfusion system, one that takes over functions the body's circulation normally provides for the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into each brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and they do not yet know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, the Yale neuroscientist who led the research.
As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as expected.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. The work could point the way to new treatments for brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote commentary accompanying the study, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains' neural activity came nowhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises questions about whether such patients are truly dead. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how should such boards judge the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook this work for the betterment of humanity, and we may one day reap unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.