Trusting a Robot with Your Life: Can Self-Driving Cars Earn the Public's Trust?
How far are you willing to trust technology? Make a call, share a photo, find a good restaurant, pay a bill, vacuum the floor? But will you trust autonomous systems with your life and the lives of others? Autonomous vehicles will be making decisions for us at 60 mph and more -- the question facing us may no longer be technological but social. How much do you trust a robot?
We should never lose sight of how much trust we have come to place in technology. Of course, the reason we give tech-based services our trust is because, well, they’ve earned it: they’re fast; they’re efficient; they’re generally reliable; they work, to our eyes, as if by magic. We trust them even as we don’t trust them: dubious as people generally are about sharing their personal information with tech companies, that dubiousness doesn’t appear to be enough to alter anybody’s habits. A company’s ability to provide a service efficiently overwhelms the secondary consideration of whether it does so responsibly. We don’t like the idea of a corporation manhandling our data, but that concern ultimately remains in the realm of ideas, too abstract or at least not powerful enough to sway anyone’s behavior.
In other words, it seems as if we’ll trust tech to do anything for us, as long as it does it fast and well and cheap. But what happens when tech moves into fields where the stakes are higher than keeping in touch with friends, looking up shawarma places, or delivering stuff to our doorsteps? Will we be willing to trust Google, Apple, Tesla or brands yet to be created with our lives? That’s not a thought experiment. In developing self-driving cars, tech companies are worming their way into the automotive industry, where the result of a software bug won’t be a website outage or an app crash, but crumpled steel, distant sirens, a crowd of onlookers, a cry for help….
Today, only about 26% of consumers say they would purchase a self-driving car, and the main barrier seems to be trust. Most people don’t believe an autonomous car can keep them safe; never mind that nearly all car accidents are the result of human error. Trust, in this case, has far more to do with instinct than with reason or practicality. For example, people who take test rides in self-driving cars tend to be made nervous by the narrow berth the robotic driver gives when it passes parked vehicles. The fact that self-driving cars are more precise than humans has the ironic effect of reducing our trust in them. On the other hand, if a consumer is presented with a hypothetical dilemma faced by an autonomous system, such as its being forced to choose between the life of the vehicle’s passenger and the life of a pedestrian, the consumer may refuse to consider the idea, period. She will say that as long as such a dilemma is in the realm of possibility, the technology simply should not be employed. In other words, it has to be perfect. But any such perfection is, of course, an attempt to contemplate infinity.
Ultimately, the developers of autonomous vehicles will have to meet a public threshold of trust far above what we expect from tech companies or humans today. Even leaders in technology, such as Boeing, Airbus and others in aerospace, still keep a person in the left seat -- even if most of that time is spent watching the system operate, giving everyone behind them a warm feeling that "someone" is in control. Note that Hollywood is still making movies about human heroes making decisions in a pinch -- Sully (2016). And this trust gap, as it were, is an opening that could very well be exploited by any brand willing to make the leap into the automotive industry. It could just as well be a company that nobody’s talking about right now -- Amazon? Verizon? Microsoft? -- that comes to dominate the burgeoning industry, rather than Google or Tesla. The question will be which company can best leverage its image to convince customers to entrust it with their lives.
For years, tech has operated on the ethos of disruption. Nothing makes Silicon Valley happier than upending a whole industry. But disruptive may not be what you want to be when you’re in the business of human lives. Moving too hastily -- pushing for the implementation of autonomous technologies before they’re ready -- could lead to disaster. If the rollout of self-driving cars leads to negative headlines questioning their reliability, then their developers will find themselves struggling to make up for a yawning trust deficit, something that could delay the wide-scale adoption of autonomous vehicles for years.
The smartest way for tech companies to move forward with autonomous cars might be to work closely with the government. Sometimes government moves sluggishly because it is inefficient, yes; but often, the pace of government merely reflects the gravity of the duties to which it has been assigned, and the accompanying need to act prudently. Policymaking can act as a circuit breaker when society may not yet be ready for dramatic change. Tech will have to develop something of a conservative streak and a willingness to work closely with regulators if it wants to survive in the auto industry.
In 1900, the driverless elevator was invented -- to which you might respond, “Why would an elevator need a driver?” It’s second nature for us to ride an automatic elevator today, but the technology was outright feared when it was introduced. People who stepped into an automatic elevator were apt to turn around and walk right back out of it. It took over fifty years, an elevator operator strike, and a coordinated industry ad campaign for driverless elevators to finally be accepted -- a cautionary tale for those of us who hope for big things from autonomous vehicles in the near future. Is fifty years the timeline we ought to expect for people to grow fully comfortable with a computer taking the wheel? Or will the practical benefits of self-driving cars quickly overwhelm people’s concerns? So far, at least, tech companies have always been able to bank on that.
MIT AgeLab's Adam Felts contributed to this article.
- The meaning of the word 'confidence' seems obvious. But it's not the same as self-esteem.
- Confidence isn't just a feeling inside you. It comes from taking action in the world.
If you're lacking confidence and feel like you could benefit from an ego boost, try writing your life story.
In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but B-grade sci-fi specifically. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the circulatory functions the body normally provides to the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into the brain that essentially mimics blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers managed to keep some brains alive for up to 36 hours, and currently do not know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None revived brain activity, and all deteriorated as normal.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicist at the Duke University School of Law who wrote the commentary accompanying the study, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they see signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.
A space memorial company plans to launch the ashes of "Pikachu," a well-loved tabby, into space.
- Steve Munt, Pikachu's owner, created a GoFundMe page to raise money for the mission.
- If all goes according to plan, Pikachu will be the second cat to enter space, the first being a French feline named Felicette.
- It might seem frivolous, but the cat-lovers commenting on Munt's GoFundMe page would likely disagree.