Top 6 fears about future technology
Many of our greatest fears stem from uncertainty about the future, and technology has made the future very uncertain indeed.
- Americans are scared, but hardly alone; people are primed by evolution to worry over their inability to control their future environment.
- Oxford professor Nick Bostrom has painted a doomsday scenario. Are he and Elon Musk correct?
- Even if these six fears come to pass—and some of them surely will—they aren't guaranteed to be as catastrophic as we think. Fortunately or unfortunately, we are incredibly bad at predicting the future.
The future is a scary place. According to a 2017 survey, many Americans' greatest fears—economic collapse, another world war, not having enough money for the future, etc.—are concerns over the state of tomorrow. (Although it is worth noting that their number one fear, corrupt government officials, is a clear and ever-present danger.)
Americans are hardly alone. People are primed to worry over their inability to control their future environment. Tomorrow's unpredictability requires that our brains view it with suspicion, as a potential threat to our survival. Unfortunately for our survival-primed brains, technology's influence is making our future ever more protean.

Today's technological advancements occur exponentially, and the average person will have to adjust to changes that would have previously taken several generations. Many of these advancements will, no doubt, be beneficial. Others, however, could prove less than advantageous.
Artificial superintelligence
Elon Musk speaks onstage at SXSW 2018 in Austin, Texas. During the conversation, Musk shared his fears over the future of AI.
(Photo by Diego Donamaria/Getty Images for SXSW)
Imagine a paperclip company creates an artificial superintelligence and tasks it with the single goal of making as many paperclips as possible. The company's stock soars, and humanity enters the golden age of the paperclip.
But something unexpected happens. The AI surveys the natural resources we need to survive and decides those could go a long way toward paperclip manufacturing. It consumes those resources in an effort to fulfill its prime directive, "make as many paperclips as possible," and wipes out humanity in the process.
This thought experiment, devised by Oxford professor Nick Bostrom, illustrates just one potential danger of creating an artificial superintelligence: namely, that we need to be very careful with our words.
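To make the lesson concrete, here is a minimal, purely illustrative Python sketch of a literal-minded optimizer; the toy world model and every name in it (world, raw_materials, paperclips) are invented for this example, not anyone's actual AI:

```python
# Hypothetical toy model: an optimizer that pursues its stated objective
# literally. Nothing in the objective says "preserve what humans need,"
# so nothing stops the loop from consuming every resource it can reach.

def maximize_paperclips(world):
    """Fulfill the prime directive exactly as worded: more paperclips."""
    while world["raw_materials"] > 0:
        world["raw_materials"] -= 1   # consume a unit of shared resources
        world["paperclips"] += 1      # convert it into another paperclip
    return world

world = {"raw_materials": 1_000_000, "paperclips": 0}
print(maximize_paperclips(world))
# {'raw_materials': 0, 'paperclips': 1000000} -- objective met, world emptied
```

The point of the sketch is that the failure lives in the objective's wording, not in any malice: an omitted constraint is an absent constraint.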
"I'm very close to the cutting-edge of AI, and it scares the hell out of me," Elon Musk, CEO of Tesla and SpaceX, said at SXSW 2018. "It is capable of vastly more than anyone knows, and the rate of improvement is exponential. […] We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that's the single biggest exponential crisis that we face."Bostrom and Musk paint worst-case scenarios, but there are plenty of worries over artificial superintelligence that don't end in human genocide. Experts have postulated that AI could automate terrorism, mass produce propaganda, and streamline hacking to devastating effects.
Job-stealing automation
Americans have steadily been losing work to automation for decades, but the trend appears to be picking up speed. Self-driving cars, for example, could soon displace 5 million workers nationwide.
But taxi drivers aren't the only people who should be worried. A McKinsey Global Institute study suggests that nearly 70 million people could lose their jobs to automation by 2030. U.S. workers in retail, agriculture, manufacturing, and food services may find their jobs on the automated chopping block.

No wonder Americans fear the incoming robo-revolution. A Pew Research report found that 72 percent of U.S. adults surveyed expressed worry over automation, compared with 33 percent who were enthusiastic. A majority were also hesitant to consider using automated services such as driverless cars or robotic caregivers.
Killer robots
We create robots to fight our wars for us, but they turn on their masters and bring ruin to our world. It's a classic science fiction conceit, and one we're much closer to than, say, first contact. Autonomous drones are already available, and it is only a matter of time before they make the leap from selfie-machine to combatant.
The Campaign to Stop Killer Robots worries about this future, but not about robotic warriors turning on their masters. Rather, the campaign believes that autonomous weapons will lead to an erosion of accountability in armed conflicts between states.
As stated on the campaign's website:
The use of fully autonomous weapons would create an accountability gap as there is no clarity on who would be legally responsible for a robot's actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians, and victims would be left unsatisfied that someone was punished for the harm they experienced.
Given the difficulties already associated with prosecuting war crimes, the concern merits serious attention.
Vicious virtual reality
A group of children wearing virtual reality headsets.
(Photo by Getty Images)
Virtual reality is here, and it looks way better than the '80s led us to believe it would. But as with any new technology, trepidation has welled up over how it will affect people's wellbeing, especially that of children.
"The gap between 'things that happen to my character' and 'things that happen to me' is bridged," Scott Stephen, a VR designer, told The New Yorker. "The way I process these scares is not through the eyes of a person using their critical media-viewing faculty but through eyes of I, the self, with all of the very human, systems-level, subconscious voodoo that comes along with that."Because the technology's availability has been limited until recently, not many studies that have looked at VR's effects on children, and the studies we have aren't conclusive. One study showed that children were more likely to create a false memory under VR's influence, but another study has shown its ability to reduce anxiety in children undergoing medical procedures.
Baleful biomedical technologies
In the coming years, we could cultivate biomaterials in labs to replace failing organs and splice genes in utero so children won't suffer the debilitating inherited diseases of their forebears. Biomedical technologies promise a future where we are all better, stronger, and faster, at a fraction of the cost of one Steve Austin.
But a 2016 Pew Research report suggests that Americans don't see these medical advancements as incoming miracles. Of those surveyed, a majority said they were either somewhat or very worried about brain chips that make us smarter (69 percent), genetic editing to reduce babies' risk of disease (68 percent), and synthetic blood to improve physical abilities (63 percent).
Their reasoning? Such enhancements "could exacerbate the divide between haves and have-nots" and be used as a measure of superiority by their recipients. The more religious the participants, the more likely they were to believe such technologies were "meddling with nature" and crossed "a line we should not cross." Mostly, though, we just loathe the idea of neighbors throwing a get-together to show off their fancy new brain chips.
Wholesale nuclear power
The ghost town of Pripyat, Ukraine, with the Chernobyl nuclear reactor in the background.
(Photo by MediaProduction/Getty Images)
On Aug. 6, 1945, the United States dropped an atomic bomb on Hiroshima, Japan. Since then, nuclear weapons have posed an existential threat to our species. As of January 2018, the Bulletin of the Atomic Scientists set the Doomsday Clock at a mere two minutes to midnight.
But weapons of mass destruction aren't why nuclear power made this list. It's here because people dread nuclear energy.
In a 2016 Gallup poll, a majority of Americans surveyed (54 percent) opposed nuclear energy, the first time a majority had opposed it since Gallup began asking the question in 1994. Of course, it's not hard to see where the fear originates. When nuclear power plants fail, they fail with devastating consequences. Three Mile Island, Chernobyl, Fukushima: the list is longer than we'd like.
But some experts argue that we need nuclear energy to decarbonize quickly enough to avert major climate catastrophe. Not only does nuclear power produce immense amounts of energy, it also has a low carbon footprint (lower than even solar).

"In most of the world, especially the rich world, they're not talking about building new reactors. We're actually talking about taking reactors down before their lifetimes are over," Michael Shellenberger, president of Environmental Progress, said during his TED talk. "[The United States] could lose half of our reactors over the next 15 years, which would wipe out 40 percent of the emissions reductions we're supposed to get under the Clean Power Plan."
A cloudy crystal ball
So, is the future a technological murder mansion, a place where every dark corner hides a robotic horror waiting to kill all humans or, at the very least, take all our jobs? Maybe, but probably not.
People have a strong desire to predict the course of tomorrow, and entire industries, from futurists to psychics to horoscope writers, have sprung up to meet that demand. Such conjectures return to us a semblance of control over our future environment.
To pick a few well-known examples: In the late 18th century, Thomas Malthus argued that unless family size was regulated, humanity would overpopulate the planet and condemn itself to famine. In 1989, Francis Fukuyama foresaw the end of history. And in 1998, the Y2K bug was predicted to wipe out computer networks across the world.
But Malthus couldn't predict the technological advancements in agriculture that could feed billions more people than existed in his day; Fukuyama could not foresee the political upheaval of events such as 9/11; and Y2K doomsayers, well, they were just wrong.
Even if these six fears come to pass (and some of them surely will), they aren't guaranteed to be as bad as predicted. Automation could wipe out 70 million jobs, but new innovations could generate new jobs that need filling. Biomedical technologies could widen the expanding gap between classes, but if we treat them as reconstructive procedures rather than aesthetic ones, then everyone should have a right to benefit.
That makes you feel better about the future… right?
From "if-by-whiskey" to the McNamara fallacy, being able to spot logical missteps is an invaluable skill.
- A fallacy is the use of invalid or faulty reasoning in an argument.
- There are two broad types of logical fallacies: formal and informal.
- A formal fallacy describes a flaw in the construction of a deductive argument, while an informal fallacy describes an error in reasoning.
Appeal to privacy
When someone behaves in a way that negatively affects (or could affect) others, but then gets upset when others criticize their behavior, they're likely engaging in the appeal to privacy, or "mind your own business," fallacy. Examples:
- Someone who speeds excessively on the highway, considering his driving to be his own business.
- Someone who doesn't see a reason to bathe or wear deodorant, but then boards a packed 10-hour flight.
Language to watch out for: "You're not the boss of me." "Worry about yourself."
Sunk cost fallacy
When someone argues for continuing a course of action despite evidence showing it's a mistake, it's often a sunk cost fallacy. The flawed logic here is something like: "We've already invested so much in this plan, we can't give up now." Examples:
- Someone who intentionally overeats at an all-you-can-eat buffet just to get their "money's worth."
- A scientist who won't admit his theory is incorrect because it would be too painful or costly.
Language to watch out for: "We must stay the course." "I've already invested so much...." "We've always done it this way, so we'll keep doing it this way."
If-by-whiskey
This fallacy is named after a speech given in 1952 by Noah S. "Soggy" Sweat, Jr., a state representative for Mississippi, on the subject of whether the state should legalize alcohol. Sweat's argument on prohibition was (to paraphrase):
If, by whiskey, you mean the devil's brew that causes so many problems in society, then I'm against it. But if whiskey means the oil of conversation, the philosopher's wine, "the stimulating drink that puts the spring in the old gentleman's step on a frosty, crispy morning," then I am certainly for it.
Slippery slope
This fallacy involves arguing against a position because you think choosing it would start a chain reaction of bad things, even though there's little evidence to support your claim. Examples:
- "We can't allow abortion because then society will lose its general respect for life, and it'll become harder to punish people for committing violent acts like murder."
- "We can't legalize gay marriage. If we do, what's next? Allowing people to marry cats and dogs?" (Some people actually made this argument before same-sex marriage was legalized in the U.S.)
Of course, sometimes decisions do start a chain reaction, which could be bad. The slippery slope device only becomes a fallacy when there's no evidence to suggest that chain reaction would actually occur.
Language to watch out for: "If we do that, then what's next?"
"There is no alternative"<p><span style="background-color: initial;">A modification of the </span><a href="https://en.wikipedia.org/wiki/False_dilemma" target="_blank" style="background-color: initial;">false dilemma</a><span style="background-color: initial;">, this fallacy (often abbreviated to TINA) argues for a specific position because there are no realistic alternatives. Former British Prime Minister Margaret Thatcher used this exact line as a slogan to defend capitalism, and it's still used today to that same end: Sure, capitalism has its problems, but we've seen the horrors that occur when we try anything else, so there is no alternative.</span><br></p><p>Language to watch out for: "If I had a magic wand…" "What <em>else</em> are we going to do?!"</p>
Ad hoc arguments
An ad hoc argument isn't really a logical fallacy, but it is a fallacious rhetorical strategy that's common and often hard to spot. It occurs when someone's claim is threatened with counterevidence, so they come up with a rationale to dismiss the counterevidence, hoping to protect their original claim. Ad hoc claims aren't designed to be generalizable. Instead, they're typically invented in the moment. RationalWiki provides an example:
Alice: "It is clearly said in the Bible that the Ark was 450 feet long, 75 feet wide and 45 feet high."
Bob: "A purely wooden vessel of that size could not be constructed; the largest real wooden vessels were Chinese treasure ships, which required iron hoops to build their keels. Even the Wyoming, which was built in 1909 and had iron braces, had problems with her hull flexing and opening up and needed constant mechanical pumping to stop her hold flooding."
Alice: "It's possible that God intervened and allowed the Ark to float, and since we don't know what gopher wood is, it is possible that it is a much stronger form of wood than any that comes from a modern tree."
Snow job
This fallacy occurs when someone doesn't really have a strong argument, so they just throw a bunch of irrelevant facts, numbers, anecdotes, and other information at the audience to confuse the issue, making it harder to refute the original claim. Example:
- A tobacco company spokesperson who is confronted about the health risks of smoking, but then proceeds to show graph after graph depicting many of the other ways people develop cancer, and how cancer metastasizes in the body, etc.
Watch out for long-winded, data-heavy arguments that seem confusing by design.
McNamara fallacy
Named after Robert McNamara, the U.S. secretary of defense from 1961 to 1968, this fallacy occurs when decisions are made based solely on quantitative metrics or observations, ignoring all other factors. It stems from the Vietnam War, in which McNamara sought to develop a formula to measure progress in the war. He decided on body count. But this "objective" formula didn't account for other important factors, such as the possibility that the Vietnamese people would never surrender.
You could also imagine this fallacy playing out in a medical situation. Imagine a terminal cancer patient has a tumor, and a certain procedure helps to reduce the size of the tumor but also causes a lot of pain. Ignoring quality of life would be an example of the McNamara fallacy.
Language to watch out for: "You can't measure that, so it's not important."
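As a toy illustration of the medical example above, here is a hypothetical Python sketch; all option names and numbers are invented:

```python
# Hypothetical example of the McNamara fallacy: ranking options only by
# the factor that is easy to quantify, ignoring everything else.

treatments = [
    {"name": "aggressive procedure", "tumor_shrinkage": 0.60, "quality_of_life": 0.2},
    {"name": "palliative care",      "tumor_shrinkage": 0.05, "quality_of_life": 0.9},
]

# McNamara-style decision: only the measurable metric counts.
best_by_metric = max(treatments, key=lambda t: t["tumor_shrinkage"])

# A fuller decision that also weighs the hard-to-measure factor.
best_overall = max(treatments, key=lambda t: t["tumor_shrinkage"] + t["quality_of_life"])

print(best_by_metric["name"])  # aggressive procedure
print(best_overall["name"])    # palliative care
```

The two rankings disagree precisely because the first one treats "hard to measure" as "unimportant," which is the fallacy in a nutshell.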
A new study looks at what would happen to human language on a long journey to other star systems.
- A new study proposes that language could change dramatically on long space voyages.
- Spacefaring people might lose the ability to understand the people of Earth.
- This scenario is of particular concern for potential "generation ships."
Generation Ships
(Video: https://www.youtube.com/embed/H2f0Wd3zNj0)