Should Society Be Able to Tell You How To Augment Your Reality?
Over at Atlantic Wire, Evan Selinger is wondering about a potential downside to augmented reality technology: What if people want to tune augmented-reality tech to their prejudices? Specifically, he imagines a little old lady (OK, his grandmother) tuning her personal heads-up display to filter out African-Americans—maybe by turning brown faces into copies of Larry David or Rhea Perlman. Or, he continues, imagine reality-supplementing eyewear that keeps track of whom you look away from and who triggers a stress response—so it can edit out any people you'd rather not see. (Let's just stipulate that it's good enough not to also eliminate the person you're madly in love with, who provokes pretty similar signals—this is a thought experiment, after all.)
Would that kind of editing be a bad thing? Imagine that Selinger's granny was avoiding the Post Office or a nice restaurant because she was afraid of black people. If her eyewear algorithm eliminated the sight of those people, she'd get her mail and dine out without stress. Of course, the people she was mis-seeing would be offended if they knew what was going on (social erasure being a problem that long predates 21st-century technology). But how would they know?
The trouble arises at the societal and political levels (how things play out for a community full of people wearing these see-what-I-want-to-see glasses). It does seem bad for society to let people wander about in cognitive and emotional cocoons. Maybe we want the law to insist that they see their community as it is, not as their prejudices would have it be.
The problem with that claim, of course, is that people already walk around in cocoons. We already have devices that filter our perceptions so that we see facts and events and people as we want to see them. These devices are called human brains, and they are good at the work. What augmented-reality devices would do is move our preferences and prejudices outside our heads. Instead of being expressed outside our awareness, they would have to be made explicit, so they could be programmed into our devices. And this explicitness would give society a way to engage with thoughts and emotions that are now hidden from other people (and, often, from ourselves). So moving our reality-filters from mind to device could make them more subject to regulation.
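To see what that explicitness might look like, here is a minimal sketch in Python. Every name in it (the rule fields, the category labels, the audit function) is hypothetical, not any real AR platform's API; the point is only that a device's filters, unlike a brain's, exist as a list somebody else can read:

```python
from dataclasses import dataclass

@dataclass
class FilterRule:
    """One explicit see-what-I-want-to-see rule on a hypothetical AR device."""
    target: str  # what the wearer wants edited, e.g. "faces:group_x"
    action: str  # e.g. "replace", "blur", "remove"

def audit(rules, banned_targets):
    """Return any rules a (hypothetical) regulator could flag.

    The brain's filters are invisible; a rule list like this is not.
    """
    return [r for r in rules if r.target in banned_targets]

rules = [
    FilterRule(target="faces:group_x", action="replace"),
    FilterRule(target="billboards:ads", action="remove"),
]
print(audit(rules, banned_targets={"faces:group_x"}))
# -> [FilterRule(target='faces:group_x', action='replace')]
```

The detail that matters isn't the audit logic, which is trivial; it's that the prejudice had to be written down at all before the device could act on it.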
But what sort of regulations would a future society want? On this point, America's political and business cultures are ambivalent. Polite and proper Americans know they're supposed to value diversity, which requires seeing differences and appreciating how they help an organization or a society. Yet we also know that we're supposed to act as if we don't see those differences: a pretense of "color blindness" is now the norm in American corporations and institutions. Small children in experiments take care to hide the fact that they have noticed race differences, and in other experiments adults go to some trouble to appear unprejudiced. (Such research is part of a wave of work on social meta-cognition—your management not only of your own racial feelings but of others' perceptions of your racial feelings.)
So maybe the (Chelsea) Clinton Administration of 2032 will mandate that people's augmented reality smooth out differences and make every citizen look like every other. Or maybe it will instead ban any form of augmentation that distorts people's real appearance. (But what if they want to appear different than they look? That's another wrinkle.) Seems to me it could go either way. Maybe we're more pro-social and unbiased when we have more information about each other; but maybe we're more tolerant when we know less.
Of course, there are other regulatory options. Gary Marcus, for instance, suggested that perhaps those Google Glasses of the future could be required to give you empathy-inducing information about people you see—reminding you that the person in front of you was a fellow grandparent, Red Sox fan, Lutheran, gardener or whatever, and thus pushing back against any racist or sexist or other "ist-y" thoughts. Your device would end up gently nudging you to be a better citizen, inclined to see others as in the same boat, rather than as scary members of Them. This would, again, be a technological extension of what people do now. Example: Years ago a friend of mine, who was about to be visited by a tax auditor with an Irish name, reached into his ashtray and daubed his forehead with some cinders. It was Ash Wednesday, and he figured it couldn't hurt to send a (false) message to the IRS man that they were both pious churchgoers.
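To make Marcus's overlay idea a bit more concrete, here is a toy sketch in Python. Nothing in it is a real Google Glass API; it assumes some hypothetical recognition layer has already produced attribute tags for the wearer and for the person in view, and shows only the overlap-surfacing step:

```python
def empathy_overlay(viewer_tags, other_tags):
    """Surface what the wearer shares with the person in view,
    nudging toward 'same boat' rather than 'Them'."""
    shared = sorted(set(viewer_tags) & set(other_tags))
    if not shared:
        return ""  # nothing in common found; show no overlay
    return "You both: " + ", ".join(shared)

print(empathy_overlay(
    {"grandparent", "Red Sox fan", "gardener"},
    {"gardener", "Lutheran", "grandparent"},
))
# -> You both: gardener, grandparent
```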
The root question, in any event, is well worth pondering, especially as most technophiles are knee-jerk libertarians: Once augmented-reality technologies become truly ubiquitous and truly powerful, will government have a say in how we can and can't use them?
Follow me on Twitter: @davidberreby
From "if-by-whiskey" to the McNamara fallacy, being able to spot logical missteps is an invaluable skill.
- A fallacy is the use of invalid or faulty reasoning in an argument.
- There are two broad types of logical fallacies: formal and informal.
- A formal fallacy describes a flaw in the construction of a deductive argument, while an informal fallacy describes an error in reasoning.
Appeal to privacy

When someone behaves in a way that negatively affects (or could affect) others, but then gets upset when others criticize their behavior, they're likely engaging in the appeal to privacy — or "mind your own business" — fallacy. Examples:

- Someone who speeds excessively on the highway, considering his driving to be his own business.
- Someone who doesn't see a reason to bathe or wear deodorant, but then boards a packed 10-hour flight.

Language to watch out for: "You're not the boss of me." "Worry about yourself."
Sunk cost fallacy

When someone argues for continuing a course of action despite evidence showing it's a mistake, it's often a sunk cost fallacy. The flawed logic here is something like: "We've already invested so much in this plan, we can't give up now." Examples:

- Someone who intentionally overeats at an all-you-can-eat buffet just to get their "money's worth"
- A scientist who won't admit his theory is incorrect because it would be too painful or costly

Language to watch out for: "We must stay the course." "I've already invested so much...." "We've always done it this way, so we'll keep doing it this way."
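The rational rule the fallacy violates is easy to make concrete. Here is a minimal sketch with invented numbers: only future costs and benefits enter the decision, and the sunk amount never appears in the comparison.

```python
def should_continue(future_benefit, future_cost):
    """Decide on what lies ahead; sunk costs never appear here."""
    return future_benefit > future_cost

sunk_cost = 900_000       # already spent and unrecoverable either way
future_benefit = 100_000  # what finishing the project would return
future_cost = 250_000     # what finishing would still cost

print(should_continue(future_benefit, future_cost))  # -> False
# The fallacy is letting the 900,000, which is gone regardless,
# flip this decision to True.
```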
If-by-whiskey

This fallacy is named after a speech given in 1952 by [Noah S. "Soggy" Sweat, Jr.](https://en.wikipedia.org/wiki/Noah_S._Sweat), a state representative for [Mississippi](https://en.wikipedia.org/wiki/Mississippi), on the subject of whether the state should legalize alcohol. Sweat's argument on prohibition was (to paraphrase):

*If, by whiskey, you mean the devil's brew that causes so many problems in society, then I'm against it. But if whiskey means the oil of conversation, the philosopher's wine, "the stimulating drink that puts the spring in the old gentleman's step on a frosty, crispy morning," then I am certainly for it.*
Slippery slope

This fallacy involves arguing against a position because you think choosing it would start a chain reaction of bad things, even though there's little evidence to support your claim. Examples:

- "We can't allow abortion because then society will lose its general respect for life, and it'll become harder to punish people for committing violent acts like murder."
- "We can't legalize gay marriage. If we do, what's next? Allowing people to marry cats and dogs?" (Some people actually made this [argument](https://www.daytondailynews.com/news/national/cats-marrying-dogs-and-five-other-things-same-sex-marriage-won-mean/dLV9jKqkJOWUFZrSBETWkK/) before same-sex marriage was legalized in the U.S.)

Of course, sometimes decisions *do* start a chain reaction, which could be bad. The slippery slope device only becomes a fallacy when there's no evidence to suggest that chain reaction would actually occur.

Language to watch out for: "If we do that, then what's next?"
"There is no alternative"<p><span style="background-color: initial;">A modification of the </span><a href="https://en.wikipedia.org/wiki/False_dilemma" target="_blank" style="background-color: initial;">false dilemma</a><span style="background-color: initial;">, this fallacy (often abbreviated to TINA) argues for a specific position because there are no realistic alternatives. Former British Prime Minister Margaret Thatcher used this exact line as a slogan to defend capitalism, and it's still used today to that same end: Sure, capitalism has its problems, but we've seen the horrors that occur when we try anything else, so there is no alternative.</span><br></p><p>Language to watch out for: "If I had a magic wand…" "What <em>else</em> are we going to do?!"</p>
Ad hoc arguments

An ad hoc argument isn't really a logical fallacy, but it is a fallacious rhetorical strategy that's common and often hard to spot. It occurs when someone's claim is threatened with counterevidence, so they come up with a rationale to dismiss the counterevidence, hoping to protect their original claim. Ad hoc claims aren't designed to be generalizable. Instead, they're typically invented in the moment. [RationalWiki](https://rationalwiki.org/wiki/Ad_hoc) provides an example:

Alice: "It is clearly said in the Bible that the Ark was 450 feet long, 75 feet wide and 45 feet high."

Bob: "A purely wooden vessel of that size could not be constructed; the largest real wooden vessels were Chinese treasure ships, which required iron hoops to build their keels. Even the *Wyoming*, which was built in 1909 and had iron braces, had problems with her hull flexing and opening up and needed constant mechanical pumping to stop her hold flooding."

Alice: "It's possible that God intervened and allowed the Ark to float, and since we don't know what gopher wood is, it is possible that it is a much stronger form of wood than any that comes from a modern tree."
Snow job

This fallacy occurs when someone doesn't really have a strong argument, so they just throw a bunch of irrelevant facts, numbers, anecdotes, and other information at the audience to confuse the issue, making it harder to refute the original claim. Example:

- A tobacco company spokesperson who is confronted about the health risks of smoking, but then proceeds to show graph after graph depicting many of the other ways people develop cancer, and how cancer metastasizes in the body, etc.

Watch out for long-winded, data-heavy arguments that seem confusing by design.
McNamara fallacy

Named after [Robert McNamara](https://en.wikipedia.org/wiki/Robert_McNamara), the [U.S. secretary of defense](https://en.wikipedia.org/wiki/United_States_Secretary_of_Defense) from 1961 to 1968, this fallacy occurs when decisions are made based solely on *quantitative metrics or observations*, ignoring other factors. It stems from the Vietnam War, in which McNamara sought to develop a formula to measure progress in the war. He decided on body count. But this "objective" formula didn't account for other important factors, such as the possibility that the Vietnamese people would never surrender.

You could also imagine this fallacy playing out in a medical situation. Imagine a terminal cancer patient has a tumor, and a certain procedure helps to reduce the size of the tumor but also causes a lot of pain. Ignoring quality of life would be an example of the McNamara fallacy.

Language to watch out for: "You can't measure that, so it's not important."
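A toy sketch of that medical scenario (with invented numbers) shows the trap: rank the options only on the measurable column, and the unmeasured one is silently treated as zero.

```python
# Invented numbers: (tumor_shrinkage_pct, quality_of_life out of 10)
treatments = {
    "aggressive procedure": (40, 2),
    "palliative care": (0, 8),
}

# Metric-only ranking: quality of life never enters the comparison.
metric_only_choice = max(treatments, key=lambda t: treatments[t][0])
print(metric_only_choice)  # -> aggressive procedure

# The fallacy isn't measuring; it's treating the unmeasured
# column as if it were zero.
```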
A new study looks at what would happen to human language on a long journey to other star systems.
- A new study proposes that language could change dramatically on long space voyages.
- Spacefaring people might lose the ability to understand the people of Earth.
- This scenario is of particular concern for potential "generation ships".
Generation Ships (video): https://www.youtube.com/embed/H2f0Wd3zNj0