Is the chance to text with the dead via AI creepy or wonderful?
An effort is underway in the AI community to develop posthumous avatars that can ease the pain of mourning. Not everyone thinks this is a good idea.
The human mind isn't good at grasping certain very big things, perhaps chief among them infinity. The idea of infinite space is a head-scratcher, but the idea of infinite time can be truly mind-busting, especially when one is trying to get a grip on death, one's own or someone else's. What does it mean, as many of us believe will be our fate, not to exist forever? When someone dies, it's this aspect of their passing that seems so unreal, almost impossible. How can someone who was just here be so completely gone? And forever? The process of acquiring enough acceptance to return to life and even imagine joy again is often a painful, protracted one. Different cultures through the ages have developed their own ways of grieving, and now technologists are working on a new one for our time, or for a time a little ways off in the future: griefbots.
Chatting with the dead
We spend so much of our time feeding social media and each other with our thoughts, photos, favorite memes, and so on, that each of us leaves quite the digital footprint behind. This is especially true of Millennials. It all amounts to a lot of information about us — or at least about who we pretend to be on social media. A number of AI experts and programmers think it's nearly enough info to construct digital replicas of ourselves that can convincingly chat with our friends and loved ones after we're gone.
Hossein Rahnama of Ryerson University says the sweet spot of having enough data to really pull this off is around a zettabyte, telling Quartz, “Fifty or 60 years from now, Millennials will have reached a point in their lives where they each will have collected zettabytes [1 trillion gigabytes] of data, which is just what is needed to create a digital version of yourself." How much of yourself is an interesting question, though, with Rahnama cautioning that a zettabyte is also just about the threshold at which a simulation would be capable of revealing a bit too much: “We have to consider an individual's privacy when it comes to passing on virtual profiles. You should be able to own your data and only pass it along to people you trust, so allowing people to engage with their own ancestors would be likely."
Still, there are a number of programmers working on developing believable chatbots, motivated not just by their own desire to continue interacting with the departed, but also by the wish to share those lost with others who never got to know them while they were alive, as Muhammad Ahmad is doing for his children by programming a bot of his late father.
The idea is for the bots to do more than merely play back — in text, audio, or synthesized speech — things once written or said by the deceased. Instead, the goal is for machine-learning algorithms to learn from the data left behind what the person was like and how they communicated, in order to create a digital avatar that's identical to the original person. These avatars could even conceivably keep up with current events, allowing the living to continue having brand-new conversations that they theoretically might have had with the dead. Of course, there are hazards to this. As Pamela Rutledge of the Media Psychology Research Center tells The Daily Beast, “What you don't want is people taking advice from a bot."
We're still a ways off from truly convincing bots, though, and even when such a thing is possible, the AI “person" will not be the original one, no matter how seemingly identical it is. Should a time arrive when we physically merge with machines, of course, things may not be so simple.
There's some disagreement as to whether AI simulacra would make grieving less painful or make it worse by prolonging the recovery process. Certainly, an AI replica of a loved one does nothing to help us answer the big questions mentioned above — the originals are as gone as they'll ever be.
Some grief experts feel that we'd be better off facing the pain and confusion that accompany a death directly. Psychologist Ernest Becker, the author of The Denial of Death, was concerned that an AI doppelgänger could interfere with an important corner that needs to be turned during grieving, saying, “People will always continue to mourn, but at a certain point people remember instead of relive." (Our emphasis.) Getting too attached to what he calls a “projection of memories" could leave one stuck in a sad, irresolvable place.
Others suggest a bot could help a survivor transition into acceptance gently, essentially weaning themselves from the departed's presence in their lives. But would you be comforted by a little more time with something like the person who's just died? And would you find a bot texting you “I'm alright," or “I miss you, too," to be comforting or a stabbing reminder of your loss? Eugenia Kuyda, a pioneer in this field, likes it when the bot she's developed of her late friend Roman Mazurenko reassures her.
Some say that a griefbot could help survivors through their loss by giving them a comforting way to share their pain. Grief counselor Andrea Warnick tells Quartz, “In modern society, many people are hesitant to talk about someone who has died for fear of upsetting those who are grieving — so perhaps the importance of continuing to share stories and advice from someone who has died is something that we humans can learn from chatbots."
In Season 2 of the Netflix series Black Mirror, a new widow invites first a chatbot simulation of her husband, then a voice bot, and then finally a full-sized — and apparently fully functional, ahem — android into her life. “Be Right Back" is a haunting episode that captures what this experience may indeed someday be like. Rutledge's concern is pretty much what happens to the widow: “If you have a lot of contact with something… [but] you don't have this awareness that [para-social relationships] can happen, you might end up with a relationship that actually keeps you from grieving the loss of that person."
The missing piece of the puzzle
Here's a problem. What we miss about the dead, as much as their traits, manner, and sense of humor — to name just three attributes — is how they made us feel because of how they felt about us. Considering this, for all of our genius at data collection, machine learning, and AI, until a time arrives when AI can truly feel, any such simulated relationship will emanate from an ice-cold core, empty of the most important ingredient in a close relationship: love.
From "if-by-whiskey" to the McNamara fallacy, being able to spot logical missteps is an invaluable skill.
- A fallacy is the use of invalid or faulty reasoning in an argument.
- There are two broad types of logical fallacies: formal and informal.
- A formal fallacy describes a flaw in the construction of a deductive argument, while an informal fallacy describes an error in reasoning.
Appeal to privacy<p>When someone behaves in a way that negatively affects (or could affect) others, but then gets upset when others criticize their behavior, they're likely engaging in the appeal to privacy — or "mind your own business" — fallacy. Examples:<br></p><ul><li>Someone who speeds excessively on the highway, considering his driving to be his own business.</li><li>Someone who doesn't see a reason to bathe or wear deodorant, but then boards a packed 10-hour flight.</li></ul><p>Language to watch out for: "You're not the boss of me." "Worry about yourself."</p>
Sunk cost fallacy<p>When someone argues for continuing a course of action despite evidence showing it's a mistake, it's often a sunk cost fallacy. The flawed logic here is something like: "We've already invested so much in this plan, we can't give up now." Examples:<br></p><ul><li>Someone who intentionally overeats at an all-you-can-eat buffet just to get their "money's worth"</li><li>A scientist who won't admit his theory is incorrect because it would be too painful or costly</li></ul><p>Language to watch out for: "We must stay the course." "I've already invested so much...." "We've always done it this way, so we'll keep doing it this way."</p>
If-by-whiskey<p>This fallacy is named after a speech given in 1952 by <a href="https://en.wikipedia.org/wiki/Noah_S._Sweat" target="_blank">Noah S. "Soggy" Sweat, Jr.</a>, a state representative for <a href="https://en.wikipedia.org/wiki/Mississippi" target="_blank">Mississippi</a>, on the subject of whether the state should legalize alcohol. Sweat's argument on prohibition was (to paraphrase):<br></p><p><em>If, by whiskey, you mean the devil's brew that causes so many problems in society, then I'm against it. But if whiskey means the oil of conversation, the philosopher's wine, "the stimulating drink that puts the spring in the old gentleman's step on a frosty, crispy morning," then I am certainly for it.</em></p>
Slippery slope<p>This fallacy involves arguing against a position because you think choosing it would start a chain reaction of bad things, even though there's little evidence to support your claim. Example:<br></p><ul><li>"We can't allow abortion because then society will lose its general respect for life, and it'll become harder to punish people for committing violent acts like murder."</li><li>"We can't legalize gay marriage. If we do, what's next? Allowing people to marry cats and dogs?" (Some people actually made this <a href="https://www.daytondailynews.com/news/national/cats-marrying-dogs-and-five-other-things-same-sex-marriage-won-mean/dLV9jKqkJOWUFZrSBETWkK/" target="_blank">argument</a> before same-sex marriage was legalized in the U.S.)</li></ul><p>Of course, sometimes decisions <em>do </em>start a chain reaction, which could be bad. The slippery slope device only becomes a fallacy when there's no evidence to suggest that chain reaction would actually occur.</p><p>Language to watch out for: "If we do that, then what's next?"</p>
"There is no alternative"<p>A modification of the <a href="https://en.wikipedia.org/wiki/False_dilemma" target="_blank">false dilemma</a>, this fallacy (often abbreviated to TINA) argues for a specific position because there are no realistic alternatives. Former British Prime Minister Margaret Thatcher used this exact line as a slogan to defend capitalism, and it's still used today to that same end: Sure, capitalism has its problems, but we've seen the horrors that occur when we try anything else, so there is no alternative.<br></p><p>Language to watch out for: "If I had a magic wand…" "What <em>else</em> are we going to do?!"</p>
Ad hoc arguments<p>An ad hoc argument isn't really a logical fallacy, but it is a fallacious rhetorical strategy that's common and often hard to spot. It occurs when someone's claim is threatened with counterevidence, so they come up with a rationale to dismiss the counterevidence, hoping to protect their original claim. Ad hoc claims aren't designed to be generalizable. Instead, they're typically invented in the moment. <a href="https://rationalwiki.org/wiki/Ad_hoc" target="_blank">RationalWiki</a> provides an example:<br></p><p style="margin-left: 20px;">Alice: "It is clearly said in the Bible that the Ark was 450 feet long, 75 feet wide and 45 feet high."</p><p style="margin-left: 20px;">Bob: "A purely wooden vessel of that size could not be constructed; the largest real wooden vessels were Chinese treasure ships which required iron hoops to build their keels. Even the <em>Wyoming</em> which was built in 1909 and had iron braces had problems with her hull flexing and opening up and needed constant mechanical pumping to stop her hold flooding."</p><p style="margin-left: 20px;">Alice: "It's possible that God intervened and allowed the Ark to float, and since we don't know what gopher wood is, it is possible that it is a much stronger form of wood than any that comes from a modern tree."</p>
Snow job<p>This fallacy occurs when someone doesn't really have a strong argument, so they just throw a bunch of irrelevant facts, numbers, anecdotes, and other information at the audience to confuse the issue, making it harder to refute the original claim. Example:<br></p><ul><li>A tobacco company spokesperson who is confronted about the health risks of smoking, but then proceeds to show graph after graph depicting many of the other ways people develop cancer, and how cancer metastasizes in the body, etc.</li></ul><p>Watch out for long-winded, data-heavy arguments that seem confusing by design.</p>
McNamara fallacy<p>Named after <a href="https://en.wikipedia.org/wiki/Robert_McNamara" target="_blank">Robert McNamara</a>, the <a href="https://en.wikipedia.org/wiki/United_States_Secretary_of_Defense" target="_blank">U.S. secretary of defense</a> from 1961 to 1968, this fallacy occurs when decisions are made based solely on <em>quantitative metrics or observations,</em> ignoring other factors. It stems from the Vietnam War, in which McNamara sought to develop a formula to measure progress in the war. He decided on body count. But this "objective" formula didn't account for other important factors, such as the possibility that the Vietnamese people would never surrender.<br></p><p>You could also imagine this fallacy playing out in a medical situation. Imagine a terminal cancer patient has a tumor, and a certain procedure helps to reduce the size of the tumor, but also causes a lot of pain. Ignoring quality of life would be an example of the McNamara fallacy.</p><p>Language to watch out for: "You can't measure that, so it's not important."</p>
A new study looks at what would happen to human language on a long journey to other star systems.
- A new study proposes that language could change dramatically on long space voyages.
- Spacefaring people might lose the ability to understand the people of Earth.
- This scenario is of particular concern for potential "generation ships".
Generation Ships<p>Video: <a href="https://www.youtube.com/embed/H2f0Wd3zNj0?rel=0" target="_blank">Generation Ships</a></p>