Are Alexa, Siri, and Their Pals Teaching Kids to Be Rude?
Are virtual assistants teaching children to be nasty?
Our lives are beginning to be, um, "populated" by virtual assistants like Amazon's Alexa, Google Assistant, Apple's Siri, and Microsoft's Cortana. For us adults, they're handy, if occasionally flawed, helpers to whom we say "thank you" upon the successful completion of a task, provided it hasn't required too many frustrating, failed attempts. It's a quaint impulse left over from traditional inter-human exchanges. Kids, though, are growing up with these things, and they may be learning a very different way of communicating. It would be fine if the lack of niceties — "please," "thank you," and the like — were confined to conversations with automatons, but neural pathways being as susceptible to training as they are, we have to wonder whether these habits will give rise to colder, less civil communication between people. Parents like Hunter Walk, writing on Medium, are wondering just what kind of little monsters we're creating.
Neuroscientists are in general agreement that when we repeat an action, we build a neural pathway for doing so; the more we repeat it, the more fixed the pathway. This is the reason that a mistake gets harder and harder to correct — we've in effect taught the brain to make the mistake.
So what happens when kids get used to not saying "please" and "thank you," and to being generally unconcerned with the feelings of those they speak to?
Of course, it's not as if an intelligent assistant cares how you talk to it. When Rebecca of the hilarious Mommyproof blog professed her love to Alexa, she got a few responses, including "I cannot answer that question" and "Aw. That's nice."
I told Siri I loved her three times and got these responses:
1. You hardly know me.
2. I value you. [I think I've been friend-zoned.]
3. Oh, stop.
Neither one says "I love you back." At least they don't lie. But they also present a model that's pretty unaffectionate, meaning there are no virtual hugs to support little kids' emotional needs.
And that's worrisome in its own way, since the borderline between alive and not-alive can be unclear to little children. Peter Kahn, a developmental psychologist at the University of Washington, studies human-robot interaction. He told Judith Shulevitz, writing for The New Republic, that even though kids understand that robots aren't human, they still see virtual personalities as being sort of alive. Kahn says, "we're creating a new category of being," a "personified non-animal semi-conscious half-agent." A child interacting with one of Kahn's robots told him, "He's like, he's half living, half not."
That nebulous status also threatens to make a virtual assistant something to practice bullying on. One parent, Avi Greengart, told Quartz, "I've found my kids pushing the virtual assistant further than they would push a human. [Alexa] never says 'That was rude' or 'I'm tired of you asking me the same question over and over again.'"
Virtual assistants do teach good diction, which is nice, but that's about it. And they serve up lots of info, some of which, at least, is appropriate. But we're just at the dawn of interacting with computers by voice, so there's still much to learn about what we're doing and about the long-term effects our virtual assistants will have.
Hm, Captain Picard never said "please" either: "Tea, Earl Grey, hot!"
"Deepfakes" and "cheap fakes" are becoming strikingly convincing — even ones generated on freely available apps.
- A writer named Magdalene Visaggio recently used FaceApp and Airbrush to generate convincing portraits of early U.S. presidents.
- "Deepfake" technology has improved drastically in recent years, and some countries are already experiencing how it can weaponized for political purposes.
- It's currently unknown whether it'll be possible to develop technology that can quickly and accurately determine whether a given video is real or fake.
The future of deepfakes<p>In 2018, Gabon's president Ali Bongo had been out of the country for months receiving medical treatment. After Bongo hadn't been seen in public for months, rumors began swirling about his condition. Some suggested Bongo might even be dead. In response, Bongo's administration released a video that seemed to show the president addressing the nation.</p><p>But the <a href="https://www.facebook.com/watch/?v=324528215059254" target="_blank">video</a> is strange, appearing choppy and blurry in parts. After political opponents declared the video to be a deepfake, Gabon's military attempted an unsuccessful coup. What's striking about the story is that, to this day, experts in the field of deepfakes can't conclusively verify whether the video was real. </p><p>The uncertainty and confusion generated by deepfakes poses a "global problem," according to a <a href="https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/#cancel" target="_blank">2020 report from The Brookings Institution</a>. In 2018, the U.S. Department of Defense released some of the first tools able to successfully detect deepfake videos. The problem, however, is that deepfake technology keeps improving, meaning forensic approaches may forever be one step behind the most sophisticated forms of deepfakes. </p><p>As the 2020 report noted, even if the private sector or governments create technology to identify deepfakes, they will:</p><p style="margin-left: 20px;">"...operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. "A lie can go halfway around the world before the truth can get its shoes on," warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes."</p>
Context is everything.
The COVID-19 pandemic has introduced a number of new behaviours into daily routines, like physical distancing, mask-wearing and hand sanitizing. Meanwhile, many old behaviours such as attending events, eating out and seeing friends have been put on hold.
A new study looks at how images of coffee's origins affect the perception of its premiumness and quality.
- Images can affect how people perceive the quality of a product.
- In a new study, researchers show using virtual reality that images of farms positively influence the subjects' experience of coffee.
- The results provide insights on the psychology and power of marketing.