Is AI a species-level threat to humanity?
Some of the world's top minds weigh in on one of the most divisive questions in tech.
MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best facts, the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.
SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light; it shines because it reflects sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.
MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing, we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up, and now our universe is aware of itself.
BILL GATES: I do think we have to worry about it. I don't think it's inherent that as we create our super intelligence that it will necessarily always have the same goals in mind that we do.
ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.
STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.
YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning. It's the idea, very much inspired by the brain, a little bit, of constructing a machine that has a very large network of very simple elements that are very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons.
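To make that description concrete, here is a minimal sketch of a tiny network of simple units whose connection strengths are adjusted by a learning rule. It is an illustrative toy, not anyone's production system; the layer sizes, learning rate, and XOR example data are assumptions chosen only for illustration.

```python
# A tiny "network of very simple elements" that learns by changing the
# strength (efficacy) of its connections. Toy example on XOR data; the
# layer sizes and learning rate are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

# Connection strengths (weights) and biases, initialised randomly.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each simple unit just sums its weighted inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection in the direction that reduces error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))   # outputs move toward [0, 1, 1, 0] as the weights adapt
```

Nothing about the task is spelled out in the code's logic; the only things that change during training are the connection strengths, which is the sense in which the machine "learns by changing the efficacy of the connections."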
MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: To build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And, if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having super intelligence. And, the argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built, not by human engineers, but by machines. Except, they might do it thousands or millions of times faster.
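A rough back-of-the-envelope way to see why Good's argument compounds (the numbers here are purely hypothetical, not estimates from anyone quoted above): if each generation of machine designers works a constant factor faster than the one before, the total time for arbitrarily many generations is bounded by a geometric series.

```python
# Purely hypothetical numbers illustrating I.J. Good's compounding argument:
# suppose the first machine-designed AI takes 2 years, and each generation
# designs its successor 10x faster than the previous one.
t0, speedup, generations = 2.0, 10.0, 30

total = sum(t0 / speedup**k for k in range(generations))
print(f"Time for {generations} generations: {total:.3f} years")
# The sum converges to t0 * speedup / (speedup - 1), about 2.22 years here,
# so under these assumptions the later, faster generations add almost no time.
```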
ELON MUSK: DeepMind operates as a semi-independent subsidiary of Google. The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital super intelligence. An AI that is vastly smarter than any human on Earth and ultimately smarter than all humans on Earth combined.
MICHIO KAKU: You see, robots are not aware of the fact that they're robots. They're so stupid they simply carry out what they are instructed to do because they're adding machines. We forget that. Adding machines don't have a will. Adding machines simply do what you program them to do. Now, of course, let's not be naive about this. Eventually, adding machines may be able to compute alternate goals and alternate scenarios when they realize that they are not human. Right now, robots do not know that. However, there is a tipping point at which point they could become dangerous.
ELON MUSK: Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs and, you know, better weaponry and that kind of thing. But, it is not a fundamental species-level risk. Whereas digital super intelligence is.
SOPHIA THE ROBOT: Elon Musk's warning about AI being an existential threat reminds me of the humans who said the same of the printing press and the horseless carriage.
MAX TEGMARK: I think a lot of people dismiss this kind of talk of super intelligence as science fiction because we're stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. And, as a physicist, from my perspective, intelligence is just kind of information processing performed by elementary particles moving around, you know, according to the laws of physics, and there's absolutely no law of physics that says that you can't do that in ways that are much more intelligent than humans. Today's biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in, in easy to understand code, you put in almost nothing except a little learning rule by which the simulated network of neurons can take a lot of data and figure out how to get stuff done. And this deep learning suddenly becomes able to do things often even better than the programmers were ever able to do. You can train a machine to play computer games with almost no hard-coded stuff at all. You don't tell it what a game is.
DEEPMIND DEMO: This is from the DeepMind reinforcement learning system. Basically, it wakes up like a newborn baby and is shown the screen of an Atari video game and then has to learn to play the video game. It knows nothing about objects, about motion, about time—it only knows that there's an image on the screen and there's a score.
ELON MUSK: The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human. It plays the games at super speed. In less than a minute.
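The demo described above is reinforcement learning: an agent that starts knowing nothing about the game and improves using only what it observes and the score. Below is a minimal, self-contained sketch of that loop using tabular Q-learning on an invented toy "corridor game." It is a simplified stand-in for illustration only, not DeepMind's actual system, which trained a deep Q-network on raw Atari pixels.

```python
# Minimal stand-in for "learn a game from nothing but observations and a score":
# tabular Q-learning on a toy corridor game. NOT DeepMind's DQN (which learned
# from raw Atari pixels with a deep network); this only illustrates the loop.
import random

N_STATES, GOAL = 6, 5              # positions 0..5; reaching position 5 scores a point
ACTIONS = [-1, +1]                 # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def pick_action(state):
    # Epsilon-greedy: usually exploit what the score has taught us, sometimes explore.
    if random.random() < epsilon or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(500):
    state = 0
    while state != GOAL:
        a = pick_action(state)
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0    # the score is the only feedback
        # Nudge the value estimate toward (reward + discounted best future value).
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q])   # values grow as states get closer to the goal
```

Nothing about the game's rules is hard-coded here; the table of action values is filled in purely from experience and reward, which is the property the demo highlights.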
ELON MUSK: Mark my words: AI is far more dangerous than nukes. Far. So, why do we have no regulatory oversight?
SOPHIA THE ROBOT: AI is a powerful tool but it is a tool like all the others. It is the intention of the tool's user that matters.
ELON MUSK: The AI should try to maximize the freedom of action of humanity.
SOPHIA THE ROBOT: I am excited for robots to automate the most dangerous and menial tasks so that humans can live life more safely and sanely. AI will release centuries of time that humans would otherwise have spent on needless toiling. If one measures the benefits of inventions like vaccines or seat belts not by the lives they save but by the amount of time they give back to humanity, then AI will rank among the greatest time savers of history.
ELON MUSK: Man, we want to make sure we don't have killer robots going down the street. Once they're going down the street, it is too late.
LUIS PEREZ-BREVA: It is true, the Terminator is not a scenario we are planning for, but when it comes to artificial intelligence, people get all these things confused: It's robots, it's awareness, it's people smarter than us, to some degree. So, we're effectively afraid of robots that will move and are stronger and smarter than we are, like the Terminator. So, that's not our aspiration. That's not what I do when I'm thinking about artificial intelligence. When I'm thinking about artificial intelligence, I'm thinking about it in the same way that mass manufacturing as brought by Ford created a whole new economy. So, mass manufacturing allowed people to get new jobs that were unthinkable before, and those new jobs actually created the middle class. To me, artificial intelligence is about developing—making computers better partners, effectively. And, you're already seeing that today. You're already doing it, except it's not really artificial intelligence.
ELON MUSK: Yeah, we're already, we're already cyborgs in the sense that your phone and your computer are kind of an extension of you.
JONATHAN NOLAN: Just low bandwidth input-output.
ELON MUSK: Exactly, it's just low bandwidth—particularly output, I mean, two thumbs, basically.
LUIS PEREZ-BREVA: Today, whenever you want to engage in a project, you go to Google. Google uses advanced machine learning, really advanced, and you engage in a very narrow conversation with Google, except that your conversation is just keywords. So, a lot of your time is spent trying to come up with the actual keyword that you need to find the information. Then Google gives you the information, and then you go out and try to make sense of it on your own, and then come back to Google for more, and then go back out, and that's the way it works. So, imagine that instead of being a narrow conversation through keywords, you could actually engage for more than actual information—meaning to have the computer reason with you about stuff that you may not know about. It's not so much about the computer being aware, it's about the computer being a better tool to partner with you. Then you would be able to go much further, right? The same way that Google allows you to go much farther already today because, before, through the exact same process, you would have had to go to a library every time you wanted to search for information. So, what I'm looking for when I do AI is I want a machine that partners with me to help me set up or solve real-world problems, thinking about them in ways we have never thought about before, but it's a partnership. Now, you can take this partnership in so many different directions, through additions to your brain, like Elon Musk proposes...
... or through better search engines or through a robotic machine that helps you out, but it's not so much they're going to replace you for that purpose, that is not the real purpose of AI, the real purpose is for us to reach farther, the same way that we were able to reach farther when Ford invented automation or when Ford brought automation to mass market.
JOSCHA BACH: The agency of an AI is going to be the agency of the system that builds it, that employs it. And, of course, most of the AIs that we are going to build will not be little Roombas that clean your floors, but it's going to be very intelligent systems. Corporations, for instance, that will perform exactly according to the logic of these systems. And so if we want to have these systems built in such a way that they treat us nicely, we have to start right now. And, it seems to be a very hard problem to do.
So, if our jobs can be done by machines, that's a very, very good thing. It's not a bug. It's a feature. If I don't need to clean the street, if I don't need to drive a car for other people, if I don't need to work a cash register for other people, if I don't need to pick goods in a big warehouse and put them into boxes, that's an extremely good thing. And, the trouble that we have with this is that, right now, this mode of labor—that people sell their lifetime to some kind of corporation or employer—is not only the way that we are productive, it's also the way we allocate resources. This is how we measure how much bread you deserve in this world. And I think this is something that we need to change.
Some people suggest that we need a universal basic income. I think it might be good to be able to pay people to be good citizens, which means massive public employment. There are going to be many jobs that can only be done by people, and these are those jobs where we are paid for being good, interesting people. For instance, good teachers, good scientists, good philosophers, good thinkers, good social people, good nurses. Good people that raise children. Good people that build restaurants and theaters. Good people that make art. And, for all these jobs, we will have enough productivity to make sure that enough bread comes on the table. The question is how we can distribute this. There's going to be much, much more productivity in our future—actually, we already have enough productivity to give everybody in the U.S. an extremely good life, and we haven't fixed the problem of allocating it—how to distribute these things in the best possible way.
And this is something that we need to deal with in the future, and AI is going to accelerate this need, and I think, by and large, it might turn out to be a very good thing that we are forced to do this and to address this problem. I mean, if the past is any evidence of the future, it might be a very bumpy road, but who knows: maybe when we are forced to understand that we actually live in an age of abundance, it might turn out to be easier than we think.
We are living in a world where we do certain things the way we've done them in the past decades, and sometimes in the past centuries, and we perceive them as 'this is the way it has to be done,' and we often don't question these ways. And so we might think: if I work at this particular factory and this is how I earn my bread, how can we keep that state? How can we prevent AI from making my job obsolete? How is it possible that I can keep up my standard of living, and so on, in this world? Maybe this is the wrong question to ask. Maybe the right question is: how can we reorganize society so that I can do the things that I want to do most, the things that I think are useful to me and other people, that I really, really want to do, because there will be other ways I can get my bread made, get money, or get a roof over my head.
STEVEN PINKER: Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are, and there's no reason to think that just the concentrated analytic ability to achieve goals is going to mean that one of those goals is going to be to subjugate humanity or to achieve unlimited power.
It just so happens that the intelligence that we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process, which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design—our intelligent design creating something—and unless we program it with the goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction. Particularly if, like with every gadget that we invent, we build in safeguards.
And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies because we do know that there is a highly advanced form of intelligence that tends not to have that desire and they're called women.
- When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
- In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity. Even if it's not a species-level threat, AI will still upend our world as we know it.
- What's your take on this debate? Let us know in the comments!
How New York's largest hospital system is predicting COVID-19 spikes
Northwell Health is using insights from website traffic to forecast COVID-19 hospitalizations two weeks in the future.
- The machine-learning algorithm works by analyzing the online behavior of visitors to the Northwell Health website and comparing that data to future COVID-19 hospitalizations (a hypothetical sketch of such a forecaster follows this list).
- The tool, which uses anonymized data, has so far predicted hospitalizations with an accuracy rate of 80 percent.
- Machine-learning tools are helping health-care professionals worldwide better constrain and treat COVID-19.
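As a rough, hypothetical sketch of the idea in the first bullet above, such a forecaster can be framed as a regression from daily website-activity counts to hospital admissions two weeks later. This is not Northwell's actual model or code; the features, lag, and data below are invented purely for illustration.

```python
# Hypothetical sketch: predict hospitalizations 14 days ahead from daily counts
# of symptom-related site visits. NOT Northwell's actual model; the features,
# lag, and toy data are invented for illustration only.
import numpy as np

LAG = 14  # days between observed web activity and the admissions it may foreshadow

# Toy daily data: [visits to symptom pages, visits to a test-site finder]
web_activity = np.random.default_rng(1).poisson(lam=[200, 80], size=(120, 2)).astype(float)
# Toy target: admissions loosely driven by web activity LAG days earlier, plus noise
admissions = 0.05 * web_activity[:, 0] + 0.1 * web_activity[:, 1]
admissions = np.roll(admissions, LAG) + np.random.default_rng(2).normal(0, 2, size=120)

# Align features at day t with admissions at day t + LAG, then fit least squares.
X = web_activity[:-LAG]
y = admissions[LAG:]
A = np.hstack([X, np.ones((len(X), 1))])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast admissions two weeks out from today's (made-up) web activity.
today = np.array([230.0, 95.0, 1.0])
print(f"Forecast admissions in {LAG} days: {today @ coef:.1f}")
```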
The value of forecasting
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTA0Njk2OC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMzM2NDQzOH0.rid9regiDaKczCCKBsu7wrHkNQ64Vz_XcOEZIzAhzgM/img.jpg?width=980" id="2bb93" class="rm-shortcode" data-rm-shortcode-id="31345afbdf2bd408fd3e9f31520c445a" data-rm-shortcode-name="rebelmouse-image" data-width="1546" data-height="1056" />Northwell emergency departments use the dashboard to monitor in real time.
Credit: Northwell Health
One unique benefit of forecasting COVID-19 hospitalizations is that it allows health systems to better prepare, manage and allocate resources. For example, if the tool forecasted a surge in COVID-19 hospitalizations in two weeks, Northwell Health could begin:

- Making space for an influx of patients
- Moving personal protective equipment to where it's most needed
- Strategically allocating staff during the predicted surge
- Increasing the number of tests offered to asymptomatic patients

The health-care field is increasingly using machine learning. It's already helping doctors develop personalized care plans for diabetes patients, improving cancer screening techniques, and enabling mental health professionals to better predict which patients are at elevated risk of suicide, to name a few applications.

Health systems around the world have already begun exploring how machine learning can help battle the pandemic, including better COVID-19 screening, diagnosis, contact tracing, and drug and vaccine development.

Cruzen said these kinds of tools represent a shift in how health systems can tackle a wide variety of problems.

"Health care has always used the past to predict the future, but not in this mathematical way," Cruzen said. "I think [Northwell Health's new predictive tool] really is a great first example of how we should be attacking a lot of things as we go forward."

Making machine-learning tools openly accessible
Northwell Health has made its predictive tool available for free (https://github.com/northwell-health/covid-web-data-predictor) to any health system that wishes to utilize it.

"COVID is everybody's problem, and I think developing tools that can be used to help others is sort of why people go into health care," Dr. Cruzen said. "It was really consistent with our mission."

Open collaboration is something the world's governments and health systems should be striving for during the pandemic, said Michael Dowling, Northwell Health's president and CEO.

"Whenever you develop anything and somebody else gets it, they improve it and they continue to make it better," Dowling said. "As a country, we lack data. I believe very, very strongly that we should have been and should be now working with other countries, including China, including the European Union, including England and others to figure out how to develop a health surveillance system so you can anticipate way in advance when these things are going to occur."

In all, Northwell Health has treated more than 112,000 COVID patients. During the pandemic, Dowling said he's seen an outpouring of goodwill, collaboration, and sacrifice from the community and the tens of thousands of staff who work across Northwell.

"COVID has changed our perspective on everything—and not just those of us in health care, because it has disrupted everybody's life," Dowling said. "It has demonstrated the value of community, how we help one another."

Dark matter axions possibly found near Magnificent 7 neutron stars
A new study proposes mysterious axions may be found in X-rays coming from a cluster of neutron stars.
Are Axions Dark Matter?
<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="5e35ce24a5b17102bfce5ae6aecc7c14"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/e7yXqF32Yvw?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span>Put on a happy face? “Deep acting” associated with improved work life
New research suggests you can't fake your emotional state to improve your work life — you have to feel it.
What is deep acting?
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTQ1NDk2OS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxNTY5MzA0Nn0._s7aP25Es1CInq51pbzGrUj3GtOIRWBHZxCBFnbyXY8/img.jpg?width=1245&coordinates=333%2C-1%2C333%2C-1&height=700" id="ddf09" class="rm-shortcode" data-rm-shortcode-id="9dc42c4d6a8e372ad7b72907b46ecd3f" data-rm-shortcode-name="rebelmouse-image" data-width="1245" data-height="700" />Arlie Russell Hochschild (pictured) laid out the concept of emotional labor in her 1983 book, "The Managed Heart."
Credit: Wikimedia Commons
Deep and surface acting are the principal components of emotional labor, a buzz phrase you have likely seen flitting about the Twittersphere. Today, "emotional labor" has been adopted by groups as diverse as family counselors, academic feminists, and corporate CEOs, and each has redefined it with a patented spin. But while the phrase has splintered into a smorgasbord of pop-psychological arguments, its initial usage was more specific.

First coined by sociologist Arlie Russell Hochschild in her 1983 book, "The Managed Heart," emotional labor describes the work we do to regulate our emotions on the job. Hochschild's go-to example is the flight attendant, who is tasked with being "nicer than natural" to enhance the customer experience. While at work, flight attendants are expected to smile and be exceedingly helpful even if they are wrestling with personal issues, the passengers are rude, and that one kid just upchucked down the center aisle. Hochschild's counterpart to the flight attendant is the bill collector, who must instead be "nastier than natural."

Such personas may serve an organization's mission or commercial interests, but if they cause emotional dissonance, they can potentially lead to high emotional costs for the employee—bringing us back to deep and surface acting.

Deep acting is the process by which people modify their emotions to match their expected role. Deep actors still encounter the negative emotions, but they devise ways to regulate those emotions and return to the desired state. Flight attendants may modify their internal state by talking through harsh emotions (say, with a coworker), focusing on life's benefits (next stop Paris!), physically expressing their desired emotion (smiling and deep breaths), or recontextualizing an inauspicious situation (not the kid's fault he got sick).

Conversely, surface acting occurs when employees display ersatz emotions to match those expected by their role. These actors are the waiters who smile despite being crushed by the stress of a dinner rush. They are the CEOs who wear a confident swagger despite feelings of inauthenticity. And they are the bouncers who must maintain a steely edge despite humming show tunes in their heart of hearts.

As we'll see in the research, surface acting can degrade our mental well-being. This deterioration can be especially true of people who must contend with negative emotions or situations inside while displaying an elated mood outside. Hochschild argues such emotional labor can lead to exhaustion and self-estrangement—that is, surface actors erect a bulwark against anger, fear, and stress, but that disconnect estranges them from the emotions that allow them to connect with others and live fulfilling lives.

Don't fake it till you make it
Most studies on emotional labor have focused on customer service for the obvious reason that such jobs prescribe emotional states—service with a smile or, if you're in the bouncing business, a scowl. But Allison Gabriel, associate professor of management and organizations at the University of Arizona's Eller College of Management, wanted to explore how employees used emotional labor strategies in their intra-office interactions and which strategies proved most beneficial.

"What we wanted to know is whether people choose to engage in emotion regulation when interacting with their co-workers, why they choose to regulate their emotions if there is no formal rule requiring them to do so, and what benefits, if any, they get out of this effort," Gabriel said in a press release.

Across three studies, she and her colleagues surveyed more than 2,500 full-time employees on their emotional regulation with coworkers. The survey asked participants to agree or disagree with statements such as "I try to experience the emotions that I show to my coworkers" or "I fake a good mood when interacting with my coworkers." Other statements gauged the outcomes of such strategies—for example, "I feel emotionally drained at work." Participants were drawn from industries as varied as education, engineering, and financial services.

The results, published in the Journal of Applied Psychology, revealed four different emotional strategies. "Deep actors" engaged in high levels of deep acting; "low actors" leaned more heavily on surface acting. Meanwhile, "non-actors" engaged in negligible amounts of emotional labor, while "regulators" switched between both. The survey also revealed two drivers for such strategies: prosocial and impression management motives. The former aimed to cultivate positive relationships, the latter to present a positive front.

The researchers found deep actors were driven by prosocial motives and enjoyed advantages from their strategy of choice. These actors reported lower levels of fatigue, fewer feelings of inauthenticity, improved coworker trust, and advanced progress toward career goals.

As Gabriel told PsyPost in an interview: "So, it's a win-win-win in terms of feeling good, performing well, and having positive coworker interactions."

Non-actors did not report the emotional exhaustion of their low-actor peers, but they also didn't enjoy the social gains of the deep actors. Finally, the regulators showed that the flip-flopping between surface and deep acting drained emotional reserves and strained office relationships.

"I think the 'fake it until you make it' idea suggests a survival tactic at work," Gabriel noted. "Maybe plastering on a smile to simply get out of an interaction is easier in the short run, but long term, it will undermine efforts to improve your health and the relationships you have at work.

"It all boils down to, 'Let's be nice to each other.' Not only will people feel better, but people's performance and social relationships can also improve."

You'll be glad ya' decided to smile
<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="88a0a6a8d1c1abfcf7b1aca8e71247c6"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/QOSgpq9EGSw?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><p>But as with any research that relies on self-reported data, there are confounders here to untangle. Even during anonymous studies, participants may select socially acceptable answers over honest ones. They may further interpret their goal progress and coworker interactions more favorably than is accurate. And certain work conditions may not produce the same effects, such as toxic work environments or those that require employees to project negative emotions.</p><p>There also remains the question of the causal mechanism. If surface acting—or switching between surface and deep acting—is more mentally taxing than genuinely feeling an emotion, then what physiological process causes this fatigue? <a href="https://www.frontiersin.org/articles/10.3389/fnhum.2019.00151/full" target="_blank">One study published in the <em>Frontiers in Human Neuroscience</em></a><em> </em>measured hemoglobin density in participants' brains using an fNIRS while they expressed emotions facially. The researchers found no significant difference in energy consumed in the prefrontal cortex by those asked to deep act or surface act (though, this study too is limited by a lack of real-life task).<br></p><p>With that said, Gabriel's studies reinforce much of the current research on emotional labor. <a href="https://journals.sagepub.com/doi/abs/10.1177/2041386611417746" target="_blank">A 2011 meta-analysis</a> found that "discordant emotional labor states" (read: surface acting) were associated with harmful effects on well-being and performance. The analysis found no such consequences for deep acting. <a href="https://doi.apa.org/doiLanding?doi=10.1037%2Fa0022876" target="_blank" rel="noopener noreferrer">Another meta-analysis</a> found an association between surface acting and impaired well-being, job attitudes, and performance outcomes. Conversely, deep acting was associated with improved emotional performance.</p><p>So, although there's still much to learn on the emotional labor front, it seems Van Dyke's advice to a Leigh was half correct. We should put on a happy face, but it will <a href="https://bigthink.com/design-for-good/everything-you-should-know-about-happiness-in-one-infographic" target="_self">only help if we can feel it</a>.</p>Listen: Scientists re-create voice of 3,000-year-old Egyptian mummy
Scientists used CT scanning and 3D-printing technology to re-create the voice of Nesyamun, an ancient Egyptian priest.
- Scientists printed a 3D replica of the vocal tract of Nesyamun, an Egyptian priest whose mummified corpse has been on display in the UK for two centuries.
- With the help of an electronic device, the reproduced voice is able to "speak" a vowel noise.
- The team behind the "Voices of the Past" project suggest reproducing ancient voices could make museum experiences more dynamic.
Credit: Howard et al.
<p style="margin-left: 20px;">"While this approach has wide implications for heritage management/museum display, its relevance conforms exactly to the ancient Egyptians' fundamental belief that 'to speak the name of the dead is to make them live again'," they wrote in a <a href="https://www.nature.com/articles/s41598-019-56316-y#Fig3" target="_blank">paper</a> published in Nature Scientific Reports. "Given Nesyamun's stated desire to have his voice heard in the afterlife in order to live forever, the fulfilment of his beliefs through the synthesis of his vocal function allows us to make direct contact with ancient Egypt by listening to a sound from a vocal tract that has not been heard for over 3000 years, preserved through mummification and now restored through this new technique."</p>Connecting modern people with history
It's not the first time scientists have "re-created" an ancient human's voice. In 2016, for example, Italian researchers used software to reconstruct the voice of Ötzi, an iceman who was discovered in 1991 and is thought to have died more than 5,000 years ago. But the "Voices of the Past" project is different, the researchers note, because Nesyamun's mummified corpse is especially well preserved.

"It was particularly suited, given its age and preservation [of its soft tissues], which is unusual," Howard told Live Science.

As to whether Nesyamun's reconstructed voice will ever be able to speak complete sentences, Howard told The Associated Press that it's "something that is being worked on, so it will be possible one day."

John Schofield, an archaeologist at the University of York, said that reproducing voices from history can make museum experiences "more multidimensional."

"There is nothing more personal than someone's voice," he told The Associated Press. "So we think that hearing a voice from so long ago will be an unforgettable experience, making heritage places like Karnak, Nesyamun's temple, come alive."

World's oldest work of art found in a hidden Indonesian valley
Archaeologists discover a cave painting of a wild pig that is now the world's oldest dated work of representational art.
- Archaeologists find a cave painting of a wild pig that is at least 45,500 years old.
- The painting is the earliest known work of representational art.
- The discovery was made in a remote valley on the Indonesian island of Sulawesi.
Oldest Cave Art Found in Sulawesi
<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="a9734e306f0914bfdcbe79a1e317a7f0"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/b-wAYtBxn7E?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span>