Could A.I. detect mass shooters before they strike?
President Trump has called for Silicon Valley to develop digital precogs, but such systems raise efficacy concerns.

- President Donald Trump wants social media companies to develop A.I. that can flag potential mass shooters.
- Experts agree that artificial intelligence is not advanced enough, nor are current moderating systems up to the task.
- A majority of Americans support stricter gun laws, but such policies have yet to make headway.
On August 3, a man in El Paso, Texas, shot and killed 22 people and injured 24 others. Hours later, another man in Dayton, Ohio, shot and killed nine people, including his own sister. Even in a country left numb by countless mass shootings, the news was distressing and painful.
President Donald Trump soon addressed the nation to outline how his administration planned to tackle this uniquely American problem. Listeners hoping the tragedies might finally spur motivation for stricter gun control laws, such as universal background checks or restrictions on high-capacity magazines, were left disappointed.
Trump's plan was a ragbag of typical Republican talking points: red flag laws, mental health concerns, and regulation of violent video games. Tucked among them was an idea straight out of a Philip K. Dick novel.
"We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts," Trump said. "First, we must do a better job of identifying and acting on early warning signs. I am directing the Department of Justice to work in partnership with local, state and federal agencies as well as well as social media companies to develop tools that can detect mass shooters before they strike."
Basically, Trump wants digital precogs. But has artificial intelligence reached such grand, and potentially terrifying, heights?
A digitized state of mind

It's worth noting that A.I. has made impressive strides at reading and quantifying the human mind. Social media is a vast repository of data on how people feel and think. If we can suss out the internal from the performative, we could improve mental health care in the U.S. and abroad.
For example, a study from 2017 found that A.I. could read the predictive markers for depression in Instagram photos. Researchers tasked machine learning tools with analyzing data from 166 individuals, some of whom had been previously diagnosed with depression. The algorithms looked at filter choice, facial expressions, metadata tags, and other signals across 43,950 photos.
The results? The A.I. outperformed human practitioners at diagnosing depression. These results held even when analyzing images from before the patients' diagnoses. (Of course, Instagram is also the social media platform most likely to make you depressed and anxious, but that's another study.)
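The original researchers folded in filter choice, face detection, and posting metadata; as a rough illustration of how such a pipeline hangs together, here is a minimal Python sketch, not the study's actual code, that reduces each photo to a few color statistics and trains an off-the-shelf classifier. Every function name, feature choice, and model below is an assumption made for illustration.

```python
# Minimal sketch of the general approach, not the study's pipeline.
# Each photo is reduced to mean hue, saturation, and brightness, and a
# classifier then tries to separate users with a prior depression
# diagnosis from the rest. All names and choices are illustrative.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def photo_features(path):
    """Mean hue, saturation, and value (brightness) of one image."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

def user_features(photo_paths):
    """Average the per-photo features across everything a user posted."""
    return np.mean([photo_features(p) for p in photo_paths], axis=0)

def evaluate(photos_by_user, diagnosed):
    """photos_by_user: {user: [image paths]}; diagnosed: {user: 0 or 1}."""
    X = np.array([user_features(paths) for paths in photos_by_user.values()])
    y = np.array([diagnosed[user] for user in photos_by_user])
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

The point of the sketch is only to show how social media signals become ordinary inputs to a standard classifier; the comparison behind the "outperformed human practitioners" line above came from the study itself, not from anything this toy version could establish.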
Talking with Big Think, Eric Topol, a professor in the Department of Molecular Medicine at Scripps, called this the ability to "digitize our state of mind." In addition to the Instagram study, he pointed out that patients will share more with a self-chosen avatar than with a human psychiatrist.
"So when you take this ability to digitize a state of mind and also have a support through an avatar, this could turn out to be a really great way to deal with the problem we have today, which is a lack of mental health professionals with a very extensive burden of depression and other mental health conditions," Topol said.
Detecting mass shooters?
Tweet from Donald J. Trump: "....mentally ill or deranged people. I am the biggest Second Amendment person there is, but we all must work togeth…" (https://t.co/T9OthUAsXe)
However, it's not as simple as turning the A.I. dial from "depression" to "mass shooter." Machine learning tools have gotten excellent at analyzing images, but they lag behind the mind's ability to read language, intonation, and social cues.
As Facebook CEO Mark Zuckerberg said: "One of the pieces of criticism we get that I think is fair is that we're much better able to enforce our nudity policies, for example, than we are hate speech. The reason for that is it's much easier to make an A.I. system that can detect a nipple than it is to determine what is linguistically hate speech."
Trump should know this. During a House Homeland Security subcommittee hearing earlier this year, experts testified that A.I. was not a panacea for curing online extremism. Alex Stamos, Facebook's former chief security officer, likened the world's best A.I. to "a crowd of millions of preschoolers" and the task to demanding those preschoolers "get together to build the Taj Mahal."
None of this is to say that the problem is impossible, but it's certainly intractable.
Yes, we can create an A.I. that plays Go or analyzes stock performance better than any human. That's because we have a lot of data on these activities and they follow predictable input-output patterns. Yet even these "simple" algorithms require some of the brightest minds to develop.
Mass shooters, though far too common in the United States, are still statistically rare. We have played far more games of Go, analyzed far more stock trades, and diagnosed far more cases of depression, a condition millions of Americans struggle with. That abundance of data gives machine learning software enough examples to make accurate, responsible predictions, and even then the predictions aren't flawless.
Add to this that hate, extremism, and violence don't follow reliable input-output patterns, and you can see why experts are leery of Trump's direction to employ A.I. in the battle against terrorism.
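To see why rarity alone is such a barrier, consider a back-of-the-envelope calculation with entirely hypothetical numbers; the user count, base rate, and error rates below are assumptions for illustration, not measured figures. Even a screening model far more accurate than anything that exists today would bury the handful of genuine threats under a flood of false alarms.

```python
# Base-rate arithmetic with hypothetical numbers (not real statistics).
users = 250_000_000        # assumed number of people screened
true_shooters = 10         # assumed would-be shooters among them in a year

sensitivity = 0.99         # assumed: model flags 99% of true threats
specificity = 0.999        # assumed: model clears 99.9% of everyone else

true_positives = true_shooters * sensitivity
false_positives = (users - true_shooters) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"Flagged: {true_positives + false_positives:,.0f}")   # ~250,000 people
print(f"Actual threats among them: {true_positives:.0f}")    # ~10
print(f"Precision: {precision:.4%}")                         # ~0.004%
```

With those assumed numbers, roughly 25,000 innocent people would be flagged for every genuine threat, which is why researchers treat "detect shooters with A.I." as a false-positive problem as much as a detection problem.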
"As we psychological scientists have said repeatedly, the overwhelming majority of people with mental illness are not violent. And there is no single personality profile that can reliably predict who will resort to gun violence," Arthur C. Evans, CEO of the American Psychological Association, said in a release. "Based on the research, we know only that a history of violence is the single best predictor of who will commit future violence. And access to more guns, and deadlier guns, means more lives lost."
Social media can't protect us from ourselves
First Lady Melania Trump visits with the victims of the El Paso, Texas, shooting. Image source: Andrea Hanks / Flickr
One may wonder whether we could use current capabilities more aggressively. Unfortunately, social media moderating systems are a hodgepodge, built piecemeal over the last decade. They rely on a mixture of A.I., paid moderators, and community policing, and the result is an inconsistent system.
For example, the New York Times reported in 2017 that YouTube had removed thousands of videos using machine learning systems. The videos showed atrocities from the Syrian War, such as executions and people spouting Islamic State propaganda. The algorithm flagged and removed them as coming from extremist groups.
In truth, the videos had been posted by humanitarian organizations to document human rights violations. The machine couldn't tell the difference. YouTube reinstated some of the videos after users reported the issue, but mistakes at such a scale do not inspire confidence that today's moderating systems could accurately identify would-be mass shooters.
That's the conclusion reached in a report from the Partnership on A.I. (PAI). It argued there were "serious shortcomings" in using A.I. as a risk-assessment tool in U.S. criminal justice. Its writers cite three overarching concerns: accuracy and bias; questions of transparency and accountability; and issues with the interface between tools and people.
"Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data," the report states. "While formulas and statistical models provide some degree of consistency and replicability, they still share or amplify many weaknesses of human decision-making."
In addition to the above, there are practical barriers. The technical capabilities of law enforcement vary between locations. Social media platforms deal in massive amounts of traffic and data. And even when the red flags are self-evident — such as when shooters publish manifestos — they offer a narrow window in which to act.
The tools to reduce mass shootings
Protesters at March for Our Lives 2018 in San Francisco. Image source: Gregory Varnum / Wikimedia Commons
Artificial intelligence offers many advantages today and will offer more in the future. But as an answer to extremism and mass shootings, experts agree it's simply the wrong tool. That's the bad news. The good news is we have the tools we need already, and they can be implemented with readily available tech.
"Based on the psychological science, we know some of the steps we need to take. We need to limit civilians' access to assault weapons and high-capacity magazines. We need to institute universal background checks. And we should institute red flag laws that remove guns from people who are at high risk of committing violent acts," Evans wrote.
Evans isn't alone. Experts agree that the policies he suggests, and a few others, will reduce the likelihood of mass shootings. And six in 10 Americans already support these measures.
We don't need advanced A.I. to figure this out. There's only one developed country in the world where someone can legally and easily acquire an armory of guns, and it's the only developed country that suffers mass shootings with such regularity. It's simple arithmetic.
How New York's largest hospital system is predicting COVID-19 spikes
Northwell Health is using insights from website traffic to forecast COVID-19 hospitalizations two weeks in the future.
- The machine-learning algorithm works by analyzing the online behavior of visitors to the Northwell Health website and comparing that data to future COVID-19 hospitalizations (a rough sketch of the idea follows this list).
- The tool, which uses anonymized data, has so far predicted hospitalizations with an accuracy rate of 80 percent.
- Machine-learning tools are helping health-care professionals worldwide better constrain and treat COVID-19.
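As a rough illustration of the kind of model the first bullet describes, and emphatically not Northwell's actual implementation, here is a hedged Python sketch that regresses hospital admissions 14 days ahead on today's website-traffic signals. The column names and the choice of a plain ridge regression are assumptions for illustration.

```python
# Hedged sketch: forecast admissions two weeks out from web traffic.
# NOT Northwell's model; column names and Ridge choice are assumptions.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_percentage_error

HORIZON = 14  # forecast two weeks ahead

def fit_forecaster(df: pd.DataFrame):
    """df: one row per day, with hypothetical feature columns such as
    'symptom_page_views' and 'er_wait_lookups', plus the target column
    'covid_admissions' (daily COVID-19 hospital admissions)."""
    features = df.drop(columns=["covid_admissions"])
    # Pair today's web activity with admissions HORIZON days later.
    target = df["covid_admissions"].shift(-HORIZON).dropna()
    X = features.iloc[:-HORIZON]

    split = int(len(X) * 0.8)  # simple chronological train/test split
    model = Ridge().fit(X.iloc[:split], target.iloc[:split])

    mape = mean_absolute_percentage_error(
        target.iloc[split:], model.predict(X.iloc[split:])
    )
    return model, 1 - mape  # crude analogue of the reported "accuracy"
```

A real forecaster would need careful backtesting across multiple surges, but the core idea is the one the bullets describe: today's search and page-view behavior acts as a leading indicator of admissions two weeks out.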
The value of forecasting
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTA0Njk2OC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMzM2NDQzOH0.rid9regiDaKczCCKBsu7wrHkNQ64Vz_XcOEZIzAhzgM/img.jpg?width=980" id="2bb93" class="rm-shortcode" data-rm-shortcode-id="31345afbdf2bd408fd3e9f31520c445a" data-rm-shortcode-name="rebelmouse-image" data-width="1546" data-height="1056" />Northwell emergency departments use the dashboard to monitor in real time.
Credit: Northwell Health
One unique benefit of forecasting COVID-19 hospitalizations is that it allows health systems to better prepare, manage and allocate resources. For example, if the tool forecasted a surge in COVID-19 hospitalizations in two weeks, Northwell Health could begin:
- Making space for an influx of patients
- Moving personal protective equipment to where it's most needed
- Strategically allocating staff during the predicted surge
- Increasing the number of tests offered to asymptomatic patients

The health-care field is increasingly using machine learning. It's already helping doctors develop [personalized care plans for diabetes patients](https://care.diabetesjournals.org/content/early/2020/06/09/dc19-1870), improving cancer screening techniques, and enabling mental health professionals to better predict which patients are at [elevated risk of suicide](https://healthitanalytics.com/news/ehr-data-fuels-accurate-predictive-analytics-for-suicide-risk), to name a few applications.

Health systems around the world have already begun exploring how [machine learning can help battle the pandemic](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7315944/), including better COVID-19 screening, diagnosis, contact tracing, and drug and vaccine development.

Cruzen said these kinds of tools represent a shift in how health systems can tackle a wide variety of problems.

"Health care has always used the past to predict the future, but not in this mathematical way," Cruzen said. "I think [Northwell Health's new predictive tool] really is a great first example of how we should be attacking a lot of things as we go forward."

Making machine-learning tools openly accessible
Northwell Health has made its predictive tool [available for free](https://github.com/northwell-health/covid-web-data-predictor) to any health system that wishes to utilize it.

"COVID is everybody's problem, and I think developing tools that can be used to help others is sort of why people go into health care," Dr. Cruzen said. "It was really consistent with our mission."

Open collaboration is something the world's governments and health systems should be striving for during the pandemic, said Michael Dowling, Northwell Health's president and CEO.

"Whenever you develop anything and somebody else gets it, they improve it and they continue to make it better," Dowling said. "As a country, we lack data. I believe very, very strongly that we should have been and should be now working with other countries, including China, including the European Union, including England and others to figure out how to develop a health surveillance system so you can anticipate way in advance when these things are going to occur."

In all, Northwell Health has treated more than 112,000 COVID patients. During the pandemic, Dowling said he's seen an outpouring of goodwill, collaboration, and sacrifice from the community and the tens of thousands of staff who work across Northwell.

"COVID has changed our perspective on everything—and not just those of us in health care, because it has disrupted everybody's life," Dowling said. "It has demonstrated the value of community, how we help one another."

Designer uses AI to bring 54 Roman emperors to life
It's hard to stop looking back and forth between these faces and the busts they came from.
Meet Emperors Augustus, left, and Maximinus Thrax, right
- A quarantine project gone wild produces the possibly realistic faces of ancient Roman rulers.
- A designer worked with a machine learning app to produce the images.
- It's impossible to know if they're accurate, but they sure look plausible.
How the Roman emperors got faced
<a href="https://payload.cargocollective.com/1/6/201108/14127595/2K-ENGLISH-24x36-Educational_v8_WATERMARKED_2000.jpg" ><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDQ2NDk2MS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyOTUzMzIxMX0.OwHMrgKu4pzu0eCsmOUjybdkTcSlJpL_uWDCF2djRfc/img.jpg?width=980" id="775ca" class="rm-shortcode" data-rm-shortcode-id="436000b6976931b8320313478c624c82" data-rm-shortcode-name="rebelmouse-image" alt="lineup of emperor faces" data-width="1440" data-height="963" /></a>Credit: Daniel Voshart
Voshart's imaginings began with an AI/neural-net program called [Artbreeder](https://www.artbreeder.com). The freemium online app intelligently generates new images from existing ones and can combine multiple images into…well, who knows. It's addictive — people have so far used it to generate nearly 72.7 million images, says the site — and it's easy to see how Voshart fell down the rabbit hole.

The Roman emperor project began with Voshart feeding Artbreeder images of 800 busts. Obviously, not all busts have weathered the centuries equally. Voshart told [Live Science](https://www.livescience.com/ai-roman-emperor-portraits.html), "There is a rule of thumb in computer programming called 'garbage in garbage out,' and it applies to Artbreeder. A well-lit, well-sculpted bust with little damage and standard face features is going to be quite easy to get a result." Fortunately, there were multiple busts for some of the emperors, and different angles of busts captured in different photographs.

For the renderings Artbreeder produced, each face required some 15-16 hours of additional input from Voshart, who was left to deduce or guess such details as hair and skin coloring, though in many cases, an individual's features suggested likely pigmentations. Voshart was also aided by written descriptions of some of the rulers.

There's no way to know for sure how frequently Voshart's guesses hit their marks. It is obviously the case, though, that his interpretations look incredibly plausible when you compare one of his emperors to the sculpture(s) from which it was derived.

For an in-depth description of Voshart's process, check out his posts on [Medium](https://medium.com/@voshart/photoreal-roman-emperor-project-236be7f06c8f) or on his [website](https://voshart.com/ROMAN-EMPEROR-PROJECT).

It's fascinating to feel like you're face-to-face with these ancient and sometimes notorious figures. Here are two examples, along with some of what we think we know about the men behind the faces.

Caligula
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDQ2NDk4Mi9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY3MzQ1NTE5NX0.LiTmhPQlygl9Fa9lxay8PFPCSqShv4ELxbBRFkOW_qM/img.jpg?width=980" id="7bae0" class="rm-shortcode" data-rm-shortcode-id="ce795c554490fe0a36a8714b86f55b16" data-rm-shortcode-name="rebelmouse-image" data-width="992" data-height="558" />One of numerous sculptures of Caligula, left
Credit: Rogers Fund, 1914/Wikimedia Commons/Daniel Voshart
[Caligula](https://en.wikipedia.org/wiki/Caligula) was the third Roman Emperor, ruling the empire from AD 37 to 41. His name was actually Gaius Caesar Augustus Germanicus — Caligula is a nickname meaning "Little Boot."

One of the reputed great madmen of history, he was said to have made a horse his consul, to have held conversations with the moon, and to have ravaged his way through his kingdom, including his three sisters. Caligula was known for extreme cruelty and for terrorizing his subjects, and accounts suggest he would deliberately distort his face to surprise and frighten people he wished to intimidate.

It's [not totally clear](https://www.history.com/news/7-things-you-may-not-know-about-caligula) if Caligula was as over-the-top as history paints him, but that hasn't stopped Hollywood from churning out some [howlers](https://www.imdb.com/title/tt0080491/) in his name.

A 1928 article in [Studies in Philology](https://www.jstor.org/stable/4172009) noted that contemporary descriptions of Caligula depicted him as having a "head misshapen, eyes and temples sunken," and "eyes staring and with a glare savage enough to torture." In some sculptures not shown above, his head *is* a bit acorn-shaped.

Nero
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDQ2NTAwMC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY1NTQ2ODU0NX0.AgYuQZzRQCanqehSI5UeakpxU8fwLagMc_POH7xB3-M/img.jpg?width=980" id="a8825" class="rm-shortcode" data-rm-shortcode-id="9e0593d79c591c97af4bd70f3423885e" data-rm-shortcode-name="rebelmouse-image" data-width="992" data-height="558" />One of numerous sculptures of Nero, left
Credit: Bibi_Saint-Pol/Wikimedia Commons/Daniel Voshart
There's a good German word for the face of [Nero](https://en.wikipedia.org/wiki/Nero), that guy famous for fiddling as Rome burned. It's "[backpfeifengesicht](https://www.urbandictionary.com/define.php?term=Backpfeifengesicht)." Properly named Nero Claudius Caesar Augustus Germanicus, he was Rome's fifth emperor. He ruled from AD 54 until his suicide in AD 68.

Another Germanicus-family gem, Nero is said to have murdered his own mother, Agrippina the Younger, as well as (maybe) his second wife. As for the fiddling, he *was* a lover of music and the arts, and there are stories of his charitability. And, oh yeah, he may have set the fire as an excuse to rebuild the city center, making it his own.

While it may not be the most historically sound means of assessing a historical personage, Voshart's imagining of Nero does suggest an over-indulged, entitled young man. Backpfeifengesicht.

Dark matter axions possibly found near Magnificent 7 neutron stars
A new study proposes that mysterious axions may be found in X-rays coming from a nearby group of neutron stars.
A rendering of the XMM-Newton (X-ray multi-mirror mission) space telescope.
Are Axions Dark Matter?
<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="5e35ce24a5b17102bfce5ae6aecc7c14"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/e7yXqF32Yvw?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span>Put on a happy face? “Deep acting” associated with improved work life
New research suggests you can't fake your emotional state to improve your work life — you have to feel it.
What is deep acting?
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTQ1NDk2OS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxNTY5MzA0Nn0._s7aP25Es1CInq51pbzGrUj3GtOIRWBHZxCBFnbyXY8/img.jpg?width=1245&coordinates=333%2C-1%2C333%2C-1&height=700" id="ddf09" class="rm-shortcode" data-rm-shortcode-id="9dc42c4d6a8e372ad7b72907b46ecd3f" data-rm-shortcode-name="rebelmouse-image" data-width="1245" data-height="700" />Arlie Russell Hochschild (pictured) laid out the concept of emotional labor in her 1983 book, "The Managed Heart."
Credit: Wikimedia Commons
Deep and surface acting are the principal components of emotional labor, a buzz phrase you have likely seen flitting about the Twittersphere. Today, "[emotional labor](https://www.bbc.co.uk/bbcthree/article/5ea9f140-f722-4214-bb57-8b84f9418a7e)" has been adopted by groups as diverse as family counselors, academic feminists, and corporate CEOs, and each has redefined it with a patented spin. But while the phrase has splintered into a smorgasbord of pop-psychological arguments, its initial usage was more specific.

First coined by sociologist Arlie Russell Hochschild in her 1983 book, "[The Managed Heart](https://www.ucpress.edu/book/9780520272941/the-managed-heart)," emotional labor describes the work we do to regulate our emotions on the job. Hochschild's go-to example is the flight attendant, who is tasked with being "nicer than natural" to enhance the customer experience. While at work, flight attendants are expected to smile and be exceedingly helpful even if they are wrestling with personal issues, the passengers are rude, and that one kid just upchucked down the center aisle. Hochschild's counterpart to the flight attendant is the bill collector, who must instead be "nastier than natural."

Such personas may serve an organization's mission or commercial interests, but if they cause emotional dissonance, they can potentially lead to high emotional costs for the employee—bringing us back to deep and surface acting.

Deep acting is the process by which people modify their emotions to match their expected role. Deep actors still encounter the negative emotions, but they devise ways to [regulate those emotions](http://www.selfinjury.bctr.cornell.edu/perch/resources/what-is-emotion-regulationsinfo-brief.pdf) and return to the desired state. Flight attendants may modify their internal state by talking through harsh emotions (say, with a coworker), focusing on life's benefits (next stop Paris!), physically expressing their desired emotion (smiling and deep breaths), or recontextualizing an inauspicious situation (not the kid's fault he got sick).

Conversely, surface acting occurs when employees display ersatz emotions to match those expected by their role. These actors are the waiters who smile despite being crushed by the stress of a dinner rush. They are the CEOs who wear a confident swagger despite feelings of inauthenticity. And they are the bouncers who must maintain a steely edge despite humming show tunes in their heart of hearts.

As we'll see in the research, surface acting can degrade our mental well-being. This deterioration can be especially true of people who must contend with negative emotions or situations inside while displaying an elated mood outside. Hochschild argues such emotional labor can lead to exhaustion and self-estrangement—that is, surface actors erect a bulwark against anger, fear, and stress, but that disconnect estranges them from the emotions that allow them to connect with others and live fulfilling lives.

Don't fake it till you make it
Most studies on emotional labor have focused on customer service for the obvious reason that such jobs prescribe emotional states—service with a smile or, if you're in the bouncing business, a scowl. But [Allison Gabriel](https://eller.arizona.edu/people/allison-s-gabriel), associate professor of management and organizations at the University of Arizona's Eller College of Management, wanted to explore how employees used emotional labor strategies in their intra-office interactions and which strategies proved most beneficial.

"What we wanted to know is whether people choose to engage in emotion regulation when interacting with their co-workers, why they choose to regulate their emotions if there is no formal rule requiring them to do so, and what benefits, if any, they get out of this effort," Gabriel said in [a press release](https://www.sciencedaily.com/releases/2020/01/200117162703.htm).

Across three studies, she and her colleagues surveyed more than 2,500 full-time employees on their emotional regulation with coworkers. The survey asked participants to agree or disagree with statements such as "I try to experience the emotions that I show to my coworkers" or "I fake a good mood when interacting with my coworkers." Other statements gauged the outcomes of such strategies—for example, "I feel emotionally drained at work." Participants were drawn from industries as varied as education, engineering, and financial services.

The results, [published in the Journal of Applied Psychology](https://psycnet.apa.org/doiLanding?doi=10.1037%2Fapl0000473), revealed four different emotional strategies. "Deep actors" engaged in high levels of deep acting; "low actors" leaned more heavily on surface acting. Meanwhile, "non-actors" engaged in negligible amounts of emotional labor, while "regulators" switched between both. The survey also revealed two drivers for such strategies: prosocial and impression management motives. The former aimed to cultivate positive relationships, the latter to present a positive front.

The researchers found deep actors were driven by prosocial motives and enjoyed advantages from their strategy of choice. These actors reported lower levels of fatigue, fewer feelings of inauthenticity, improved coworker trust, and advanced progress toward career goals.

As Gabriel told [PsyPost in an interview](https://www.psypost.org/2021/01/new-psychology-research-suggests-deep-acting-can-reduce-fatigue-and-improve-your-work-life-59081): "So, it's a win-win-win in terms of feeling good, performing well, and having positive coworker interactions."

Non-actors did not report the emotional exhaustion of their low-actor peers, but they also didn't enjoy the social gains of the deep actors. Finally, the regulators showed that the flip-flopping between surface and deep acting drained emotional reserves and strained office relationships.

"I think the 'fake it until you make it' idea suggests a survival tactic at work," Gabriel noted. "Maybe plastering on a smile to simply get out of an interaction is easier in the short run, but long term, it will undermine efforts to improve your health and the relationships you have at work.

"It all boils down to, 'Let's be nice to each other.' Not only will people feel better, but people's performance and social relationships can also improve."

You'll be glad ya' decided to smile
<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="88a0a6a8d1c1abfcf7b1aca8e71247c6"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/QOSgpq9EGSw?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><p>But as with any research that relies on self-reported data, there are confounders here to untangle. Even during anonymous studies, participants may select socially acceptable answers over honest ones. They may further interpret their goal progress and coworker interactions more favorably than is accurate. And certain work conditions may not produce the same effects, such as toxic work environments or those that require employees to project negative emotions.</p><p>There also remains the question of the causal mechanism. If surface acting—or switching between surface and deep acting—is more mentally taxing than genuinely feeling an emotion, then what physiological process causes this fatigue? <a href="https://www.frontiersin.org/articles/10.3389/fnhum.2019.00151/full" target="_blank">One study published in the <em>Frontiers in Human Neuroscience</em></a><em> </em>measured hemoglobin density in participants' brains using an fNIRS while they expressed emotions facially. The researchers found no significant difference in energy consumed in the prefrontal cortex by those asked to deep act or surface act (though, this study too is limited by a lack of real-life task).<br></p><p>With that said, Gabriel's studies reinforce much of the current research on emotional labor. <a href="https://journals.sagepub.com/doi/abs/10.1177/2041386611417746" target="_blank">A 2011 meta-analysis</a> found that "discordant emotional labor states" (read: surface acting) were associated with harmful effects on well-being and performance. The analysis found no such consequences for deep acting. <a href="https://doi.apa.org/doiLanding?doi=10.1037%2Fa0022876" target="_blank" rel="noopener noreferrer">Another meta-analysis</a> found an association between surface acting and impaired well-being, job attitudes, and performance outcomes. Conversely, deep acting was associated with improved emotional performance.</p><p>So, although there's still much to learn on the emotional labor front, it seems Van Dyke's advice to a Leigh was half correct. We should put on a happy face, but it will <a href="https://bigthink.com/design-for-good/everything-you-should-know-about-happiness-in-one-infographic" target="_self">only help if we can feel it</a>.</p>World's oldest work of art found in a hidden Indonesian valley
Archaeologists discover a cave painting of a wild pig that is now the world's oldest dated work of representational art.
