Could A.I. detect mass shooters before they strike?
President Trump has called for Silicon Valley to develop digital precogs, but such systems raise efficacy concerns.

- President Donald Trump wants social media companies to develop A.I. that can flag potential mass shooters.
- Experts agree that artificial intelligence is not advanced enough for the job, and current moderating systems aren't up to the task either.
- A majority of Americans support stricter gun laws, but such policies have yet to make headway.
On August 3, a man in El Paso, Texas, shot and killed 22 people and injured 24 others. Hours later, another man in Dayton, Ohio, shot and killed nine people, including his own sister. Even in a country left numb by countless mass shootings, the news was distressing and painful.
President Donald Trump soon addressed the nation to outline how his administration planned to tackle this uniquely American problem. Listeners hoping the tragedies might finally spur motivation for stricter gun control laws, such as universal background checks or restrictions on high-capacity magazines, were left disappointed.
Trump's plan was a ragbag of typical Republican talking points: red flag laws, mental health concerns, and regulation on violent video games. Tucked among them was an idea straight out of a Philip K. Dick novel.
"We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts," Trump said. "First, we must do a better job of identifying and acting on early warning signs. I am directing the Department of Justice to work in partnership with local, state and federal agencies as well as well as social media companies to develop tools that can detect mass shooters before they strike."
Basically, Trump wants digital precogs. But has artificial intelligence reached such grand, and potentially terrifying, heights?
A digitized state of mind

It's worth noting that A.I. has made impressive strides at reading and quantifying the human mind. Social media is a vast repository of data on how people feel and think. If we can suss out the internal from the performative, we could improve mental health care in the U.S. and abroad.
For example, a study from 2017 found that A.I. could read the predictive markers for depression in Instagram photos. Researchers tasked machine learning tools with analyzing data from 166 individuals, some of whom had been previously diagnosed with depression. The algorithms looked at filter choice, facial expressions, metadata tags, etc., in more than 43,950 photos.
The results? The A.I. outperformed human practitioners at diagnosing depression. These results held even when analyzing images from before the patients' diagnoses. (Of course, Instagram is also the social media platform most likely to make you depressed and anxious, but that's another study.)
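For the curious, here is what that general kind of pipeline can look like in miniature. This is a hypothetical Python sketch, not the study's actual method: it reduces each photo to a few color statistics and fits an off-the-shelf classifier, and every feature choice, number, and label below is an illustrative assumption.

```python
# Hypothetical sketch only: summarize each photo with simple color statistics,
# then fit a classifier on labeled examples. The training data here is synthetic.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def color_features(path: str) -> np.ndarray:
    """Mean hue, saturation, and brightness of one photo."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

# Synthetic stand-in for features extracted from labeled photos:
# darker, less saturated images get label 1, brighter ones label 0.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([120, 60, 80], 10, (200, 3)),    # hypothetical "depressed" group
    rng.normal([120, 110, 160], 10, (200, 3)),  # hypothetical "healthy" group
])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([[120, 70, 90]])[0, 1])  # estimated probability of label 1 for a new photo
```

The real research relied on far richer signals, such as facial expressions and metadata, and careful validation; the point here is only the shape of the approach.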
Talking with Big Think, Eric Topol, a professor in the Department of Molecular Medicine at Scripps, called this the ability to "digitize our state of mind." In addition to the Instagram study, he pointed out that patients will share more with a self-chosen avatar than with a human psychiatrist.
"So when you take this ability to digitize a state of mind and also have a support through an avatar, this could turn out to be a really great way to deal with the problem we have today, which is a lack of mental health professionals with a very extensive burden of depression and other mental health conditions," Topol said.
Detecting mass shooters?
"....mentally ill or deranged people. I am the biggest Second Amendment person there is, but we all must work togeth… https://t.co/T9OthUAsXe" — Donald J. Trump (@realDonaldTrump), August 9, 2019
However, it's not as simple as turning the A.I. dial from "depression" to "mass shooter." Machine learning tools have gotten excellent at analyzing images, but they lag behind the mind's ability to read language, intonation, and social cues.
As Facebook CEO Mark Zuckerberg said: "One of the pieces of criticism we get that I think is fair is that we're much better able to enforce our nudity policies, for example, than we are hate speech. The reason for that is it's much easier to make an A.I. system that can detect a nipple than it is to determine what is linguistically hate speech."
Trump should know this. During a House Homeland Security subcommittee hearing earlier this year, experts testified that A.I. was not a panacea for curing online extremism. Alex Stamos, Facebook's former chief security officer, likened the world's best A.I. to "a crowd of millions of preschoolers" and the task to demanding those preschoolers "get together to build the Taj Mahal."
None of this is to say that the problem is impossible, but it's certainly intractable.
Yes, we can create an A.I. that plays Go or analyzes stock performance better than any human. That's because we have a lot of data on these activities and they follow predictable input-output patterns. Yet even these "simple" algorithms require some of the brightest minds to develop.
Mass shooters, though far too common in the United States, are still rare. We've played far more games of Go, analyzed far more stocks, and diagnosed far more people with depression, a condition millions of Americans struggle with. That gives machine learning software many more data points on those activities from which to build accurate, responsible predictions, and even then the predictions aren't flawless.
Add to this that hate, extremism, and violence don't follow reliable input-output patterns, and you can see why experts are leery of Trump's direction to employ A.I. in the battle against terrorism.
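A toy simulation makes the rarity problem concrete. The numbers below are invented purely for illustration: with only a handful of true cases hidden in a huge population and a noisy risk score, even a screen that sounds impressive buries its few correct hits under a flood of false alarms.

```python
# Toy simulation with made-up numbers, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                  # hypothetical population screened
n_positive = 10                # hypothetical true cases hidden in it
y = np.zeros(n, dtype=bool)
y[:n_positive] = True

# Assume an unusually informative risk score: true cases average two standard
# deviations above everyone else. Real-world signals are far weaker than this.
score = rng.normal(0.0, 1.0, n) + 2.0 * y

flagged = score > 3.0                               # flag only the most extreme scores
print("true hits:   ", int((flagged & y).sum()))    # a handful at best
print("false alarms:", int((flagged & ~y).sum()))   # on the order of a thousand
```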
"As we psychological scientists have said repeatedly, the overwhelming majority of people with mental illness are not violent. And there is no single personality profile that can reliably predict who will resort to gun violence," Arthur C. Evans, CEO of the American Psychological Association, said in a release. "Based on the research, we know only that a history of violence is the single best predictor of who will commit future violence. And access to more guns, and deadlier guns, means more lives lost."
Social media can't protect us from ourselves
First Lady Melania Trump visits with the victims of the El Paso, Texas, shooting. Image source: Andrea Hanks / Flickr
One may wonder whether we could put current capabilities to more aggressive use. Unfortunately, social media moderating systems are a hodgepodge, built piecemeal over the last decade. They rely on a mixture of A.I., paid moderators, and community policing, and the result is an inconsistent system.
For example, the New York Times reported in 2017 that YouTube had removed thousands of videos using machine learning systems. The videos showed atrocities from the Syrian War, such as executions and people spouting Islamic State propaganda. The algorithm flagged and removed them as coming from extremist groups.
In truth, the videos came from humanitarian organizations to document human rights violations. The machine couldn't tell the difference. YouTube reinstated some of the videos after users reported the issue, but mistakes at such a scale do not give one hope that today's moderating systems could accurately identify would-be mass shooters.
That's the conclusion reached in a report from the Partnership on A.I. (PAI). It argued there were "serious shortcomings" in using A.I. as a risk-assessment tool in U.S. criminal justice. Its writers cite three overarching concerns: accuracy and bias; questions of transparency and accountability; and issues with the interface between tools and people.
"Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data," the report states. "While formulas and statistical models provide some degree of consistency and replicability, they still share or amplify many weaknesses of human decision-making."
In addition to the above, there are practical barriers. The technical capabilities of law enforcement vary between locations. Social media platforms deal in massive amounts of traffic and data. And even when the red flags are self-evident — such as when shooters publish manifestos — they offer a narrow window in which to act.
The tools to reduce mass shootings
Protesters at March for Our Lives 2018 in San Francisco. Image source: Gregory Varnum / Wikimedia Commons
Artificial intelligence offers many advantages today and will offer more in the future. But as an answer to extremism and mass shootings, experts agree it's simply the wrong tool. That's the bad news. The good news is we have the tools we need already, and they can be implemented with readily available tech.
"Based on the psychological science, we know some of the steps we need to take. We need to limit civilians' access to assault weapons and high-capacity magazines. We need to institute universal background checks. And we should institute red flag laws that remove guns from people who are at high risk of committing violent acts," Evans wrote.
Evans isn't alone. Experts agree that the policies he suggests, and a few others, will reduce the likelihood of mass shootings. And six in 10 Americans already support these measures.
We don't need advanced A.I. to figure this out. There's only one developed country in the world where someone can legally and easily acquire an armory of guns, and it's the only developed country that suffers mass shootings with such regularity. It's simple arithmetic.
A Cave in France Changes What We Thought We Knew About Neanderthals
A cave in France contains humanity's earliest-known structures, which had to have been built by Neanderthals, a people long believed incapable of such things.
In a French cave deep underground, scientists have discovered what appear to be 176,000-year-old man-made structures. That's 150,000 years earlier than any that have been discovered anywhere before. And they could only have been built by Neanderthals, people who were never before considered capable of such a thing.
This is going to force a major shift in the way we see these early hominids. Researchers had thought that Neanderthals were profoundly primitive, and just barely human. This cave in France's Aveyron Valley changes all that: It's suddenly obvious that Neanderthals were not quite so unlike us.
According to The Atlantic, Bruniquel Cave was first explored in 1990 by Bruno Kowalsczewski, who was 15 at the time. He'd spent three years digging away at rubble covering a space through which his father felt air moving.
Some members of a local caving club managed to squeeze through the narrow, 30-meter long tunnel Kowalsczewski had dug to arrive in a passageway. They followed it past pools of water and old animal bones for over 330 meters before coming into a large chamber and a scene they had no reason to expect: Stalagmites that someone had broken into hundreds of small pieces, most of which were arranged into two rings—one roughly 6 meters across, and one 2 meters wide—with the remaining pieces stacked into one of four piles or leaning against the rings. There were also indications of fires and burnt bones.
Image source: Etienne FABRE - SSAC
What the?
A professional archeologist, Francois Rouzaud, determined with carbon dating that a burnt bear bone found in the chamber was 47,600 years old, which made the stalagmite structures older than any known cave painting. It also put the cave squarely within the age of the Neanderthals since they were the only humans in France that early. No one had suspected them of being capable of constructing complex forms or doing anything that far underground.
After Rouzaud suddenly died in 1999, exploration at the cave stopped until lifelong caver Sophie Verheyden, vacationing in the area, heard about it and decided to try to uranium-date the stalagmites inside.
The team she assembled eventually determined that the stalagmites had been broken up by people 176,000 years ago, far earlier than even Rouzaud had supposed.
There weren't any signs that Neanderthals lived in the cave, so it's a mystery what they were up to down there. Verheyden thinks it's unlikely that a solitary artist created the tableaux, so an organized group of skilled workers must have been involved. "When you see such a structure so far into the cave, you think of something cultural or religious, but that's not proven," Verheyden told The Atlantic.
Whatever they built, the Bruniquel Cave reveals some big surprises about Neanderthals: They had fire, they built things, and likely used tools. Add this to recent discoveries that suggest they buried their dead, made art, and maybe even had language, and these mysterious proto-humans start looking a lot more familiar. A lot more like Homo sapiens, and a lot more like distant cousins lost to history.
Paul Hudson/Flickr
Psychopath-ish: How “healthy” brains can look and function like those of psychopaths
A recent study used fMRI to compare the brains of psychopathic criminals with those of 100 well-functioning individuals, finding striking similarities.
- The study used psychological inventories to assess a group of violent criminals and healthy volunteers for psychopathy, and then examined how their brains responded to watching violent movie scenes.
- The fMRI results showed that the brains of healthy subjects who scored high in psychopathic traits reacted similarly to those of the psychopathic criminal group. Both of these groups also showed atrophy in brain regions involved in regulating emotion.
- The study adds complexity to common conceptions of what differentiates a psychopath from a "healthy" individual.
When considering what precisely makes someone a psychopath, the lines can be blurry.
Psychological research has shown that many people in society have some degree of malevolent personality traits, such as those described by the "dark triad": narcissism (entitled self-importance), Machiavellianism (strategic exploitation and deceit), and psychopathy (callousness and cynicism). But while people who score high in these traits are more likely to end up in prison, most of them are well functioning and don't engage in extreme antisocial behaviors.
Now, a new study published in Cerebral Cortex found that the brains of psychopathic criminals are structurally and functionally similar to many well-functioning, non-criminal individuals with psychopathic traits. The results suggest that psychopathy isn't a binary classification, but rather a "constellation" of personality traits that "vary in the non-incarcerated population with normal range of social functioning."
Assessing your inner psychopath
The researchers used functional magnetic resonance imaging (fMRI) to compare the brains of violent psychopathic criminals to those of healthy volunteers. All participants were assessed for psychopathy through commonly used inventories: the Hare Psychopathy Checklist-Revised and the Levenson Self-Report Psychopathy Scale.
Experimental design and sample stimuli: the subjects viewed a compilation of 137 movie clips with variable violent and nonviolent content. Image source: Nummenmaa et al.
Both groups watched a 26-minute-long medley of movie scenes that were selected to portray a "large variability of social and emotional content." Some scenes depicted intense violence. As participants watched the medley, fMRI recorded how various regions of their brains responded to the content.
The goal was to see whether the brains of psychopathic criminals looked and reacted similarly to the brains of healthy subjects who scored high in psychopathic traits. The results showed similar reactions: When both groups viewed violent scenes, the fMRI revealed strong reactions in the orbitofrontal cortex and anterior insula, brain regions associated with regulating emotion.
These similarities manifested as a positive association: The more psychopathic traits a healthy subject displayed, the more their brains responded like the criminal group. What's more, the fMRI revealed a similar association between psychopathic traits and brain structure, with those scoring high in psychopathy showing lower gray matter density in the orbitofrontal cortex and anterior insula.
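To picture the kind of association being described, here is a hypothetical sketch on synthetic data; it is not the authors' code or data, and the sample sizes and score ranges are assumptions. Each simulated subject's regional response pattern is compared with the criminal group's average, and that similarity is then correlated with the subject's psychopathy score.

```python
# Hypothetical association test on synthetic data, not the study's analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_regions = 100, 50

scores = rng.uniform(0, 40, n_subjects)        # assumed psychopathy inventory scores
criminal_mean = rng.normal(0, 1, n_regions)    # assumed mean regional response of the criminal group

# Simulate healthy subjects whose response patterns drift toward the criminal
# pattern as their trait scores rise (the relationship being tested).
responses = (scores[:, None] / 40) * criminal_mean + rng.normal(0, 1, (n_subjects, n_regions))

# Similarity of each subject's pattern to the criminal group's average,
# then its correlation with that subject's trait score.
similarity = np.array([np.corrcoef(r, criminal_mean)[0, 1] for r in responses])
r, p = pearsonr(scores, similarity)
print(f"r = {r:.2f}, p = {p:.2g}")             # a positive association, by construction
```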
There were some key differences between the groups, however. The researchers noted that the structural abnormalities in the healthy sample were mainly associated with primary psychopathic traits: an inclination to lie, lack of remorse, and callousness. Meanwhile, the functional responses of the healthy subjects were associated with secondary psychopathic traits: impulsivity, a short temper, and low tolerance for frustration.
Overall, the study further illuminates some of the biological drivers of psychopathy, and it adds nuance to common conceptions of the differences between psychopathy and being "healthy."
Why do some psychopaths become criminals?
The million-dollar question remains unanswered: Why do some psychopaths end up in prison, while others (or rather, people who score high in psychopathic traits) lead well-functioning lives? The researchers couldn't give a definitive answer, but they did note that psychopathic criminals had lower connectivity within "key nodes of the social and emotional brain networks, including amygdala, insula, thalamus, and frontal pole."
"Thus, even though there are parallels in the regional responsiveness of the brain's affective circuit in the convicted psychopaths and well-functioning subjects with psychopathic traits, it is likely that the disrupted functional connectivity of this network is specific to criminal psychopathy."
Fighting online misinformation: We're doing it wrong
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Misinformation behaves like the coronavirus: engaging with it can inadvertently cause it to spread.
- Social media companies have a business model built on getting users to spend ever more time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of parallel pandemics: the biological contagion of COVID-19 and the social contagion of misinformation, which aids the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
Credit: Pool via Getty Images
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
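A toy scoring function shows the dynamic at work. It is not any platform's real algorithm; the weights and counts below are assumptions chosen only to illustrate how replies, even well-intentioned rebuttals, push a post's score, and thus its reach, upward.

```python
# Toy ranking model (illustrative only; no platform's actual algorithm).
def engagement_score(likes: int, shares: int, replies: int) -> float:
    # Hypothetical weights: shares and replies count more than passive likes.
    return 1.0 * likes + 3.0 * shares + 2.0 * replies

post = {"likes": 50, "shares": 10, "replies": 5}
print(engagement_score(**post))     # baseline score: 90.0

post["replies"] += 200              # 200 well-meaning rebuttals arrive
print(engagement_score(**post))     # 490.0: the "debunked" post now ranks far higher
```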
Additionally, whereas we know our friend from the office or the dinner table, most of the misinformation we see online comes from strangers. They often belong to one of two groups: true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both groups use trolling tactics, that is, seeking to provoke angry responses that help them reach new audiences and game the algorithms.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to that business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
Self-awareness is what makes us human
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
