We can accept our differences – it’s better than killing each other

  • Human beings are psychologically hardwired to fear differences
  • Several recent studies suggest that digital spaces exacerbate the psychological tendencies that contribute to tribalism
  • Shared experiences of awe, such as space travel, or even simple shared meals, are surprisingly effective at uniting opposing groups

The year was fraught to say the least.

Riots in the streets, engagement abroad in a long-fought war, and an encroaching sense that the fabric knitting us together is coming apart at the seams.

Still, a single sentence from it all speaks to our ability to come together: "Thank you for saving 1968."

Those words of gratitude—standing out against the backdrop of a tumultuous year—arrived via telegram at NASA half a century ago following Apollo 8's successful trip around the moon. But their lesson applies today.

Emerging research in the fields of psychology and neuroscience demonstrates that we often need a shared experience of awe, humor, or physical exertion to help transcend our differences. (Think team-building trust falls and ropes courses in the woods.) Fortunately, it doesn't take a trip around the moon to bridge the deepest divides; even conversation over a cup of coffee or a meal can remind us of each other's humanity.

What does this mean for free speech? Everything, it turns out.

We are hardwired to censor what's unfamiliar. 

Our brains' first response to difference is not curiosity but fear or scorn. In a mid-20th-century experiment known as "Robbers Cave," researchers randomly sorted demographically similar boys into two groups, gave each a few days to form bonds within its "tribe," and then kicked off a baseball competition. The boys quickly started generalizing about the other team and drawing distinctions that didn't exist.

The experiment reveals that, even absent any particular reason for disagreement, people are primed to sort into 'in' groups and 'out' groups. In the spectacle that politics can be at times, we can see how easily Americans fall into this trap. The tendency is not unique to any one political tribe. Recent research by the Cato Institute found at least one thing the left and the right agree on: both want to silence someone; they just disagree on whom.

In light of all this, what we've witnessed in recent months may not feel surprising, but it is troubling nonetheless. Examples are readily available from both sides of the aisle: elected and appointed leaders shouted out of restaurants, journalists receiving death threats, and bombs sent in the mail to public figures vilified by radical activists. Relying on intimidation to silence people goes beyond censorship. It's abhorrent.

Now take that tendency and add in the emerging digital landscape. 

The same technology that makes it possible to connect people instantly across great distances can also compound our tribal instincts. A recent study found that when individuals confront different perspectives online, the new information further entrenches their existing beliefs and increases skepticism of the opposing view. It's also now easier than ever to opt into what MoveOn.org board president and digital guru Eli Pariser calls "filter bubbles": surrounding ourselves with homogeneous communities and replacing diverse, in-person interaction with digital engagement.

So what do we do about it?

The increasing division and polarization our country is experiencing is the result of many factors, many of them not yet fully understood or even identified. The long-term solutions will be as complex as the humans at the root of the problem.

But we can make progress in the meantime in surprisingly simple ways. People are full of potential. Individuals can play an essential role in helping fractured communities heal and in turning fear into curiosity when encountering other ideas, cultures, and perspectives. We have the opportunity to learn from each other's differences.

A decade ago, on another space flight, the crew of the International Space Station gathered for a meal. The astronauts came from different backgrounds: Iranian, Russian, American, and others, and some have shared that they might otherwise have struggled to find common ground on Earth. But when they broke (freeze-dried) bread together while watching the Earth pass by, they experienced a "profound emotional experience of interconnection."

It doesn't take zero gravity. Just start a conversation rooted in deep respect for each other's inherent dignity. Be the change you want to see on this tiny blue dot.


Sarah Ruger directs the Charles Koch Foundation's free expression work
Related Articles

Climate deniers get more airtime than experts

There's fairness, and then there's craziness.

  • There's no dispute that climate change is real and we're causing it.
  • Climate coverage gives non-experts outsized influence.
  • Non-scientists with mere opinions get as much exposure as experts, or more.

Could A.I. detect mass shooters before they strike?

President Trump has called for Silicon Valley to develop digital precogs, but such systems raise efficacy concerns.

  • President Donald Trump wants social media companies to develop A.I. that can flag potential mass shooters.
  • Experts agree that artificial intelligence is not advanced enough, nor are current moderating systems up to the task.
  • A majority of Americans support stricter gun laws, but such policies have yet to make headway.

On August 3, a man in El Paso, Texas, shot and killed 22 people and injured 24 others. Hours later, another man in Dayton, Ohio, shot and killed nine people, including his own sister. Even in a country left numb by countless mass shootings, the news was distressing and painful.

President Donald Trump soon addressed the nation to outline how his administration planned to tackle this uniquely American problem. Listeners hoping the tragedies might finally spur motivation for stricter gun control laws, such as universal background checks or restrictions on high-capacity magazines, were left disappointed.

Trump's plan was a ragbag of typical Republican talking points: red flag laws, mental health concerns, and regulation on violent video games. Tucked among them was an idea straight out of a Philip K. Dick novel.

"We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts," Trump said. "First, we must do a better job of identifying and acting on early warning signs. I am directing the Department of Justice to work in partnership with local, state and federal agencies as well as social media companies to develop tools that can detect mass shooters before they strike."

Basically, Trump wants digital precogs. But has artificial intelligence reached such grand, and potentially terrifying, heights?

A digitized state of mind

It's worth noting that A.I. has made impressive strides at reading and quantifying the human mind. Social media is a vast repository of data on how people feel and think. If we can suss out the internal from the performative, we could improve mental health care in the U.S. and abroad.

For example, a 2017 study found that A.I. could read predictive markers of depression in Instagram photos. Researchers tasked machine learning tools with analyzing data from 166 individuals, some of whom had been previously diagnosed with depression. The algorithms looked at filter choice, facial expressions, metadata tags, and more across 43,950 photos.

The results? The A.I. outperformed human practitioners at diagnosing depression. These results held even when analyzing images from before the patients' diagnoses. (Of course, Instagram is also the social media platform most likely to make you depressed and anxious, but that's another study.)

Talking with Big Think, Eric Topol, a professor in the Department of Molecular Medicine at Scripps, called this the ability to "digitize our state of mind." In addition to the Instagram study, he pointed out that patients will share more with a self-chosen avatar than a human psychiatrist.

"So when you take this ability to digitize a state of mind and also have a support through an avatar, this could turn out to be a really great way to deal with the problem we have today, which is a lack of mental health professionals with a very extensive burden of depression and other mental health conditions," Topol said.

Detecting mass shooters?

However, it's not as simple as turning the A.I. dial from "depression" to "mass shooter." Machine learning tools have gotten excellent at analyzing images, but they lag behind the mind's ability to read language, intonation, and social cues.

As Facebook CEO Mark Zuckerberg said: "One of the pieces of criticism we get that I think is fair is that we're much better able to enforce our nudity policies, for example, than we are hate speech. The reason for that is it's much easier to make an A.I. system that can detect a nipple than it is to determine what is linguistically hate speech."

Trump should know this. During a House Homeland Security subcommittee hearing earlier this year, experts testified that A.I. was not a panacea for curing online extremism. Alex Stamos, Facebook's former chief security officer, likened the world's best A.I. to "a crowd of millions of preschoolers" and the task to demanding those preschoolers "get together to build the Taj Mahal."

None of this is to say the problem is unsolvable, but it is certainly intractable.

Yes, we can create an A.I. that plays Go or analyzes stock performance better than any human. That's because we have a lot of data on these activities and they follow predictable input-output patterns. Yet even these "simple" algorithms require some of the brightest minds to develop.

Mass shooters, though far too common in the United States, are still statistically rare. We have far more data on games of Go, stock performance, and depression, a condition millions of Americans struggle with. That abundance of data points is what lets machine learning software make accurate, responsible predictions in those domains, and even then the predictions aren't flawless.

Add to this that hate, extremism, and violence don't follow reliable input-output patterns, and you can see why experts are leery of Trump's direction to employ A.I. in the battle against terrorism.

"As we psychological scientists have said repeatedly, the overwhelming majority of people with mental illness are not violent. And there is no single personality profile that can reliably predict who will resort to gun violence," Arthur C. Evans, CEO of the American Psychological Association, said in a release. "Based on the research, we know only that a history of violence is the single best predictor of who will commit future violence. And access to more guns, and deadlier guns, means more lives lost."

Social media can't protect us from ourselves

First Lady Melania Trump visits with the victims of the El Paso, Texas, shooting. Image source: Andrea Hanks / Flickr

One may wonder whether we can utilize current capabilities more aggressively. Unfortunately, social media moderating systems are a hodgepodge, built piecemeal over the last decade. They rely on a mixture of A.I., paid moderators, and community policing, and the result is an inconsistent system.

For example, the New York Times reported in 2017 that YouTube had removed thousands of videos using machine learning systems. The videos showed atrocities from the Syrian War, such as executions and people spouting Islamic State propaganda. The algorithm flagged and removed them as coming from extremist groups.

In truth, the videos came from humanitarian organizations to document human rights violations. The machine couldn't tell the difference. YouTube reinstated some of the videos after users reported the issue, but mistakes at such a scale do not give one hope that today's moderating systems could accurately identify would-be mass shooters.

That's the conclusion reached in a report from the Partnership on A.I. (PAI). It argued there were "serious shortcomings" in using A.I. as a risk-assessment tool in U.S. criminal justice. Its writers cite three overarching concerns: accuracy and bias; questions of transparency and accountability; and issues with the interface between tools and people.

"Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data," the report states. "While formulas and statistical models provide some degree of consistency and replicability, they still share or amplify many weaknesses of human decision-making."

In addition to the above, there are practical barriers. The technical capabilities of law enforcement vary between locations. Social media platforms deal in massive amounts of traffic and data. And even when the red flags are self-evident — such as when shooters publish manifestos — they offer a narrow window in which to act.

The tools to reduce mass shootings

Protesters at March for Our Lives 2018 in San Francisco. Image source: Gregory Varnum / Wikimedia Commons

Artificial intelligence offers many advantages today and will offer more in the future. But as an answer to extremism and mass shootings, experts agree it's simply the wrong tool. That's the bad news. The good news is we have the tools we need already, and they can be implemented with readily available tech.

"Based on the psychological science, we know some of the steps we need to take. We need to limit civilians' access to assault weapons and high-capacity magazines. We need to institute universal background checks. And we should institute red flag laws that remove guns from people who are at high risk of committing violent acts," Evans wrote.

Evans isn't alone. Experts agree that the policies he suggests, and a few others, will reduce the likelihood of mass shootings. And six in 10 Americans already support these measures.

We don't need advanced A.I. to figure this out. There's only one developed country in the world where someone can legally and easily acquire an armory of guns, and it's the only developed country that suffers mass shootings with such regularity. It's simple arithmetic.

Why secular humanism can do what atheism can't

Atheism doesn't offer much beyond non-belief. Can secular humanism fill the gaps?

  • Atheism is increasingly popular, but the lack of an organized community around it can be problematic.
  • The decline in social capital once offered by religion can cause severe problems.
  • Secular Humanism can offer both community and meaning, but it has also attracted controversy.