
Is free expression online threatened by content removal?

U.S. laws regulating online speech offer broad protections for private companies, but experts worry free expression may be threatened by "better safe than sorry" voluntary censorship.

A member of the Westboro Baptist Church demonstrates outside the Basilica of the National Shrine of the Immaculate Conception. (Photo: NICHOLAS KAMM/AFP/Getty Images)
  • U.S. laws regulating online speech offer broad protections for internet intermediaries.
  • Despite this, companies typically follow a "better safe than sorry" approach to protect against legal action or loss of reputation.
  • Silencing contentious opinions can have detrimental effects, such as social exclusion and foreclosing reconciliation.

Megan Phelps-Roper grew up in the Westboro Baptist Church. At the tender age of five, she joined her parents on Westboro's now notorious picket lines. She held up signs reading 'God Hates Fags' to protest the funerals of homosexual men. She thanked God for dead soldiers at the funerals of Afghanistan war veterans. In 2009, she took the church's vitriol online and began tweeting for the congregation.

If one organization seemed primed to be deplatformed online, it's Westboro. The church is considered a hate group by the Anti-Defamation League, Southern Poverty Law Center, and others. Its radical opinions seem patently designed to insult those on the left, those on the right, and anyone with common decency. Although Phelps-Roper no longer tweets for the church — we'll return to her story later — it maintains various Twitter accounts (though others have been suspended).

How is it that an organization as universally despised as Westboro can maintain an online presence? The answer lies in the United States' cultural traditions of free expression, and in the complex interplay among U.S. laws, public opinion, and online intermediaries attempting to navigate these new digital public spaces.

How U.S. laws regulate online speech


The Free Speech Wall in Charlottesville, VA.

(Photo: Wikimedia Commons)

All online content arrives on our screens through intermediaries: ISPs, DNS providers, hosts, search engines, and social media platforms, to name but a few. Their responsibilities differ when it comes to regulating content, but for simplicity we'll consider them as a single group.

Intermediaries bear some degree of obligation for the content published or shared through their services, yet U.S. liability law grants them broad immunity, even compared to other Western democracies. They remain legally safe as long as the content originates from users and they remove any illegal content once it is made known to them.

Daphne Keller is the Director of Intermediary Liability at the Stanford Center for Internet and Society. In a Hoover Institution essay, she notes that intermediary liability falls mostly under three laws. They are:

The Communications Decency Act (CDA). Section 230 of this law effectively "immunizes platforms from traditional speech torts, such as defamation, and other civil claims." But platforms lose that protection if they create, edit, or collaborate with users on the content.

The Digital Millennium Copyright Act (DMCA). The DMCA ensures intermediaries can avoid liability without resorting to monitoring user speech. It also adds due process protocols, allowing defendants to argue against "mistaken or malicious claims."

Federal Criminal Law. Keller points out that intermediaries are also bound by criminal law. With regard to terrorism and child pornography, for example, intermediaries are not held liable if they remove the material and follow reporting requirements.

Of course, as private organizations, intermediaries have their own policies as well. Hate speech, for example, is not illegal in the United States; however, Twitter enforces a policy against hateful conduct. The policy prohibits inciting violence or harm against other people, as well as spreading fear-based stereotypes, symbols associated with hate groups, and slurs designed to dehumanize someone.

The threat of over-removal

Despite these broad immunities, over-removal of content and speech remains a reality on today's internet. Size is part of the problem. As Keller notes in her essay, Google received "a few hundred DMCA notices" in 2006; the search engine now receives millions per day. Under such strain, intermediaries can find it difficult to assess the validity of takedown requests.

A Takedown Project report undertaken by researchers at UC Berkeley and Columbia University found that intermediaries "may be subject to large numbers of suspect claims, even from a single individual."

The researchers argued that the automated systems used by large intermediaries to assess claims were in need of more accurate algorithms and human review. Due process safeguards were also found to be lacking.

Small intermediaries, which may not possess the resources or time to litigate claims, follow "better safe than sorry" policies, which can lead to compliance with all claims as a matter of course.

Platforms can also be motivated to remove extreme content out of political worry, fear of losing customers or investors, or a desire to create more inviting online spaces. Even if contentious speech is legal, platforms may remove it just to be safe.

Network service CloudFlare faced such a reputational dilemma in 2017. The organization dropped far-right message board the Daily Stormer from its services after claims were made by Stormer staff that CloudFlare supported its ideology.

CloudFlare co-founder Matthew Prince called the decision necessary but dangerous. In a release, he said, "We're going to have a long debate internally about whether we need to remove [the claim] about not terminating a customer due to political pressure."

What we lose when we over-regulate

Former Westboro Baptist Church member Megan Phelps-Roper of 'The Story of Us with Morgan Freeman' speaks onstage during the National Geographic Channels portion of the 2017 Summer Television Critics Association Press Tour.

(Photo: Frederick M. Brown/Getty Images)

CloudFlare's dilemma shows the difficulty private organizations, which are not bound by the same laws as government entities, face in regulating services that have effectively evolved into public spaces. Given the growing ubiquity of online spaces, finding the proper balance will be imperative.

In the search for responsible regulation, we must be careful not to silence free expression. Whether done by accident or design, silencing will not change the minds of the people holding these ideas. It instead breeds anger and alienation, in turn creating a sense of persecution and profound injustice. Left unresolved, these emotions can heighten the risk of extremism and political violence.

Lee Rowland, American Civil Liberties Union senior staff attorney, explains the difficulty of navigating the benefits and risks:

It's not a comfortable thing to talk about, because nobody wants to see Nazi ideology, but I will say that I do want the ability to see and find speech that reflects actual human beliefs. That's how we know what's out there. It doesn't benefit us to be blindsided by the private organizing of white supremacists. […] Enforcing that kind of purity only hides those beliefs; it doesn't change them.

We also run the risk of losing an important tool for personal development, both for ourselves and for those we disagree with. If people are unable to engage bad ideas in conversation, we'll lose our remedies for extreme ideological thought, such as debate and forced examination.

This is exactly what happened to Megan Phelps-Roper. After she started tweeting for Westboro, she encountered much hostility for the views she espoused. But among the bellicose voices, she also met people willing to engage her in civil debate.

"There was no confusion about our positions, but the line between friend and foe was becoming blurred," Phelps-Roper said during her TED talk. "We'd started to see each other as human beings, and it changed the way we spoke to one another."

Over time, these conversations changed her perspective. Her relationship with Westboro and its hateful ideology ended in 2012.

"My friends on Twitter didn't abandon their beliefs or their principles — only their scorn," she added. "They channeled their infinitely justifiable offense and came to me with pointed questions tempered with kindness and humor. They approached me as a human being, and that was more transformative than two full decades of outrage, disdain, and violence."

There is definitely a need to regulate speech online. But Phelps-Roper's story is a warning of all we'll lose if free expression becomes threatened online.

The opinions expressed in this article do not necessarily reflect the views of the Charles Koch Foundation, which encourages the expression of diverse viewpoints within a culture of civil discourse and mutual respect.

Related Articles

Is mask-shaming effective?

To empathize or scream, that is the question.

Photo by David McNew/Getty Images
  • Jennifer Jacquet writes that effective shaming can be a powerful tool for social change.
  • Tess Wilkinson-Ryan believes shame is useless in the case of the pandemic.
  • The politicization of the coronavirus takes our attention away from the failure of the administration.

Human brains remember certain words more easily than others

A study of the manner in which memory works turns up a surprising thing.

Image Point Fr / Shutterstock
Mind & Brain
  • Researchers have found that some basic words appear to be more memorable than others.
  • Some faces are also easier to commit to memory.
  • Scientists suggest that these words serve as semantic bridges when the brain is searching for a memory.

Cognitive psychologist Weizhen Xie (Zane) of the NIH's National Institute of Neurological Disorders and Stroke (NINDS) works with people who have intractable epilepsy, a form of the disorder that can't be controlled with medications. During research into the brain activity of patients, he and his colleagues discovered something odd about human memory: It appears that certain basic words are consistently more memorable than other basic words.

The research is published in Nature Human Behaviour.

An odd find

Image source: Tsekhmister/Shutterstock

Xie's team was re-analyzing memory tests of 30 epilepsy patients undertaken by Kareem Zaghloul of NINDS.

"Our goal is to find and eliminate the source of these harmful and debilitating seizures," Zaghloul said. "The monitoring period also provides a rare opportunity to record the neural activity that controls other parts of our lives. With the help of these patient volunteers we have been able to uncover some of the blueprints behind our memories."

Specifically, the participants were shown word pairs, such as "hand" and "apple." To better understand how the brain remembers such pairings, participants were supplied one of the two words after a brief interval and asked to recall the other. Of the 300 words used in the tests, five proved to be five times more likely to be recalled than the rest: pig, tank, doll, pond, and door.
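The study's core measure is a per-word recall rate: how often a word was successfully retrieved when its pair was given as a cue. A minimal sketch of that bookkeeping in Python, using made-up trial data rather than the study's:

```python
from collections import defaultdict

# Hypothetical trial records: (cue word, target word, was the target recalled?)
trials = [
    ("hand", "apple", True),
    ("pig", "door", True),
    ("cat", "pond", False),
    ("pig", "tank", True),
    ("cat", "cloud", False),
]

def recall_rates(trials):
    """Fraction of trials in which each target word was successfully recalled."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for _cue, target, recalled in trials:
        totals[target] += 1
        hits[target] += recalled
    return {word: hits[word] / totals[word] for word in totals}

rates = recall_rates(trials)  # e.g. rates["pond"] == 0.0 in this toy data
```

Aggregating this way across many participants is what lets researchers say one word is consistently more memorable than another.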

The scientists were perplexed that these words were so much more memorable than words like "cat," "street," "stair," "couch," and "cloud."

Intrigued, the researchers looked at a second data source from a word test taken by 2,623 healthy individuals via Amazon's Mechanical Turk and found essentially the same thing.

"We saw that some things — in this case, words — may be inherently easier for our brains to recall than others," Zaghloul said. That the Mechanical Turk results were so similar may "provide the strongest evidence to date that what we discovered about how the brain controls memory in this set of patients may also be true for people outside of the study."

Why understanding memory matters

person holding missing piece from human head puzzle

Image source: Orawan Pattarawimonchai/Shutterstock

"Our memories play a fundamental role in who we are and how our brains work," Xie said. "However, one of the biggest challenges of studying memory is that people often remember the same things in different ways, making it difficult for researchers to compare people's performances on memory tests." He added that the search for some kind of unified theory of memory has been going on for over a century.

The researchers say that if "we can predict what people should remember in advance and understand how our brains do this, then we might be able to develop better ways to evaluate someone's overall brain health."

Party chat

Image source: joob_in/Shutterstock

Xie's interest in this was piqued during a conversation with Wilma Bainbridge of the University of Chicago at a Christmas party a couple of years ago. Bainbridge was, at the time, wrapping up a study of 1,000 volunteers that suggested certain faces are universally more memorable than others.

Bainbridge recalls, "Our exciting finding is that there are some images of people or places that are inherently memorable for all people, even though we have each seen different things in our lives. And if image memorability is so powerful, this means we can know in advance what people are likely to remember or forget."

spinning 3D model of a brain

Temporal lobes

Image source: Anatomography/Wikimedia

At first, the scientists suspected that the memorable words and faces were simply encountered more frequently and were thus easier to recall. They envisioned them as being akin to "highly trafficked spots connected to smaller spots representing the less memorable words." They developed a modeling program based on word frequencies found in books, news articles, and Wikipedia pages. Unfortunately, the model was unable to predict or duplicate the results they saw in their clinical experiments.
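The frequency hypothesis the team tested can be caricatured in a few lines: score each candidate word by its raw frequency in a corpus and rank. The corpus and word list below are invented for illustration and are not the researchers' actual model:

```python
from collections import Counter

# Tiny invented corpus standing in for books, news articles, and Wikipedia
corpus = "the pig walked to the pond and the pig saw the door".split()

def frequency_ranking(words, corpus_tokens):
    """Rank candidate words by raw corpus frequency (a naive memorability proxy)."""
    counts = Counter(corpus_tokens)
    return sorted(words, key=lambda w: counts[w], reverse=True)

ranking = frequency_ranking(["pig", "pond", "door", "cloud"], corpus)
# "cloud" never occurs in the corpus, so it ranks last under this proxy
```

As the researchers found, raw frequency is a poor proxy: a word can be common in text yet unremarkable as a memory hub.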

Eventually, the researchers came to suspect that the memorability of certain words was linked to how frequently the brain uses them as semantic links between other memories, making them often-visited hubs in individuals' memory networks, and therefore places the brain jumps to early and often when retrieving memories. This idea was supported by observed activity in participants' anterior temporal lobe, a language center.

In epilepsy patients, these words were so frequently recalled that subjects often shouted them out even when they were incorrect responses to word-pair inquiries.

Seek, find

Modern search engines no longer simply look for raw words when resolving an inquiry: They also look for semantic — contextual and meaning — connections so that the results they present may better anticipate what it is you're looking for. Xie suggests something similar may be happening in the brain: "You know when you type words into a search engine, and it shows you a list of highly relevant guesses? It feels like the search engine is reading your mind. Well, our results suggest that the brains of the subjects in this study did something similar when they tried to recall a paired word, and we think that this may happen when we remember many of our past experiences."
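Xie's search-engine analogy amounts to nearest-neighbor lookup in a semantic space. A toy illustration with invented three-dimensional word vectors and cosine similarity (real systems use learned embeddings with hundreds of dimensions):

```python
import math

# Made-up "embeddings" for illustration only; not derived from any real model
embeddings = {
    "pond": (0.9, 0.1, 0.0),
    "lake": (0.8, 0.2, 0.1),
    "door": (0.0, 0.9, 0.3),
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word, embeddings):
    """Return the semantically closest other word under cosine similarity."""
    return max((w for w in embeddings if w != word),
               key=lambda w: cosine(embeddings[word], embeddings[w]))
```

Here `nearest("pond", embeddings)` returns "lake": the retrieval jumps to a semantic neighbor rather than an exact match, which is the behavior the study attributes to memory hubs.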

He also notes that it may one day be possible to leverage individuals' apparently wired-in knowledge of their language as a fixed point against which to assess the health of their memory and brain.

Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.