Is free expression online threatened by content removal?

U.S. laws regulating online speech offer broad protections for private companies, but experts worry free expression may be threatened by “better safe than sorry” voluntary censorship.

A member of the Westboro Baptist Church demonstrates outside the Basilica of the National Shrine of the Immaculate Conception. (Photo: NICHOLAS KAMM/AFP/Getty Images)

Key Takeaways
  • U.S. laws regulating online speech offer broad protections for internet intermediaries.
  • Despite this, companies typically follow a “better safe than sorry” approach to protect against legal action or loss of reputation.
  • Silencing contentious opinions can have detrimental effects, such as social exclusion and foreclosing the possibility of reconciliation.

Megan Phelps-Roper grew up in the Westboro Baptist Church. At the tender age of five, she joined her parents on Westboro’s now notorious picket lines. She held up signs reading “God Hates Fags” to protest the funerals of homosexual men. She thanked God for dead soldiers at the funerals of Afghanistan war veterans. In 2009, she took the church’s vitriol online and began tweeting for the congregation.

If one organization seemed primed to be deplatformed online, it’s Westboro. The church is considered a hate group by the Anti-Defamation League, the Southern Poverty Law Center, and others. Its radical opinions seem patently designed to insult those on the left, those on the right, and anyone with common decency. Although Phelps-Roper no longer tweets for the church (we’ll return to her story later), it maintains various Twitter accounts (though others have been suspended).

How is it that an organization as universally despised as Westboro can maintain an online presence? The answer lies in the United States’ cultural traditions of free expression, and in the complex interplay among U.S. laws, public opinion, and the online intermediaries attempting to navigate these new digital public spaces.

The Free Speech Wall in Charlottesville, VA.

(Photo: Wikimedia Commons)

How U.S. laws regulate online speech

All online content arrives on our screens through intermediaries: ISPs, DNS providers, hosts, search engines, and social media platforms, to name but a few. Their responsibilities for regulating content differ, but for simplicity we’ll consider them as a single group.

Intermediaries bear some degree of responsibility for the content published or shared through their services, yet U.S. liability law grants them broad immunity, even compared with other Western democracies. They remain legally safe as long as the content originates with their users and they remove any illegal content once it is brought to their attention.

Daphne Keller is the Director of Intermediary Liability at the Stanford Center for Internet and Society. In a Hoover Institution essay, she notes that intermediary liability falls mostly under three laws. They are:

The Communications Decency Act (CDA). Section 230 of this law effectively “immunizes platforms from traditional speech torts, such as defamation, and other civil claims.” But platforms lose that protection if they create, edit, or collaborate with users on the content.

The Digital Millennium Copyright Act (DMCA). The DMCA’s notice-and-takedown system lets intermediaries avoid copyright liability without resorting to monitoring user speech. It also adds due process protocols, allowing defendants to argue against “mistaken or malicious claims.”

Federal Criminal Law. Keller points out that intermediaries are also bound by criminal law. With regard to terrorism and child pornography, for example, intermediaries are not held liable if they remove the material and follow reporting requirements.

Of course, as private organizations, intermediaries have their own policies as well. Hate speech, for example, is not illegal in the United States; however, Twitter enforces a policy against hateful conduct. The policy prohibits not only inciting violence or harm against other people but also spreading fearful stereotypes, displaying symbols associated with hate groups, and using slurs designed to dehumanize.

(Video: Why you should tolerate intolerable ideas)

The threat of over-removal

Despite these broad immunities, over-removal of content and speech remains a reality on today’s internet. Size is part of the problem. As Keller notes in her essay, Google received “a few hundred DMCA notices” in 2006; the search engine now receives millions per day. Under such strain, intermediaries can find it difficult to assess the validity of takedown requests.

A Takedown Project report undertaken by researchers at UC Berkeley and Columbia University found that intermediaries “may be subject to large numbers of suspect claims, even from a single individual.”

The researchers argued that the automated systems used by large intermediaries to assess claims were in need of more accurate algorithms and human review. Due process safeguards were also found to be lacking.

Small intermediaries, which may not possess the resources and time to litigate claims, follow “better safe than sorry” policies, which can mean complying with all claims as a matter of course.

Platforms can also be motivated to remove extreme content by political worries, by the fear of losing customers or investors, and by the desire to create more inviting online spaces. Even if contentious speech is legal, platforms may remove it just to be safe.

Network service CloudFlare faced such a reputational dilemma in 2017. The organization dropped the far-right website the Daily Stormer from its services after Stormer staff claimed that CloudFlare supported the site’s ideology.

CloudFlare co-founder Matthew Prince called the decision necessary but dangerous. In a release, he said, “We’re going to have a long debate internally about whether we need to remove [the claim] about not terminating a customer due to political pressure.”

Former Westboro Baptist Church member Megan Phelps-Roper of ‘The Story of Us with Morgan Freeman’ speaks onstage during the National Geographic Channels portion of the 2017 Summer Television Critics Association Press Tour.

(Photo: Frederick M. Brown/Getty Images)

What we lose when we over-regulate

CloudFlare’s dilemma shows the difficulty private organizations, which are not bound by the same laws as government entities, face in regulating services that have effectively evolved into public spaces. Given the growing ubiquity of online spaces, finding the proper balance will be imperative.

In the search for responsible regulation, we must be careful not to silence free expression. Whether it happens by accident or design, silencing will not change the minds of the people who hold these ideas. It instead breeds anger and alienation, in turn creating a sense of persecution and profound injustice. Left unresolved, these emotions are thought to heighten the risk of extremism and political violence.

Lee Rowland, a senior staff attorney at the American Civil Liberties Union, explains the difficulty of navigating the benefits and risks:

It’s not a comfortable thing to talk about, because nobody wants to see Nazi ideology, but I will say that I do want the ability to see and find speech that reflects actual human beliefs. That’s how we know what’s out there. It doesn’t benefit us to be blindsided by the private organizing of white supremacists. […] Enforcing that kind of purity only hides those beliefs; it doesn’t change them.

We also run the risk of losing an important tool for personal development, both for ourselves and for those we disagree with. If we are unable to engage bad ideas in conversation, we lose the remedies for extreme ideological thought: debate and forced examination.

This is exactly what happened to Megan Phelps-Roper. After she started tweeting for Westboro, she encountered much hostility for the views she espoused. But among the bellicose voices, she also met people willing to engage her in civil debate.

“There was no confusion about our positions, but the line between friend and foe was becoming blurred,” Phelps-Roper said during her TED talk. “We’d started to see each other as human beings, and it changed the way we spoke to one another.”

Over time, these conversations changed her perspective. Her relationship with Westboro and its hateful ideology ended in 2012.

“My friends on Twitter didn’t abandon their beliefs or their principles — only their scorn,” she added. “They channeled their infinitely justifiable offense and came to me with pointed questions tempered with kindness and humor. They approached me as a human being, and that was more transformative than two full decades of outrage, disdain, and violence.”

There is certainly a need to regulate some speech online. But Phelps-Roper’s story is a warning of all we stand to lose if free expression on the internet comes under threat.

The opinions expressed in this article do not necessarily reflect the views of the Charles Koch Foundation, which encourages the expression of diverse viewpoints within a culture of civil discourse and mutual respect.

