How Facebook Decided to Delete the Profile of One San Bernardino Shooter
Technology companies are under pressure to remove violent, terrorist content from their sites. Who should decide what gets removed?
A day after Tashfeen Malik and Syed Farook allegedly murdered 14 people in San Bernardino, California, Facebook removed a profile page belonging to one of the suspects. Malik, posting under a moniker, used the page to pledge her support to ISIS around the time of the shooting. According to The Wall Street Journal, a spokesman for Facebook said the page violated Facebook’s community standards, which, among other things, prohibit posts supporting terrorism. The page’s removal highlights the long-running debate over online freedom and government surveillance and illustrates the pressure many technology companies are under to monitor and respond to violent content posted on their sites.
President Barack Obama, in his address to the nation Sunday evening, called on Silicon Valley to help in the fight against terrorism. "I will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice," Obama said. That position, in which technology companies operate in concert with the government, has some folks worried. “When it comes to terrorist content, it’s certainly a tricky position for companies, and one that I don’t envy,” said Jillian York, the Electronic Frontier Foundation’s director of international freedom of expression, in an email to The Wall Street Journal. “Still, I worry that giving more power to companies — which are undemocratic by nature — to regulate speech is dangerous.”
Additionally, Reuters reports that the White House will ask tech companies to restrict the use of social media when it is employed for violent purposes. "That is a deeply concerning line that we believe has to be addressed. There are cases where we believe that individuals should not have access to social media for that purpose," an official speaking on background said.
In a previous article, I discussed how Google manages requests from the public to delete links to content from its index. Under what is known as “the right to be forgotten,” Google determines on a case-by-case basis what information gets unlinked. In fact, the Court of Justice of the European Union says specifically that Google must consider “the type of information in question, its sensitivity for the individual’s private life, and the interest of the public in having access to that information. The role the person requesting the deletion plays in public life might also be relevant.”
As I mentioned in that article, this means Google bears the responsibility for determining whether a deletion request is valid and should be honored. If Google concludes that a link-deletion request is not in the best interest of the public’s access to information, it can deny the request. Google is essentially serving as the arbiter of online speech.
These two processes — one in which the government cedes control to a private entity to unlink content from its search engine, and one in which the government asks a private entity to remove content that encourages terrorist activity — seem related. In the first example, by ceding the link-removal decision to Google, the Court of Justice of the European Union blurs the line between what a court of law should decide and what a private corporation should be allowed to do. While I’m not opposed to being forgotten, I’m not sure I’m comfortable with some group of people at Google making that determination.
I’m equally troubled by the second example. We are now asking Twitter, Facebook, and others to identify and remove content that has “violent ends.” It’s not that I want that content to stay up. I don’t. But relegating that decision to a private company, just like ceding the right-to-be-forgotten process to Google, doesn’t sit right with me.
If we are concerned that a government can abuse online freedoms like speech, then we should be equally worried about arbitrary decisions made by private entities to remove terrorist speech from social media. To be clear, I am not arguing that the content should not be removed. What I am arguing is that its removal should be a considered proposition, not a determination left to a private entity. Restricting speech is a serious thing, and because we’ve surrendered control over our data and privacy to corporate interests, we sometimes assume their interests and ours are the same.