Jonathan Rauch explains why the internet is so hostile to the truth, and what we can do to change that.
- Disruptive technologies tend to regress humanity back to our default mode: deeply ingrained tribalism.
- Rather than using the internet to communicate, many people use it to display their colors or group affinity, like tribespeople wearing face paint. Fake news spreads faster than truth in these tribal environments.
- How can we solve this problem without censorship? Platforms like Facebook and Google are tilting the playing field to be more pro-truth by asking people to stop, think, and take responsibility.
Researchers use algorithms to find the longest straight-line distances on land and at sea
- What links a small town in Portugal and a huge port city in China?
- The answer may surprise even inhabitants of both places: the world's longest straight line over land.
- That line and its maritime equivalent were determined not by exploration but by calculation.
Why do some smart folk spout such bad ideas? Marilynne Robinson says it's because we teach them "higher twaddle." She's right, but the situation is worse than she fears.
You mad, bro? The way that Facebook (and Twitter) manipulates your brain should be the very thing that outrages us the most.
Social media has been, without a doubt, one of the biggest explosions in connectivity in human history. That's the good part. The bad part is that the people inside these companies have deliberately engineered an addictive cycle for users. You're already familiar with it: post content, receive rewards (likes, comments, etc.). It's the unpredictable timing of those rewards that forms the habit, and the reason most moderately heavy social media users check their apps or newsfeeds some 10 to 50 times a day. Compounding the problem, these algorithms have been tuned to show you more and more outrageous content, which genuinely depletes your ability to be outraged by things in real life (for instance, a sexual predator for a President). Molly Crockett posits that we should all be aware of the dangers of these algorithms... and that we might have to start using them a lot less if we want a normal society back.
Users don't need better media literacy to beat fake news; what we need is for social media to be frank about its commercial interests.
Wikipedia has come a long, long way. Back when teachers and educational institutions were banning it as an information source for students, did anyone think that by 2017 "the encyclopedia that anyone can edit" would gain global trust? Wikipedia had a rough start and some very public embarrassments, explains Katherine Maher, the executive director at the Wikimedia Foundation, but it has been a process of constant self-improvement. Maher attributes its success to the Wikimedia community, whose members are doggedly committed to accuracy and genuinely thankful to find errors — both factual and systemic ones — so they can evolve away from them. So what has Wikimedia gotten right that social platforms like Facebook haven't yet? "The whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed," says Maher. The current public discourse focuses on the age-old problem of fake news rather than the root cause: the commercial interests that create a space where misinformation doesn't just thrive — it's rewarded. Why doesn't Facebook provide transparency and context for its algorithms? An answer to "why am I seeing this news?" could allow users to make good decisions based on where that information comes from and what its motive is. "We [at Wikimedia] hold ourselves up to scrutiny because we think that scrutiny and transparency creates accountability and that accountability creates trust," says Maher.