Wikipedia Beats Fake News Every Day, So Why Can't Facebook?

Users don't need better media literacy to beat fake news. They need social media platforms to be frank about their commercial interests.

Technology & Innovation

Wikipedia has come a long, long way. Back when teachers and educational institutions were banning it as an information source for students, did anyone imagine that by 2017 "the encyclopedia that anyone can edit" would have earned global trust? Wikipedia had a rough start and some very public embarrassments, explains Katherine Maher, executive director of the Wikimedia Foundation, but its history has been one of constant self-improvement. Maher attributes its success to the Wikimedia community, whose members are doggedly committed to accuracy and genuinely thankful to find errors, both factual and systemic, so the project can evolve past them.

So what has Wikimedia gotten right that social platforms like Facebook haven't yet? "The whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed," says Maher. The current public discourse treats fake news itself as the problem, even though misinformation is as old as information; the root cause is the commercial interests that create a space where misinformation doesn't just thrive, it's rewarded.

Why doesn't Facebook provide transparency and context for its algorithms? An answer to "why am I seeing this news?" would let users make informed decisions about where that information comes from and what its motive is. "We [at Wikimedia] hold ourselves up to scrutiny because we think that scrutiny and transparency creates accountability and that accountability creates trust," says Maher.