Who's in the Video
Katherine Maher is the executive director of the Wikimedia Foundation, a position she has held since June 2016. Previously she was chief communications officer. In addition to a background in […]

Wikipedia has come a long, long way. Back when teachers and educational institutions were banning it as an information source for students, did anyone think that by 2017 “the encyclopedia that anyone can edit” would gain global trust? Wikipedia had a rough start and some very public embarrassments, explains Katherine Maher, the executive director of the Wikimedia Foundation, but its history has been one of constant self-improvement. Maher attributes its success to the Wikimedia community, whose members are doggedly committed to accuracy and genuinely thankful to find errors, both factual and systemic, so they can evolve away from them. So what has Wikimedia gotten right that social platforms like Facebook haven’t yet? “The whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed,” says Maher. The current public discourse focuses on the age-old problem of fake news rather than the root cause: the commercial interests that create a space where misinformation doesn’t just thrive, it’s rewarded. Why doesn’t Facebook provide transparency and context for its algorithms? An answer to ‘why am I seeing this news?’ would let users make good decisions based on where that information comes from and what motives lie behind it. “We [at Wikimedia] hold ourselves up to scrutiny because we think that scrutiny and transparency create accountability and that accountability creates trust,” says Maher.

Katherine Maher: So Wikipedia—it is a fascinating thing today, that Wikipedia is as trusted as it is and is used by as many people as it is. To think that an encyclopedia that anyone could edit could possibly grow to be a resource that gets about a billion visits every single month from all over the globe!

There's a story of constant self-improvement in there, a story of really grappling with our flaws and with our faults along the way. Wikipedia wasn't trusted when it first started because it was the encyclopedia anyone could edit. And then we had a series of fairly high-profile mistakes, hoaxes, screw-ups. The thing that makes Wikipedia work is that the Wikipedia community is so committed to getting it right that when errors happen their first response is to fix them. 

So there's this story from 2005, when the scientific journal Nature did a sample study of how accurate Wikipedia articles are. The study found that, on average, the articles it surveyed were about as accurate as a comparable sample from Encyclopedia Britannica. The story goes that when this was published, the Wikipedia community went to Jimmy Wales (the founder) and asked if he could put them in touch with the editors at Nature to find out where the errors were so they could fix them. And I think that is such a classic example of Wikipedians: when they find out that something's wrong, their first response is not to get defensive. Their first response is generally delight, because it means that there's something to improve.

So, the whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed. The very first press freedom law was passed more than 250 years ago in Sweden, and I bet the very first conversation about misinformation happened within that first year. Yellow journalism, misinformation, propaganda: however you want to name it, fake news already has names, and there are established ways of dealing with it. So Wikipedians look at this and say: this has been a problem since time immemorial, and for the last 16 years we've been working on sorting fact from fiction and doing a pretty good job of it.
So to have this conversation is, I think, a little bit disingenuous, because it treats fake news as though it's the problem instead of looking at the commercial and other factors at play: the distribution of information, the obfuscation of its sources, the consolidation of the media landscape, the commercial pressures on publishers created by major platform distributors, the lack of transparency in the way information is presented through algorithmic feeds, and why these platforms have an interest in operating this way.

It's really not about the quality of information itself; bad information has always existed. I would certainly say the media landscape and media literacy are important, and this is a call to arms for us to be more engaged in education around civics and media literacy. But I also think it's an opportunity to have hard conversations with platforms that present information within algorithmically curated feeds about why they aren't presenting the critical context that allows people to make good decisions about where that information comes from.

One thing we would point to within Wikipedia is that all of the information presented can be scrutinized: you can understand where it comes from, you can check the citations, and you can also check almost every single edit that has ever been made to the projects in their 16 years of existence, more than three billion edits in all. All of that is available to the public. We hold ourselves up to scrutiny because we think that scrutiny and transparency create accountability and that accountability creates trust.

When I'm looking at a Facebook feed, I don't know why information is being presented to me. Is it because it's timely? Is it because it's relevant? Is it because it's trending, popular, important? All of that is stripped out of context, so it's hard for me to assess: is this good information that I should make decisions on? Is it bad information that I should ignore? And then there are all of the other heuristics that people use to interpret information: Where does it come from? Who wrote it? When was it published? All of that is obscured in the product design as well.

So the conversation we're having is, I think, a bit disingenuous, because it doesn't actually address some of the underlying questions about platforms and commercial pressures. It tends to focus on educating the end consumer, which is good. We believe in an educated user, but we also have a lot of confidence that if you give users the information they need, they can make those decisions and determinations for themselves.
