Wikipedia Beats Fake News Every Day, So Why Can't Facebook?

Users don't need better media literacy to beat fake news. We need social media to be frank about its commercial interests.

Katherine Maher: So Wikipedia—it is a fascinating thing today, that Wikipedia is as trusted as it is and is used by as many people as it is. To think that an encyclopedia that anyone could edit could possibly grow to be a resource that gets about a billion visits every single month from all over the globe!

There's a story of constant self-improvement in there, a story of really grappling with our flaws and with our faults along the way. Wikipedia wasn't trusted when it first started because it was the encyclopedia anyone could edit. And then we had a series of fairly high-profile mistakes, hoaxes, screw-ups. The thing that makes Wikipedia work is that the Wikipedia community is so committed to getting it right that when errors happen their first response is to fix them. 

So there's this story from 2005, when the scientific journal Nature did a sample study of how accurate Wikipedia articles are. The study found that, on average, the articles they surveyed were about as accurate as a comparable sample from Encyclopedia Britannica. And the story goes that when this was published, the Wikipedia community went to Jimmy Wales (the founder) and asked if he could put them in touch with the editors at Nature so they could find out where the errors were and fix them. I think that is such a classic example of Wikipedians: when they find out that something's wrong, their first response is not to get defensive. Their first response is generally delight, because it means there's something to improve.

So, the whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed. The very first press freedom law was passed more than 250 years ago in Sweden, and I bet the very first conversation about misinformation happened within that first year. Yellow journalism, misinformation, propaganda—however you want to name it—there are already names for fake news and there are ways of dealing with them, established ways of dealing with them. So for Wikipedians we look at this and we say this has been a problem since time immemorial, and for the last 16 years we've been working on sorting fact from fiction and doing a pretty good job of it.
So to have this conversation, I think, is a little bit disingenuous, because it treats fake news as though it's the problem instead of actually looking at some of the commercial and other factors that come into play: the distribution of information, the obfuscation of its sources, the consolidation of the media landscape, the commercial pressures that major platform distributors have created for publishers, the lack of transparency in the way information is presented through algorithmic feeds, and why these platforms have an interest in operating this way.

It's really not about the quality of information itself; bad information has always existed. I would certainly say the media landscape and media literacy are important, and this is a call to arms for us to be more engaged in education around civics and media literacy. But I also think it's an opportunity to have hard conversations with platforms that present information in algorithmically curated feeds about why they aren't presenting the critical context that would allow people to make good decisions about where that information comes from.

One thing that we would point to within Wikipedia is that all of the information presented can be scrutinized: you can understand where it comes from, you can check the citations, and you can also check almost every single edit that has ever been made to the projects in their 16 years of existence, and those are more than three billion edits. All of that is available to the public. We hold ourselves up to scrutiny because we think that scrutiny and transparency create accountability, and that accountability creates trust.
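That transparency is not just a principle: the revision history Maher describes is exposed through the public MediaWiki API, so anyone can see who changed an article, when, and why. The following minimal sketch, added here as an illustration rather than part of Maher's remarks, assumes Python with the requests library; the endpoint and query parameters are the standard MediaWiki API, and the article title is just an example.

```python
# Minimal sketch: fetch the most recent revisions of a Wikipedia article
# (author, timestamp, edit summary) via the public MediaWiki API.
import requests

def recent_revisions(title: str, limit: int = 5) -> list[dict]:
    """Return the most recent revisions for the given article title."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "timestamp|user|comment",
            "rvlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    # Results are keyed by internal page ID; we asked for a single title.
    return next(iter(pages.values())).get("revisions", [])

if __name__ == "__main__":
    for rev in recent_revisions("Wikipedia"):
        print(rev["timestamp"], rev["user"], rev["comment"])
```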

When I'm looking at a Facebook feed, I don't know why information is being presented to me. Is it because it's timely? Is it because it's relevant? Is it because it's trending, popular, important?
All of that is stripped of context, so it's hard for me to assess: is it good information that I should make decisions on? Is it bad information that I should ignore? And then think about all the other heuristics people use to interpret information: Where does it come from? Who wrote it? When was it published? All of that is obscured in the product design as well.

So the conversation that we're having, I think, is a bit disingenuous because it doesn't actually address some of the underlying platform questions and commercial-pressure questions. It tends to focus on… I'm not even sure! It tends to focus on educating the end consumer, which is good. We believe in an educated user, but we also have a lot of confidence that if you give users the information they need, they can make those decisions and determinations.

 

Wikipedia has come a long, long way. Back when teachers and educational institutions were banning it as an information source for students, did anyone think that by 2017 "the encyclopedia that anyone can edit" would gain global trust? Wikipedia had a rough start and some very public embarrassments, explains Katherine Maher, executive director of the Wikimedia Foundation, but it has been a process of constant self-improvement. Maher attributes its success to the Wikimedia community, which is doggedly committed to accuracy and genuinely thankful to find errors — both factual and systemic — so it can evolve away from them. So what has Wikimedia gotten right that social platforms like Facebook haven't yet? "The whole conversation around fake news is a bit perplexing to Wikimedians, because bad information has always existed," says Maher. The current public discourse focuses on the age-old problem of fake news rather than the root cause: the commercial interests that create a space where misinformation doesn't just thrive — it's rewarded. Why doesn't Facebook provide transparency and context for its algorithms? An explanation for 'why am I seeing this news?' could allow users to make good decisions based on where that information comes from and what its motive is. "We [at Wikimedia] hold ourselves up to scrutiny because we think that scrutiny and transparency creates accountability and that accountability creates trust," says Maher.

Are we really addicted to technology?

Fear that new technologies are addictive isn't a modern phenomenon.

Credit: Rodion Kutsaev via Unsplash

This article was originally published on our sister site, Freethink, which has partnered with the Build for Tomorrow podcast to go inside new episodes each month. Subscribe here to learn more about the crazy, curious things from history that shaped us, and how we can shape the future.

In many ways, technology has made our lives better. Through smartphones, apps, and social media platforms we can now work more efficiently and connect in ways that would have been unimaginable just decades ago.

But as we've grown to rely on technology for many of our professional and personal needs, most of us are asking tough questions about the role technology plays in our own lives. Are we becoming so dependent on technology that it's actually harming us?

In the latest episode of Build for Tomorrow, host and Entrepreneur Editor-in-Chief Jason Feifer takes on the thorny question: is technology addictive?

Popularizing medical language

What makes something addictive rather than just engaging? It's a meaningful distinction because if technology is addictive, the next question could be: are the creators of popular digital technologies, like smartphones and social media apps, intentionally creating things that are addictive? If so, should they be held responsible?

To answer those questions, we've first got to agree on a definition of "addiction." As it turns out, that's not quite as easy as it sounds.

"Over the past few decades, a lot of effort has gone into destigmatizing conversations about mental health, which of course is a very good thing," Feifer explains. It also means that medical language has entered into our vernacular —we're now more comfortable using clinical words outside of a specific diagnosis.

"We've all got that one friend who says, 'Oh, I'm a little bit OCD' or that friend who says, 'Oh, this is my big PTSD moment,'" Liam Satchell, a lecturer in psychology at the University of Winchester and guest on the podcast, says. He's concerned about how the word "addiction" gets tossed around by people with no background in mental health. An increased concern surrounding "tech addiction" isn't actually being driven by concern among psychiatric professionals, he says.

"These sorts of concerns about things like internet use or social media use haven't come from the psychiatric community as much," Satchell says. "They've come from people who are interested in technology first."

The casual use of medical language can lead to confusion about what is actually a mental health concern. We need a reliable standard for recognizing, discussing, and ultimately treating psychological conditions.

"If we don't have a good definition of what we're talking about, then we can't properly help people," Satchell says. That's why, according to Satchell, the psychiatric definition of addiction being based around experiencing distress or significant family, social, or occupational disruption needs to be included in any definition of addiction we may use.

Too much reading causes... heat rashes?

But as Feifer points out in his podcast, neither the popularization of medical language nor the fear that new technologies are addictive is a totally modern phenomenon.

Take, for instance, the concept of "reading mania."

In the 18th century, an author named J. G. Heinzmann claimed that people who read too many novels could experience something called "reading mania." This condition, Heinzmann explained, could cause many symptoms, including "weakening of the eyes, heat rashes, gout, arthritis, hemorrhoids, asthma, apoplexy, pulmonary disease, indigestion, blocking of the bowels, nervous disorder, migraines, epilepsy, hypochondria, and melancholy."

"That is all very specific! But really, even the term 'reading mania' is medical," Feifer says.

"Manic episodes are not a joke, folks. But this didn't stop people a century later from applying the same term to wristwatches."

Indeed, an 1889 piece in the Newcastle Weekly Courant declared: "The watch mania, as it is called, is certainly excessive; indeed it becomes rabid."

Similar concerns have echoed throughout history about the radio, telephone, TV, and video games.

"It may sound comical in our modern context, but back then, when those new technologies were the latest distraction, they were probably really engaging. People spent too much time doing them," Feifer says. "And what can we say about that now, having seen it play out over and over and over again? We can say it's common. It's a common behavior. Doesn't mean it's the healthiest one. It's just not a medical problem."

Few today would argue that novels are, in and of themselves, addictive — regardless of how voraciously you may have consumed your last favorite novel. So, what happened? Were these things ever addictive — and if not, what was happening in these moments of concern?

There's a risk of pathologizing normal behavior, says Joel Billieux, professor of clinical psychology and psychological assessment at the University of Lausanne in Switzerland, and guest on the podcast. He's on a mission to understand how we can suss out what is truly addictive behavior versus what is normal behavior that we're calling addictive.

For Billieux and other professionals, this isn't just a rhetorical game. He uses the example of gaming addiction, which has come under increased scrutiny over the past half-decade. The language used around the subject of gaming addiction will determine how behaviors of potential patients are analyzed — and ultimately what treatment is recommended.

"For a lot of people you can realize that the gaming is actually a coping (mechanism for) social anxiety or trauma or depression," says Billieux.

"Those cases, of course, you will not necessarily target gaming per se. You will target what caused depression. And then as a result, If you succeed, gaming will diminish."

In some instances, a person might legitimately be addicted to gaming or technology, and require the corresponding treatment — but that treatment might be the wrong answer for another person.

"None of this is to discount that for some people, technology is a factor in a mental health problem," says Feifer.

"I am also not discounting that individual people can use technology such as smartphones or social media to a degree where it has a genuine negative impact on their lives. But the point here to understand is that people are complicated, our relationship with new technology is complicated, and addiction is complicated — and our efforts to simplify very complex things, and make generalizations across broad portions of the population, can lead to real harm."

Behavioral addiction is a notoriously complex thing for professionals to diagnose — even more so since the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the book professionals use to classify mental disorders, introduced a new idea about addiction in 2013.

"The DSM-5 grouped substance addiction with gambling addiction — this is the first time that substance addiction was directly categorized with any kind of behavioral addiction," Feifer says.

"And then, the DSM-5 went a tiny bit further — and proposed that other potentially addictive behaviors require further study."

This might not sound like that big of a deal to laypeople, but its effect was massive in medicine.

"Researchers started launching studies — not to see if a behavior like social media use can be addictive, but rather, to start with the assumption that social media use is addictive, and then to see how many people have the addiction," says Feifer.

Learned helplessness

The assumption that a lot of us are addicted to technology may itself be harming us by undermining our autonomy and belief that we have agency to create change in our own lives. That's what Nir Eyal, author of the books Hooked and Indistractable, calls 'learned helplessness.'

"The price of living in a world with so many good things in it is that sometimes we have to learn these new skills, these new behaviors to moderate our use," Eyal says. "One surefire way to not do anything is to believe you are powerless. That's what learned helplessness is all about."

So if it's not an addiction that most of us are experiencing when we check our phones 90 times a day or are wondering about what our followers are saying on Twitter — then what is it?

"A choice, a willful choice, and perhaps some people would not agree or would criticize your choices. But I think we cannot consider that as something that is pathological in the clinical sense," says Billieux.

Of course, for some people technology can be addictive.

"If something is genuinely interfering with your social or occupational life, and you have no ability to control it, then please seek help," says Feifer.

But for the vast majority of people, thinking about our use of technology as a choice — albeit not always a healthy one — can be the first step to overcoming unwanted habits.

For more, be sure to check out the Build for Tomorrow episode here.
