Your future happiness and success will depend on wielding a double-edged sword: embracing new technology to stay connected, and being smart enough to unplug at the right time.
There is a psychological self-deception called the end-of-history illusion: the feeling that, no matter where you are in the evolution of technology, your own time seems incredibly advanced. However, Adam Alter reminds us that the trajectory of progress keeps rising, and what we think is cutting-edge now—Snapchat, Facebook, the iPhone 8, the iPhone 12—will seem laughably primitive in ten years. It's what we'll have in this new world that concerns Alter. He cites experts who predict that most of us will own VR goggles within the next five years, and if the success of clickbait and its irresistible effect on our psychology is any indication, the fully immersive alternative realities of VR will shake the foundations of our minds, relationships, and attention spans (which are already kaput).

As we're lured into a life on the digital plain by corporations—who make money from every second of our attention they can capture—virtual reality may threaten reality itself. Those of us who have known a life without it will have a slight advantage in managing its control over our behavior, but Alter raises concerns for children, who won't come to this technology pre-equipped and skeptical enough to see the intentions behind such lures—and about what might be lost if we don't know how to disconnect. Adam Alter is the author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked.
A college course on how to recognize "bullshit" addresses fake news, memes, clickbaiting and misleading advertising.
Taking a course with the word "bullshit" in its title is a cynical student's dream that University of Washington professors Carl Bergstrom and Jevin West are making a reality. Their 10-week seminar, enticingly titled "Calling Bullshit in the Age of Big Data," begins in March.
The course is a perfect match for our fact-challenged times, where charges of "fake news" and "alternative facts" have become part of common public discourse. If you can't take the course in person, you can follow it online, as its syllabus, readings and recordings of lectures will be available to the general public.
The course's synopsis puts it succinctly: "Our world is saturated with bullshit. Learn to detect and defuse it."
The course looks to teach students key skills for judging information. The specific ways in which those taking the course will benefit are outlined in the syllabus:
Remain vigilant for bullshit contaminating your information diet.
Recognize said bullshit whenever and wherever you encounter it.
Figure out for yourself precisely why a particular bit of bullshit is bullshit.
Provide a statistician or fellow scientist with a technical explanation of why a claim is bullshit.
Provide your crystals-and-homeopathy aunt or casually racist uncle with an accessible and persuasive explanation of why a claim is bullshit.
It's hard not to agree with such objectives.
The course will consider a number of case studies that range from a story on food stamp fraud by Fox News to viral social media memes, clickbaiting and misleading advertising.
The inspiration for the course came to the professors from reviewing scientific articles over a number of years. They noticed a trend of statistical methods better suited to small samples being applied to big data sets of millions or billions of examples, which can make correlations appear that aren't really there.
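To make that concrete, here is a minimal, hypothetical sketch in Python (not taken from the course materials): at big-data scale, a correlation that is practically zero still sails past a standard significance test, so an analysis tuned for small samples appears to confirm a pattern of no real consequence.

    # Hypothetical illustration (not from the course): with millions of
    # observations, a negligible correlation looks "statistically significant".
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 1_000_000                            # a big-data-sized sample
    x = rng.normal(size=n)
    y = 0.005 * x + rng.normal(size=n)       # true correlation is only ~0.005

    r, p = pearsonr(x, y)
    print(f"r = {r:.4f}, p = {p:.1e}")       # e.g. r ~ 0.005, p well below 0.001
    # The p-value screams "significant," yet the relationship is far too weak
    # to matter; small-sample intuitions mislead badly at this scale.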
Another disturbing trend West noticed was the "overfitting" of data by machine-learning algorithms: models matched so perfectly to a particular data set that they fail to generalize beyond it.
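Overfitting is easy to demonstrate with a toy model. In this hypothetical Python sketch (again, not West's actual example), a ninth-degree polynomial threads through ten noisy training points almost exactly, yet predicts fresh data worse than a simpler cubic fit:

    # Hypothetical illustration of overfitting.
    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=10)
    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)      # the true underlying pattern

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train error {train_err:.4f}, "
              f"test error {test_err:.4f}")
    # The degree-9 fit has near-zero training error because it memorizes the
    # noise, but it swings wildly between points, so its error on new data
    # is larger than the simpler degree-3 model's.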
The course is not geared towards a particular political ideology.
"We simply want to help people of all political perspectives resist bullshit, because we are confident that together all of us can make better collective decisions if we know how to evaluate the information that comes our way," explain the authors on the course's website.
You can access the course's materials on its public website.
Why do people believe fake news? It's not because it gets shared all over Facebook; it's because they don't trust mainstream news. And Snopes agrees with them.
Remember when we told you about Google's plan to shut down fake news sites and how Facebook helped spread fake news by not filtering its content? According to myth-busting news site Snopes, fake news might not be the real problem at all.
In an interview with Backchannel, Snopes Editor-in-Chief Brooke Binkowski doesn't place the blame for the spread of false news on social media or search sites: she puts it on the mainstream media. “The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer,” Backchannel reports.
That faith has been lost as internet news sites have grown at the expense of traditional news media. “As the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly,” Backchannel reports. It's referring to widespread budget cuts in print and digital news media across the country, from media giants like Hearst and Salon to local papers. These budget cuts reduce not only the number of reporters on staff, but also the editors, fact checkers and other staffers who can help catch mistakes. Binkowski, who is an award-winning journalist, puts it this way: “When you're on your fifth story of the day and there's no editor because the editor's been fired and there's no fact checker so you have to Google it yourself and you don't have access to any academic journals or anything like that, you will screw stories up.”
Snopes founder David Mikkelson confirms that, explaining that “the fictions and fabrications that comprise fake news are but a subset of the larger bad news phenomenon, which also encompasses many forms of shoddy, un-researched, error-filled, and deliberately misleading reporting that do a disservice to everyone.”
While the mainstream media does print corrections when it finds errors, those corrections are often small and unpublicized. That is a perfect storm for breeding mistrust in consumers, and it is why over 60 percent of Americans don't trust the mainstream media, according to polling company Gallup.
So what's the solution to rebuilding trust in news? “The solution to this problem isn't less content; it's better curation,” Queens College professor Brian Hughes told CNN. He explains:
In the 1950s, the FCC regulated the television industry with a program it called the "Fairness Doctrine." The thinking went like this: With only three networks to choose from, viewers needed reliably balanced news and opinion. So, if a television station aired one perspective on a controversial topic, it was obliged to air an opposing view. As a country, we should look at the possibility of adopting a digital equivalent to the Fairness Doctrine.
Until that happens, we'll just have to be savvy media consumers and learn to spot fake news. Thankfully, according to Binkowski, it's really easy to spot. “Honestly, most of the fake news is incredibly easy to debunk because it's such obvious bullshit,” she says. “A site will have something buried somewhere on it that says, 'This is intended to be satire. Don't sue us,'” Backchannel reports. Snopes offers a full guide to spotting fake news. Generally speaking, fake news consists of “fabricated stories set loose via social media with clickbait headlines and tantalizing images, intended for no purpose other than to fool readers and generate advertising revenues for their publishers,” according to Snopes.
Another way to fact-check, according to Backchannel commenter John E Branch, Jr., is to not rely on Google alone. “Try a different search engine now and then. Though Google's results are to some degree tailored to each user, its basic ranking of results may be broadly the same for everyone, meaning that if 50 journalists all check Google they're probably all getting the same view of the subject. I often check Bing, and sometimes check Yahoo, for the sake of a possibly different perspective.”
Columbia University professor Tim Wu gave us additional insights in a video interview.
Truthfully, our best defense against fake news for the foreseeable future is to simply call it out. Or, as Binkowski told Backchannel, “The only thing that we are doing that we can really keep doing is: just say the truth again and again and again and again and again, and just keep doing it.”
Don't believe everything Google tells you. Facebook and Google are taking measures against fake news, but it's becoming clear that it's a symptom of a bigger problem.
Watch out, fake news sites: Google and Facebook are cutting your purse strings.
In the wake of fake 2016 election results becoming the #1 Google search result, the company has finally decided to take action against fake news sites. According to a statement reported by Reuters, Google “will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher's content, or the primary purpose of the web property” going forward. Furthermore, “Facebook updated its advertising policies to spell out that its ban on deceptive and misleading content applies to fake news,” Reuters reports.
Facebook has a big problem with fake news sites, as The Guardian highlights. Fake news sites thrive on views, and Facebook is the main source of those views. As the owner of one site told the paper, “If your goal is straight page views [sic] then Facebook is the best investment of your time. It does drive more ‘clicks’ than posting on other platforms with an equal amount of followers.” And drive clicks it does: sites like these eschew normal journalistic standards for virality. “The ability to write a clickbaity headline, toss in some user-generated video found on YouTube, and dash off a 400-word post in 15 to 30 minutes is a skill they don’t teach in journalism school,” one site owner told The Guardian. That format, while lacking in journalistic integrity, is ideal for viral sharing. Sites investigated by The Guardian averaged about 700,000 visitors a day just through Facebook. By leveraging the platform and sharing their stories with other like-minded Facebook pages, they easily earn 7 or 8 million visitors per day, every single one of whom generates ad income when they click on the site. Another site owner told The Guardian that those views “generate income of between $10,000 and $40,000 a month from banks of ads” alone.
So Facebook’s inability to differentiate between informative and misleading content allows misinformation to spread. But Facebook’s approach to misleading content is particularly problematic. After backlash earlier this year over how employees covering Trending Topics filtered out conservative news sites, Facebook has been loath to use any kind of content filtering tools for fear of appearing biased, according to The Verge: “Facebook had developed a tool to identify fake news on its platform, but chose not to deploy it for fear it would disproportionately affect conservative websites and cause more right-wing backlash.”
By waiting in fear rather than searching for a better solution, Facebook allowed vast amounts of misleading information about the 2016 presidential election and its candidates to spread unchecked. For a platform that 63% of its users consider their primary news source, according to Nieman Lab, that’s irresponsible. “Given the viral aspects of fake news, social networks and search engines were gamed by partisan bad actors intending to influence the outcome of the race,” as The Verge puts it.
Why does this matter? Because false vote totals, showing Trump winning the popular vote, became the first Google search result for the 2016 presidential election results.
Those numbers are wrong. Clinton won 47.68% of the popular vote with 62,414,338 votes; Trump won 46.79% with 61,252,488 votes, according to CNN, The New York Times, Fox News, Politico, the BBC, and every other news site that reported on this election. No news organization in the country reported the false numbers, yet they are the first Google search result. I found them on Twitter, and despite hundreds of responses pointing out that they were wrong, people are still sharing and believing them.
The Washington Post made a concerted effort to track these numbers down. They report that the false statistics came from a site called 70news.com, which got them from a tweet, which got them from a site called USA Supreme. The Post did more digging and found:
The source behind the "USA Supreme" website isn't clear. It looks an awful lot like Prntly, a made-up news website we looked at earlier this year. Founded by a former convict named Alex Portelli, Prntly is part of the broad diaspora of websites that takes news about American politics, frames it in a pro-Trump way (often at the expense of accuracy) and then peppers the page with ads.
The Post continues, reporting that Portelli “denied involvement in USA Supreme, suggesting that it was the work of a group of young people in Macedonia.” There may be some truth to that, as The Guardian reports. Yet 70news.com now has a header at the top of its post on the election numbers. That header attributes the false statistics to “twitter posts” and insists that “the popular vote number still need [sic] to be updated in Wikipedia or MSM [mainstream] media – which may take another few days because the liberals are still reeling and recovering from Trump-shock victory.”
That’s wrong: the mainstream media has been regularly updating election results as they came in, as The Post points out. “‘Alex’ is wrong; he’s hesitating to change the numbers,” they conclude.
He is: Wikipedia’s page on the election results shows the updated totals.
While Google and Facebook are taking steps to diminish the spread of this kind of misinformation, they are far from stemming it. As Reuters points out, “Google's move does not address the issue of fake news or hoaxes appearing in Google search results… Rather, the change is aimed at assuring that publishers on the network are legitimate and eliminating financial incentives that appear to have driven the production of much fake news.”
Facebook’s hesitancy to own its role as a major news platform is a problem, too. The issue has caused great internal debate about the future of the platform. Right now, according to The Guardian, Facebook is using a combination of algorithmic and live screening to weed out content, much like Google. But until both platforms embrace their roles as news distributors, meaningful change will remain a long way off. Until then, we’ll all just have to double-check our Google results.
Columbia professor Tim Wu recently came to the Big Think studio to talk about the ways media companies harvest our attention and re-sell it to advertisers. There's a gaping disparity between the way social media sites and media companies define "audience engagement" and the way good journalists do; to the former it means clicks that can be monetized, to the latter it means stoking deep thought that will gestate and be reborn as comments, emails, and conversation. As one recent study shows, 59% of links shared on Twitter are never even clicked. Reading headlines is a national pastime; reading articles is a rarity. And the worse of the two is incentivized.
Fake news is just an unpleasant symptom of a larger problem that we really don't know how to fix.
Columbia professor Tim Wu came to the Big Think studio to talk about clickbait. What happened next will shock you.
Tim Wu, author of The Attention Merchants, is in a unique position to talk about the emergence of clickbait and viral culture – he’s spent the last few years researching what gets our attention.
BuzzFeed is synonymous with this species of content, so it’s not surprising to learn that the first instance of a viral story originated with Jonah Peretti, the co-founder of BuzzFeed and The Huffington Post. Wu tells the funny story of Peretti's first viral escapade, and notes that the media hasn’t stopped trying to catch that lightning-in-a-bottle success since. Almost every entity in the online news and entertainment world today is in a permanent battle to master the art and science of virality, to harvest the most attention. Why? For its re-sale value to advertisers.
Wu acknowledges that it has not been a particularly positive influence on our culture, but it’s fascinating when it’s viewed as a project to understand people. In Wu’s research, he came to the realization that although the cry of ‘Clickbait!’ has angrily amplified over the last five years, the phenomenon is anything but new. The penny newspaper headlines of the 1830s were capitalizing on suicides, divorces, and crazy events to hook people in. 'If it bleeds, it leads' has been the news media's slogan for over a century.
Clickbait is not a new cultural phenomenon, but an ancient biological one: what makes us click is exactly what made us tick in prehistoric times. Sex, food, death, violence, women in distress, kittens (don’t scoff, falling for cute things is a serious biological necessity – our ancestors had to be neurologically addicted to their babies to ensure they’d protect them): all of this appeals to the basest level of our humanity. We’re hardwired to react to things that alarm or entice us from a survival point of view. "These modern day clickbait things are getting at very basic principles of our neurobiology that are there for a reason," Wu explains.
The intentions behind clickbait and viral content can and should be demonized; it’s a manipulative way for media platforms to capitalize on the public's attention. We are less and less able to spend our attention thoughtfully because our biological and psychological buttons are constantly being pressed. In an ethically perfect world, media companies wouldn't do it; but in a capitalist system, to refrain is to die. These organizations put in the research and were smart enough to figure out exactly what makes us click. They’ve laid the bait, but only you are in control of your reactions. Not clicking is the best way to send feedback.
Tim Wu’s most recent book is The Attention Merchants: The Epic Scramble to Get Inside Our Heads.