Who's in the Video
Dr. Molly Crockett is an Assistant Professor of Psychology at Yale University and a Distinguished Research Fellow at the Oxford Centre for Neuroethics. Prior to joining Yale, Dr. Crockett was[…]

Social media has been, without a doubt, one of the biggest explosions in connectivity in human history. That’s the good part. The bad part is that the people behind these companies have manipulated users into an addictive cycle. You’re already familiar with it: post content, receive rewards (likes, comments, etc.). But the unpredictable staggering of those rewards is the habit-forming part, and the reason most moderately heavy social media users check their apps or newsfeeds some 10 to 50 times a day. To add to the problem, these algorithms have been tuned to show you more and more outrageous content, which genuinely depletes your ability to be outraged by things in real life (for instance, a sexual predator as President). Molly Crockett posits that we should all be aware of the dangers of these algorithms… and that we might have to start using them a lot less if we want a normal society back.

Molly Crockett: We live in a world now where there is an economic model that strongly incentivizes online platforms like Facebook, Google, and Twitter to capture as much of our attention as possible. The way to do that is to promote the content that is the most engaging. And what is the most engaging? Moral content. A recent study out of NYU characterized the language in tweets.

And this study, led by William Brady, Jay Van Bavel, and colleagues, found that each “moral-emotional” word in a tweet increased the likelihood of a retweet by 20 percent.
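To put that figure in perspective, here is a rough back-of-the-envelope sketch, assuming the 20 percent effect compounds multiplicatively per word (as a per-word multiplier in a regression model would imply):

$$\text{expected retweets} \propto 1.2^{k}, \qquad k = \text{number of moral-emotional words}$$

So a tweet with three such words would be expected to spread roughly $1.2^{3} \approx 1.73$ times as often as an otherwise similar tweet with none.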

So content that has moral and emotional qualities, of which moral outrage is the poster child, is the most engaging content. And that means the algorithms that select what is shown to all of us in our newsfeeds are selecting for the content that’s going to be the most engaging, because that draws the most attention, and attention creates the most revenue through ad sales for these companies.

And so this creates an information ecosystem where there’s a kind of natural selection process going on, and the most outrageous content is going to rise to the top.

So this suggests that the kinds of stories that we read in our newsfeeds online might be artificially inflated in terms of how much outrage they provoke. And I’ve actually found some data that speaks to this. 

So there was a study a few years ago by Will Hofmann and Linda Skitka and colleagues at the University of Chicago, in which they tracked people’s daily experiences with moral and immoral events in their everyday lives. They pinged people’s smartphones a few times a day and had them rate whether, in the past hour, they had had any moral or immoral experiences. And they had people rate how emotional they felt, how outraged they felt, how happy, and so on.

This data became publicly available, and so I was able to reanalyze it, because the researchers had also asked participants: “Where did you learn about these immoral events? Online, in person, on TV, radio, newspaper, et cetera?”

And so I was able to analyze this data and show that immoral events that people learn about online trigger more outrage than immoral events that they learn about in person or through traditional forms of media like TV, newspaper and radio. 

So this supports the idea that the algorithms driving the presentation of news content online are selecting content that provokes higher levels of outrage than what we see in traditional news, and, of course, than what we encounter in our daily lives.

It’s an open question what the long-term consequences of this constant exposure to outrage-triggering material will be. One possibility that has been floated in the news recently is outrage fatigue, and I think many of us can relate to the idea: if you’re constantly feeling outraged, it’s exhausting. And there may be a limit to how much outrage we’re able to experience day to day.

That is potentially harmful in terms of the long-term social consequences, because if we are feeling outraged about relatively minor things and that’s depleting some kind of reserve, it may mean that we’re not able to feel outraged about the things that really matter.

On the other hand, there’s also research on aggression showing that if you give people the opportunity to vent their aggressive feelings about something that’s made them mad, that can actually increase the likelihood of future aggression.

So in the literature on anger and outrage there are two possibilities, one being this long-term depletion, “outrage fatigue.”

The other being a kind of sensitization. And we need to do more research to figure out which of those might be operating in the context of online outrage expression. It may be different for different people. 

Social media is very unlikely to go away, because it taps into the things that we find most rewarding: connection with others, expressing our moral values, sharing those moral values with others, building our reputation. And, of course, what makes social media so compelling, and so addictive even, is the fact that these platforms are tapping into very ancient neural circuitry that we know is involved in reward processing and habit formation.

One intriguing possibility arises because these apps are designed to be so streamlined. You have stimuli, icons that are instantly recognizable and familiar to all of us who use these apps, and very effortless responses: to like, to share, to retweet.

And then we get feedback, and that feedback, in the form of likes and shares, is delivered at unpredictable times. And unpredictable rewards, we know from decades of research in neuroscience, are the fastest way to establish a habit.

Now, a habit is a behavior that is expressed without regard to its long-term consequences, like someone who’s habitually reaching for the bag of potato chips when they’re not hungry. They’re eating those potato chips not to achieve some goal of satisfying their hunger, but just mindlessly.

We might be mindlessly expressing moral emotions like outrage without actually experiencing them strongly, or without wanting to express them as broadly as we do on social media.

And so I think it’s really worth considering and having a conversation about whether we want some of our strongest moral emotions, which are so core to who we are, to be under the control of algorithms whose main purpose is to generate advertising revenue for big tech companies.

