The problem with social media is not its content but its distortion of reality

Social media distorts the reality of the public sphere.
Key Takeaways
  • Social media has damaged society. If you ask most people to pinpoint the problem, they will focus on social media content.
  • The real problem is that social media distorts our perceptions of the public sphere. By targeting each of us with droves of content meant to resonate specifically with us, it causes us to build a false mental model of society.
  • We should push for transparency in targeting. Platforms need to clearly disclose the demographic characteristics of the exposed population when targeting us with any piece of narrowly distributed content.

Social media has profoundly damaged society. If I had written those words a decade ago, few people would have agreed, but the belief is widely shared now. Just last year, the Aspen Institute commissioned a six-month study that drew even darker conclusions, finding that the misinformation and disinformation propagated by social media create “a chain reaction of harm,” acting as a “force multiplier for exacerbating our worst problems as a society.”

If you ask most people to pinpoint the problem, they will focus on social media content, telling you we need to crack down on offensive and divisive material. They will rattle off a list of content maladies that cause societal problems, including hate and harassment, misinformation and disinformation, and the torrent of outright lies that conflict with scientific, medical, and historical facts. If they are professionals in the field, they might tell you that America needs Section 230 reform.

Those professionals are referring to Section 230 of the Communications Decency Act of 1996, which provides immunity to social media platforms regarding content posted by third parties. Some argue the regulation protects free speech on the internet, and it should not be weakened. Others counter that Section 230 shields social media companies from assuming responsibility for the damaging content on their platforms, and it should be eliminated. 

A worrying distraction

Personally, I worry that focusing on content alone distracts from the core problem of social media.

I say that because offensive and divisive content has always existed. Yes, social media reduces belief in science and medicine, weakens trust in longstanding institutions, drives acceptance of ridiculous conspiracy theories, and damages faith in democracy. But no matter how far back you go in human history, you will find the same hate, the same disinformation, and the same deluge of deliberate lies. Awful content has existed my entire life, yet it did not polarize society the way we see it doing today. Something is different now, but it is not the content.

So what is the problem with social media? 

Having spent much of my career studying how software systems can amplify human abilities and enhance human intelligence, I am convinced that social media does the opposite. It distorts our collective intelligence and degrades our ability to make good decisions about our future. It does this by bending our perceptions of the public sphere.

Building mental models

We humans are decision-making machines. We spend our lives capturing and storing information about our world and using that information to build detailed mental models. We start from the moment we are born. We sense and explore our surroundings, and we test and model our experiences. We keep building these models until we can accurately predict how our own actions, and the actions of others, will impact our future.

As an infant you surely dropped a toy and watched it fall to the ground. Do that many times with the same result and your brain generalizes the phenomenon. You build a mental model of gravity. Experience your first helium balloon, which defies gravity, and your brain has to adjust, accounting for rare objects that behave in different ways. Your mental model gradually becomes more sophisticated, predicting that most things will fall to the ground and a few will not. 

This is how we come to understand the complexities of our world and make good decisions throughout our lives. It is a process that goes back hundreds of millions of years and is shared among countless species, from birds and fish to primates like us. We call it intelligence.

For intelligence to work properly, we humans need to perform three basic steps. First, we perceive our world. Next, we generalize our experiences. Finally, we build mental models to help us navigate our future.

The problem is that social media platforms have inserted themselves into this critical process, changing what it means to perceive our world and generalize our experiences. This distortion drives each of us to make significant errors when we build mental models.  

Social media and the public sphere

No, I am not talking about how we model the physical world of gravity. I am talking about how we model the social world of people, from our local community to our global society. Political scientists refer to this as the public sphere and define it as the arena in which individuals come together to share issues of importance, exchanging opinions through discussion and deliberation. It is within the public sphere that we develop an understanding of ourselves as a society — our collective wisdom.

The public sphere, of course, does not represent a singular view. It encompasses the whole spectrum of views, spanning a range of cultural and political perspectives from mainstream to fringe. That spectrum represents our common reality. It embodies our collective sense of what views and values our society holds at each level, from the hyperlocal to the more distant. By forming an accurate model of society, we the people can make good decisions about our future.

Social media has distorted the public sphere beyond recognition. Each of us now has a deeply flawed mental model of our own communities. This damages our collective wisdom, but it is not the content itself that is most responsible. We must instead blame the machinery of distribution.  

A dangerous middleman

We humans evolved to trust that our daily experiences build a real representation of our world. If most objects we encounter fall to the ground, we generalize and build a mental model of gravity. When a few objects instead float toward the sky, we model those as exceptions — rare events that represent a tiny slice of the world.

But social media has inserted itself between each of us and our daily experiences, moderating and manipulating the information we receive about our society. The platforms do this by profiling us over time and using those profiles to target us with selective content — custom-curated news, ads, and posts that do not represent our society as a whole. And this happens without us fully realizing it.

As a result, we all feel like we are experiencing the public sphere every day, when really each of us is trapped in a distorted representation of the world. This causes us to incorrectly generalize our world and build flawed mental models of our own society. Thus social media degrades our collective intelligence and damages our ability to make good decisions about our future. 

A world full of helium

Even worse, the warped public sphere we each inhabit is not random. It is custom-curated to target us with the information that will most likely resonate. This gives most of us an overinflated impression of the prevalence of our own views and values, and an underdeveloped sense of the prevalence of conflicting views and values. This dynamic amplifies extreme perspectives and drives polarization, and above all, it destroys our collective wisdom as a society.

I am of course not saying we should all have the same views and values. I am saying that we all need to be exposed to a real representation of how views and values are distributed across our society. That is collective wisdom. Social media has shattered the public sphere into a patchwork of smaller and smaller echo chambers, while obscuring the fact that these silos even exist. 

As a result, if I happen to have a fringe perspective on a particular topic, I may not realize that the vast majority of people find my view to be extreme, offensive, or just plain absurd. I will now build a flawed mental model of my world. I will incorrectly assess how my views fit into the public sphere. 

This would be like an evil scientist raising a group of infants in a twisted world where most objects they encounter are filled with helium and only a few fall to the ground. Those infants would generalize their curated experience, because that is what our brains are designed to do. They would each develop a profoundly flawed model of reality.

How can we fix social media? 

This brings me back to my core assertion — that the biggest problem with social media is not the content itself, but the machinery of targeted distribution. It is this machinery that so greatly distorts our perception of our world, destroying our ability to generalize and to build accurate mental models. And without good mental models, we cannot make intelligent decisions about our future.  

We now live in a world where the public sphere is not an accurate representation of our society, but is manipulated by platforms that pull the strings for financial benefit. To fix this, we have two options: We can either cut the strings by banning profiling and targeting practices, or we can make the strings visible so we at least know when we are experiencing distorted views of our world.

An outright ban on profiling and targeting would help restore the public sphere to a far less distorted representation of society. Unfortunately, the economy of social media is built on profiling and targeting. These practices form the core of most platforms’ advertising models. As such, major technology corporations would vigorously fight such restrictions. 

On the other hand, we can make the strings visible without disrupting business models, but we need to do it in an aggressive way. For example, we could require that every piece of content on social media be clearly labeled in ways that allow us to understand how it fits into the public sphere. Is it shared among large segments of the population? Or is it fringe content that is targeted and shared among very narrow groups? Providing such context would help restore our understanding of the public sphere.

Building better mechanisms

Currently, platforms like Facebook and Twitter allow users to see primitive targeting information about advertisements. To find this information, users need to click multiple times, which they rarely do. On Twitter, you must click a tiny “more” button, and then a button called “why this ad?” At that point you get unsatisfying details: “You might be seeing this ad because Company X wants to reach people who are located here: United States. Twitter also personalizes ads using information received from partners and your app and website visits.”

Does this help users understand how the targeted ad fits into the public sphere? I do not believe so. To make things worse, social media platforms provide no contextual information on content that arrives through sharing algorithms or as part of a targeted news feed. And for many users, it is the content they receive through news and sharing that is the most impactful. 

To solve this, we should push for transparency in targeting. This means requiring platforms to clearly disclose the demographic characteristics of the exposed population when targeting us with any piece of content that is not distributed broadly. That way, if I am targeted with news, ads, messaging, or other content that is going to a narrow slice of the population, I can at least recognize that I am in an artificial echo chamber created by sharing algorithms, targeting algorithms, and other social media practices.
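
To make that disclosure concrete, here is a minimal sketch, in Python, of what such a record might look like if it shipped alongside each piece of narrowly distributed content. Everything in it is an assumption for illustration: the type, the field names, and the example criteria are hypothetical, not any platform's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class TargetingDisclosure:
    """Hypothetical disclosure record attached to a piece of narrowly
    distributed content. All fields are illustrative assumptions,
    not a real platform schema."""
    content_id: str
    # Estimated share of the general public eligible to receive this
    # content through all distribution mechanisms combined.
    reach_percent: float
    # Demographic criteria used to select the exposed population,
    # e.g. {"location": "United States", "age_range": "18-34"}.
    targeting_criteria: dict = field(default_factory=dict)
    # How the content reaches users, e.g. ["targeted_ad",
    # "sharing_algorithm", "curated_news_feed"].
    channels: list = field(default_factory=list)
```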

This information could show up in a simple visual format that highlights how large or narrow a slice of the public each piece of content targets. Users should not have to click to get this information. It should appear whenever they engage with the content in any way, even if they simply pause to look at it or let their cursor hover. It could be as simple as a pie chart showing what percentage of the general public might receive the content through the mechanisms that deploy it.

If a piece of content I receive is being shown to a 2 percent slice of the general public, that should give me a different mental model of how it fits into society than content being shared among a 40 percent slice. And if a user clicks on the graphic revealing the 2 percent targeting, they should get detailed demographics of how that slice is defined. The goal is not to suppress content. We want to make the machinery of distribution as visible as possible, enabling each of us to appreciate when we are being siloed into a narrow echo chamber, and when we are not.
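
As a sketch of how that always-visible label could be generated, the short function below turns a disclosed reach percentage into the kind of summary described above. The 25 percent threshold separating broad from narrow distribution is an arbitrary assumption chosen only to make the example concrete, not a proposed standard.

```python
def reach_label(reach_percent: float, broad_threshold: float = 25.0) -> str:
    """Render an always-visible distribution label for a piece of content.
    The threshold dividing broad from narrow reach is an arbitrary
    illustrative choice."""
    scope = ("broadly distributed" if reach_percent >= broad_threshold
             else "narrowly targeted")
    return f"Shown to about {reach_percent:.0f}% of the general public ({scope})"

# The two cases discussed in the text:
print(reach_label(2.0))   # Shown to about 2% of the general public (narrowly targeted)
print(reach_label(40.0))  # Shown to about 40% of the general public (broadly distributed)
```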

Won’t fool myself again

Providing transparency in targeting allows each of us to build a more accurate mental model of our society. As a user, I might still resonate with fringe content on certain topics, but I would have a clearer sense of how it fits into the public sphere. And I would not fool myself into thinking that the idea that popped into my head last night about lizard people running my favorite fast-food chain is widely accepted and shared among the general public. It is not.

In other words, social media platforms might still send me lots of helium balloons instead of solid objects. And I might appreciate getting many of those balloons. But with transparency in targeting, I will not be fooled into thinking that the whole world is filled with helium. Or lizard people.

