Traditional News Was Devilish – But It Was a Devil We Knew
Did decentralizing top-down media control bring us any closer to the truth-topia we were hoping for?
Oliver Luckett is a technology entrepreneur and currently CEO of ReviloPark, a global culture accelerator. He has served as Head of Innovation at the Walt Disney Company and was co-founder of the video sharing platform Revver. As CEO of theAudience, Luckett worked with clients such as Obama for America, Coachella, Pixar, and American Express. He has helped manage the digital personae of hundreds of celebrities and brands, including Star Wars, The Chainsmokers, Steve Aoki, and Toy Story 3.
His book is The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life.
Oliver Luckett: For all of history up until this point, our communication structures – especially mass media systems – have for the most part been very top down, controlled by a few people who had distribution control. If you look back, the church was really the first broadcast network. The church built out a very defined architecture of communication coming from a centralized place, where very few people could have the word of God come down to them and the ability to transcribe it – this was at a point when literacy was very rare, so you had only a few people who were literate and could transcribe this holy word – and then they would distribute it out to a local market, where you had a big, impressive building with lots of iconography and lots of beautiful images inside, usually the tallest building in town. They would ring the bell in the steeple at 8:00 a.m., and we would all congregate for mass and listen to one message, one incontrovertible truth, from one source.
And that's not too dissimilar from television architecture. You have a group of people in suits in New York or Los Angeles deciding what's going to be on television, and then they distribute it to those towers. And at 7:00 p.m. prime time we aggregate around a television that's been brought into our home, and we watch this one incontrovertible truth, this signal from a top-down approach. Then the Internet started enabling people – first with the underlying network architecture of TCP/IP, which allowed us to transcend time and distance and allowed any node on the network to contribute to the system. Then we started seeing things like video sharing and photo sharing that allowed us all to become publishers. And then we had this layer of social that is redefining everything, where every single person is now a contributing node on the network, and every person that is part of it uses emotions and memes and content to distribute things in a horizontal fashion. What that's doing is destroying the ability to discern what is authentic and what is not; what's real and what's fake; what's commercial and what's non-commercial; what's sponsored and what's non-sponsored; what's a good idea versus a bad idea. And so when we exist in this freeform society where every node on the network can contribute something to the network, and it has no checks and balances, if you will – there is no top-down authority editing it or deciding what's real or not – then suddenly it becomes every node on the network's responsibility. We're all having to learn a pattern of behavior in which we're all responsible for the propagation of this content.
Because the one interesting rule is that it's very difficult to make a mass media statement in a cellular, holonic, nodal network structure, because you have to get a bunch of people to agree to share it and agree to propagate it. No big media company can buy its way into the system anymore. But at the same time, if everybody is on a level playing field, then people who are hackers, or who scam the system, or who arbitrage the new ad features that emerge, have an advantage over some of the tried-and-true institutions – especially in the context of the fake news that's been happening a lot. Consider the idea that a bunch of Macedonian teenagers arbitraging ad dollars on Facebook's system can put hundreds of fake news stories into a network where people believe them, while The New York Times or The Wall Street Journal refuses to pay for play in a system like Facebook. And so you have this great imbalance, because we haven't yet taught ourselves how to discern what's real and what's fake, how to look at sources, and how to see them for what they are.
And that's also because of a lack of transparency. We're living in systems now that control our ability to disseminate information, and we have no transparency whatsoever when it comes to algorithms. Why does this content behave this way? Why does a post reach 500 people one day and only five the next? Until we have visibility into that system and into these algorithms, we're going to be at a bit of a loss, grabbing in the dark trying to make sense of this new communication architecture.
To paraphrase Aldous Huxley: the only part of the universe you can possibly control is yourself. And now more than ever that kind of social responsibility is upon each one of us. Because now we're in a holonic system, a cellular holonic system, in which we are all responsible for the propagation of information – the right information, positive information, negative information, fake information. We're all responsible for it because we're all part of a metabolic factor inside the system. The sharing of these memes is propelled by emotions; human emotions are the metabolism of this system. And you can tap into human curiosity and the whole range of emotions, from anger to laughter to desire, and we're seeing that played out in real time right now.
The church was the first news magnate, says tech entrepreneur Oliver Luckett. It was a top-down centralized network where just a few people could access the word of God and disseminate that information to the masses. Centuries later another top-down network emerged: print and later television media boomed and set the agenda, relaying information with authority from just a handful of networks. Today's communication system has a different architecture: it's holonic, says Luckett, or horizontally disseminated – everyone with a signal and a device can produce, contribute, dispute, and report news. So in which system are we better off? Are we any closer to the truth now than we were then? Luckett contends that human emotion has become the editor-in-chief of today's news, and that steering us away from misinformation, fake news, and opinion masquerading as fact will require a concerted effort in social responsibility – something that we may not be capable of en masse. Oliver Luckett and Michael J. Casey's book is The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life.
A recent study used fMRI to compare the brains of psychopathic criminals with a group of 100 well-functioning individuals, finding striking similarities.
- The study used psychological inventories to assess a group of violent criminals and healthy volunteers for psychopathy, and then examined how their brains responded to watching violent movie scenes.
- The fMRI results showed that the brains of healthy subjects who scored high in psychopathic traits reacted similarly to those of the psychopathic criminal group. Both of these groups also showed atrophy in brain regions involved in regulating emotion.
- The study adds complexity to common conceptions of what differentiates a psychopath from a "healthy" individual.
When considering what precisely makes someone a psychopath, the lines can be blurry.
Psychological research has shown that many people in society have some degree of malevolent personality traits, such as those described by the "dark triad": narcissism (entitled self-importance), Machiavellianism (strategic exploitation and deceit), and psychopathy (callousness and cynicism). But while people who score high in these traits are more likely to end up in prison, most of them are well functioning and don't engage in extreme antisocial behaviors.
Now, a new study published in Cerebral Cortex found that the brains of psychopathic criminals are structurally and functionally similar to many well-functioning, non-criminal individuals with psychopathic traits. The results suggest that psychopathy isn't a binary classification, but rather a "constellation" of personality traits that "vary in the non-incarcerated population with normal range of social functioning."
Assessing your inner psychopath
The researchers used functional magnetic resonance imaging (fMRI) to compare the brains of violent psychopathic criminals to those of healthy volunteers. All participants were assessed for psychopathy through commonly used inventories: the Hare Psychopathy Checklist-Revised and the Levenson Self-Report Psychopathy Scale.
Experimental design and sample stimuli. The subjects viewed a compilation of 137 movie clips with variable violent and nonviolent content. (Credit: Nummenmaa et al.)
Both groups watched a 26-minute-long medley of movie scenes that were selected to portray a "large variability of social and emotional content." Some scenes depicted intense violence. As participants watched the medley, fMRI recorded how various regions of their brains responded to the content.
The goal was to see whether the brains of psychopathic criminals looked and reacted similarly to the brains of healthy subjects who scored high in psychopathic traits. The results showed similar reactions: When both groups viewed violent scenes, the fMRI revealed strong reactions in the orbitofrontal cortex and anterior insula, brain regions associated with regulating emotion.
These similarities manifested as a positive association: The more psychopathic traits a healthy subject displayed, the more their brains responded like the criminal group. What's more, the fMRI revealed a similar association between psychopathic traits and brain structure, with those scoring high in psychopathy showing lower gray matter density in the orbitofrontal cortex and anterior insula.
There were some key differences between the groups, however. The researchers noted that the structural abnormalities in the healthy sample were mainly associated with primary psychopathic traits, which are: inclination to lie, lack of remorse, and callousness. Meanwhile, the functional responses of the healthy subjects were associated with secondary psychopathic traits: impulsivity, short temper, and low tolerance for frustration.
Overall, the study further illuminates some of the biological drivers of psychopathy, and it adds nuance to common conceptions of the differences between psychopathy and being "healthy."
Why do some psychopaths become criminals?
The million-dollar question remains unanswered: Why do some psychopaths end up in prison, while others (or, people who score high in psychopathic traits) lead well-functioning lives? The researchers couldn't give a definitive answer, but they did note that psychopathic criminals had lower connectivity within "key nodes of the social and emotional brain networks, including amygdala, insula, thalamus, and frontal pole."
"Thus, even though there are parallels in the regional responsiveness of the brain's affective circuit in the convicted psychopaths and well-functioning subjects with psychopathic traits, it is likely that the disrupted functional connectivity of this network is specific to criminal psychopathy."
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Like the coronavirus, engaging with misinformation can inadvertently cause it to spread.
- Social media has a business model based on getting users to spend increasing amounts of time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of the parallel pandemics — the biological contagion of COVID-19 and the social contagion of misinformation, aiding the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online comes from strangers. They will often belong to one of two groups: true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both of these groups use trolling tactics – that is, seeking to trigger people to respond in anger – thus helping them reach new audiences and game the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to the business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been nothing short of the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." Even creepier, he wrote, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
- Self-awareness — namely, our capacity to think about our thoughts — is central to how we perceive the world.
- Without self-awareness, education, literature, and other human endeavors would not be possible.
- Striving toward greater self-awareness is the spiritual goal of many religions and philosophies.
The following is an excerpt from Dr. Stephen Fleming's forthcoming book Know Thyself. It is reprinted with permission from the author.
I now run a neuroscience lab dedicated to the study of self-awareness at University College London. My team is one of several working within the Wellcome Centre for Human Neuroimaging, located in an elegant town house in Queen Square in London. The basement of our building houses large machines for brain imaging, and each group in the Centre uses this technology to study how different aspects of the mind and brain work: how we see, hear, remember, speak, make decisions, and so on. The students and postdocs in my lab focus on the brain's capacity for self-awareness. I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
Until quite recently, however, this all seemed like nonsense. As the nineteenth-century French philosopher Auguste Comte put it: "The thinking individual cannot cut himself in two — one of the parts reasoning, while the other is looking on. Since in this case the organ observed and the observing organ are identical, how could any observation be made?" In other words, how can the same brain turn its thoughts upon itself?
Comte's argument chimed with scientific thinking at the time. After the Enlightenment dawned on Europe, an increasingly popular view was that self-awareness was special and not something that could be studied using the tools of science. Western philosophers were instead using self-reflection as a philosophical tool, much as mathematicians use algebra in the pursuit of new mathematical truths. René Descartes relied on self-reflection in this way to reach his famous conclusion, "I think, therefore I am," noting along the way that "I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind." Descartes proposed that a central soul was the seat of thought and reason, commanding our bodies to act on our behalf. The soul could not be split in two — it just was. Self-awareness was therefore mysterious and indefinable, and off-limits to science.
We now know that the premise of Comte's worry is false. The human brain is not a single, indivisible organ. Instead, the brain is made up of billions of small components — neurons — that each crackle with electrical activity and participate in a wiring diagram of mind-boggling complexity. Out of the interactions among these cells, our entire mental life — our thoughts and feelings, hopes and dreams — flickers in and out of existence. But rather than being a meaningless tangle of connections with no discernible structure, this wiring diagram also has a broader architecture that divides the brain into distinct regions, each engaged in specialized computations. Just as a map of a city need not include individual houses to be useful, we can obtain a rough overview of how different areas of the human brain are working together at the scale of regions rather than individual brain cells. Some areas of the cortex are closer to the inputs (such as the eyes) and others are further up the processing chain. For instance, some regions are primarily involved in seeing (the visual cortex, at the back of the brain), others in processing sounds (the auditory cortex), while others are involved in storing and retrieving memories (such as the hippocampus).
In a reply to Comte in 1865, the British philosopher John Stuart Mill anticipated the idea that self-awareness might also depend on the interaction of processes operating within a single brain and was thus a legitimate target of scientific study. Now, thanks to the advent of powerful brain imaging technologies such as functional magnetic resonance imaging (fMRI), we know that when we self-reflect, particular brain networks indeed crackle into life and that damage or disease to these same networks can lead to devastating impairments of self-awareness.
I often think that if we were not so thoroughly familiar with our own capacity for self-awareness, we would be gobsmacked that the brain is able to pull off this marvelous conjuring trick. Imagine for a moment that you are a scientist on a mission to study new life-forms found on a distant planet. Biologists back on Earth are clamoring to know what they're made of and what makes them tick. But no one suggests just asking them! And yet a Martian landing on Earth, after learning a bit of English or Spanish or French, could do just that. The Martians might be stunned to find that we can already tell them something about what it is like to remember, dream, laugh, cry, or feel elated or regretful — all by virtue of being self-aware.
But self-awareness did not just evolve to allow us to tell each other (and potential Martian visitors) about our thoughts and feelings. Instead, being self-aware is central to how we experience the world. We not only perceive our surroundings; we can also reflect on the beauty of a sunset, wonder whether our vision is blurred, and ask whether our senses are being fooled by illusions or magic tricks. We not only make decisions about whether to take a new job or whom to marry; we can also reflect on whether we made a good or bad choice. We not only recall childhood memories; we can also question whether these memories might be mistaken.
Self-awareness also enables us to understand that other people have minds like ours. Being self-aware allows me to ask, "How does this seem to me?" and, equally importantly, "How will this seem to someone else?" Literary novels would become meaningless if we lost the ability to think about the minds of others and compare their experiences to our own. Without self-awareness, there would be no organized education. We would not know who needs to learn or whether we have the capacity to teach them. The writer Vladimir Nabokov elegantly captured this idea that self-awareness is a catalyst for human flourishing:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follow s— the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
In light of these myriad benefits, it's not surprising that cultivating accurate self-awareness has long been considered a wise and noble goal. In Plato's dialogue Charmides, Socrates has just returned from fighting in the Peloponnesian War. On his way home, he asks a local boy, Charmides, if he has worked out the meaning of sophrosyne — the Greek word for temperance or moderation, and the essence of a life well lived. After a long debate, the boy's cousin Critias suggests that the key to sophrosyne is simple: self-awareness. Socrates sums up his argument: "Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know…No other person will be able to do this."
Likewise, the ancient Greeks were urged to "know thyself" by a prominent inscription carved into the stone of the Temple of Apollo at Delphi. For them, self-awareness was a work in progress and something to be striven toward. This view persisted into medieval religious traditions: for instance, the Italian priest and philosopher Saint Thomas Aquinas suggested that while God knows Himself by default, we need to put in time and effort to know our own minds. Aquinas and his monks spent long hours engaged in silent contemplation. They believed that only by participating in concerted self-reflection could they ascend toward the image of God.
A similar notion of striving toward self-awareness is seen in Eastern traditions such as Buddhism. The spiritual goal of enlightenment is to dissolve the ego, allowing more transparent and direct knowledge of our minds acting in the here and now. The founder of Chinese Taoism, Lao Tzu, captured this idea that gaining self-awareness is one of the highest pursuits when he wrote, "To know that one does not know is best; Not to know but to believe that one knows is a disease."
Today, there is a plethora of websites, blogs, and self-help books that encourage us to "find ourselves" and become more self-aware. The sentiment is well meant. But while we are often urged to have better self-awareness, little attention is paid to how self-awareness actually works. I find this odd. It would be strange to encourage people to fix their cars without knowing how the engine worked, or to go to the gym without knowing which muscles to exercise. This book aims to fill this gap. I don't pretend to give pithy advice or quotes to put on a poster. Instead, I aim to provide a guide to the building blocks of self-awareness, drawing on the latest research from psychology, computer science, and neuroscience. By understanding how self-awareness works, I aim to put us in a position to answer the Athenian call to use it better.