3 cognitive biases perpetuating racism at work — and how to overcome them
Books about race and anti-racism have dominated bestseller lists in the past few months, bringing to prominence authors including Ibram Kendi, Ijeoma Oluo, Reni Eddo-Lodge, and Robin DiAngelo.
Sales of these books increased by up to 6,800% in the aftermath of global protests against racial injustice, according to Forbes, showing the role such work plays in raising awareness and leading to a cultural reckoning.
While readers learned about allyship, companies also showed their strengthened resolve to tackle racial inequality, making public statements on their social media accounts and releasing detailed action plans with their commitments to change. It is still too early to say what effect these individual and collective actions will have in the long term, and whether reading books on anti-racism and making public statements will result in a more just society where everyone has access to the same opportunities and is treated fairly.
But what we do know is that lasting, positive change is difficult to achieve without deliberate, sustained effort informed by reliable data that is free from bias.
And it's important not to underestimate the role cognitive bias can play in undermining these efforts - and to stay vigilant in spotting and mitigating it.
What is cognitive bias?
Human brains are hardwired to take shortcuts when processing information to make decisions, resulting in "systematic thinking errors", or unconscious bias.
When it comes to influencing our decisions and judgments about people, cognitive or unconscious bias is widely recognized to play a role in unequal outcomes for people of colour.
This helps to explain why unconscious bias training is often the first resort for companies looking to build more inclusive workplaces, even though its outcomes are highly variable and, at times, show little measurable improvement.
These three cognitive biases are likely to be at play and could influence our decisions:
1. Moral licensing
This is when people derive such confidence from past moral behaviour that they are more likely to behave in immoral or unethical ways later.
In a 2010 study, researchers argued that moral self-licensing occurs "because good deeds make people feel secure in their moral self-regard", and future problematic behaviour does not evoke the same feelings of negative self-judgment that it otherwise would.
Participants who had voiced support for US President Barack Obama just before the 2008 election were less likely, when presented with a hypothetical slate of candidates for a police force job, to select a Black candidate for the role.
As the study authors hypothesize, "presumably, the act of expressing support for a Black presidential candidate made them feel that they no longer needed to prove their lack of prejudice". Other research shows that implicit and explicit attitudes toward African Americans did not substantively change during the period of the Obama presidency.
Moral licensing may help explain the limitations of corporate unconscious bias training in creating an anti-racist work environment, an effect which has already been observed when it comes to tackling gender inequality.
Iris Bohnet, a behavioural economist, suggests that "diversity programs aimed at influencing the worst offenders might backfire… Training designed to raise awareness about gender and race inequality may end up making gender and race more salient and thereby highlighting differences."
2. Affinity bias
This is our tendency to get along with others who are like us, and to evaluate them more positively than those who are different. Our personal beliefs, assumptions, preferences, and lack of understanding about people who are not like us may lead to repeatedly favouring 'similar-to-me' individuals.
In organizations, this often affects who gets hired, who gets promoted, and who gets picked for opportunities to manage people or projects.
Due to affinity bias, employees who look like those already in leadership are given more opportunities to develop their careers, resulting in a lack of BIPOC representation in senior leadership roles.
Affinity bias is particularly insidious in recruitment processes, where it presents as a lack of "culture fit", an ambiguous evaluation that should be avoided as an explanation for declining to hire a candidate.
Many hiring managers have a hard time articulating their organization's specific culture, or explaining what exactly they mean when they say "culture fit", leading to the term being misused to hire candidates that managers feel they will personally relate to.
3. Confirmation bias
This is the tendency to seek out, favour, and use information that confirms what you already believe. The other side of this is that people tend to ignore new information that goes against their preconceived notions, leading to poor decision-making.
It can hinder efforts to create and nurture an antiracist workplace culture, and also contributes to the limited effectiveness of unconscious bias training, together with moral licensing and affinity bias.
Many people's perceptions of others with different identities, and with whom they have limited interaction, are strongly influenced by media depictions and longstanding cultural stereotypes.
For example, a 2017 study published in the American Psychological Association's Journal of Personality and Social Psychology found that people tended to perceive young Black men as taller, heavier, and more muscular than similarly sized white men, and hence more physically threatening.
Persistent notions about female or BIPOC candidates being inherently less qualified than white male candidates can undermine efforts to increase diversity, because such candidates are more likely to be negatively evaluated and ultimately not selected.
Confirmation bias also helps to explain why Asian Americans are underrepresented in leadership positions despite outperforming other minorities and white people in the US on education, employment and income. Long-held stereotypes lead to Asian Americans being seen as modest, deferential, and low in social skills, while at the same time penalizing those who adopt more dominant behaviours.
How to overcome unconscious bias
1. Change systems, not individuals
The main reason unconscious bias training programmes fail to create lasting change is that they focus on changing individual behaviours while leaving largely untouched the systems that enabled those behaviours to thrive.
Individual biases are difficult to shift in the long term, and the academic evidence suggests that knowing about bias does not result in changes in behaviour by managers and employees.
The whole social environment - rather than the individual - needs to be addressed. This can be done by implementing company policies and programmes designed to mitigate bias through all stages of the employee's journey, from selection processes to performance ratings and promotion decisions.
These structures, which should be audited regularly, are important in ensuring that any individual's own bias is limited and does not influence decisions at an organizational level.
Such structural initiatives may end up influencing social norms within organizations, so behavioural change happens on a larger group level, leading to improved compliance from individuals as they gain a new understanding of socially-acceptable behaviour.
2. Slow down and act deliberately
Bias is most likely to affect decision-making when decisions are made quickly, according to Stanford University psychology professor Jennifer Eberhardt, who studies implicit bias in police departments.
We are less likely to act on bias when we slow down and control our thoughts, consciously overcoming first impressions and the biases that come with them.
This is where unconscious bias training may have an impact, because self-awareness and education are key to shifting mindsets. Such mindset shifts are needed for people of colour as well, as research shows that they are equally subject to the unconscious bias provoked by negative stereotypes.
At work, slowing down may take the form of ensuring that one person's biases do not contaminate processes through establishing control mechanisms: ensuring a diversity of feedback givers during recruitment processes, and establishing structured interviews with the same set of defined questions and evaluation criteria for each candidate.
3. Set concrete goals and work towards them
Data is essential to making real progress on diversity goals, and especially important when it comes to mitigating the effects of bias because it provides an objective measure of what has improved – or worsened – over time.
The goals themselves will be specific to each organization's needs and context, shaped by local variables such as countries of operation, company size, business goals, and organizational culture. Whatever those variables are, setting goals and tracking progress in a transparent way drives the environmental change (as opposed to individual change) that is needed for success.
Data is key to buy-in, and companies can increase accountability by collecting and analysing data on diversity over time, comparing the numbers with those at other organizations, and sharing them with key stakeholders internally and externally.
Data collection also helps companies identify roadblocks, and engage with key stakeholders on strategies to address them.
Ever since we've had the technology, we've looked to the stars in search of alien life. It's assumed that we're looking because we want to find other life in the universe, but what if we're looking to make sure there isn't any?
Here's an equation, and a rather distressing one at that: N = R* × fp × ne × fl × fi × fc × L. It's the Drake equation, and it describes the number of alien civilizations in our galaxy with whom we might be able to communicate. Its terms correspond to values such as the fraction of stars with planets, the fraction of planets on which life could emerge, the fraction of planets that can support intelligent life, and so on. Using conservative estimates, the minimum result of this equation is 20. There ought to be 20 intelligent alien civilizations in the Milky Way that we can contact and who can contact us. But there aren't any.
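Since the equation is just a product of factors, readers can play with the numbers themselves. The sketch below is a minimal Python version; the input values are illustrative assumptions chosen to land near the conservative figure of 20 mentioned above, not the estimates Drake or anyone else actually published.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative inputs (assumptions for demonstration only):
n = drake(
    r_star=1.0,         # stars formed per year in the Milky Way
    f_p=0.2,            # fraction of stars with planets
    n_e=1.0,            # habitable planets per star with planets
    f_l=0.1,            # fraction of habitable planets where life emerges
    f_i=0.01,           # fraction of those that evolve intelligence
    f_c=0.1,            # fraction of those that become detectable
    lifetime=1_000_000  # years a civilization stays detectable
)
print(round(n))  # with these inputs, about 20
```

Notice how the result is dominated by L, the lifetime term: if civilizations stay detectable for only a few thousand years rather than a million, N collapses toward zero, which is one reason the equation's output varies so wildly between optimists and pessimists.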
The Drake equation is an example of a broader issue in the scientific community: considering the sheer size of the universe and our knowledge that intelligent life has evolved at least once, there should be evidence for alien life. This is generally referred to as the Fermi paradox, after the physicist Enrico Fermi, who first examined the contradiction between the high probability of alien civilizations and their apparent absence. Fermi summed this up rather succinctly when he asked, "Where is everybody?"
But maybe this was the wrong question. A better question, albeit a more troubling one, might be “What happened to everybody?" Unlike asking where life exists in the universe, there's a clearer potential answer to this question: the Great Filter.
Why the universe is empty
Alien life is likely, but there is none that we can see. Therefore, it could be the case that somewhere along the trajectory of life's development, there is a massive and common challenge that ends alien life before it becomes intelligent enough and widespread enough for us to see—a great filter.
This filter could take many forms. It could be that having a planet in the Goldilocks zone—the narrow band around a star where it is neither too hot nor too cold for life to exist—and having that planet contain organic molecules capable of assembling into life is extremely unlikely. We've observed plenty of planets in the Goldilocks zones of other stars (an estimated 40 billion in the Milky Way), but maybe the conditions still aren't right there for life to exist.
The Great Filter could occur at the very earliest stages of life. If you took high school biology, you might have had the refrain drilled into your head that "mitochondria are the powerhouse of the cell." I certainly did. However, mitochondria were at one point separate bacteria living their own existence. At some point on Earth, a single-celled organism tried to eat one of these bacteria, but instead of being digested, the bacterium teamed up with the cell, producing extra energy that enabled the cell to develop in ways leading to higher forms of life. An event like this might be so unlikely that it's only happened once in the Milky Way.
Or, the filter could be the development of large brains, as we have. After all, we live on a planet full of many creatures, and the kind of intelligence humans have has only occurred once. It may be overwhelmingly likely that living creatures on other planets simply don't need to evolve the energy-demanding neural structures necessary for intelligence.
What if the filter is ahead of us?
These possibilities assume that the Great Filter is behind us—that humanity is a lucky species that overcame a hurdle almost all other life fails to pass. This might not be the case, however; life might evolve to our level all the time but get wiped out by some unknowable catastrophe. Discovering nuclear power is a likely event for any advanced society, but it also has the potential to destroy such a society. Exploiting a planet's resources to build an advanced civilization can also destroy the planet: the current process of climate change serves as an example. Or, it could be something entirely unknown, a major threat that we can't see and won't see until it's too late.
The bleak, counterintuitive suggestion of the Great Filter is that it would be a bad sign for humanity to find alien life, especially alien life with a degree of technological advancement similar to our own. If our galaxy is truly empty and dead, it becomes more likely that we've already passed through the Great Filter. The galaxy could be empty because all other life failed some challenge that humanity passed.
If we find another alien civilization, but not a cosmos teeming with a variety of alien civilizations, the implication is that the Great Filter lies ahead of us. The galaxy should be full of life, but it is not; one other instance of life would suggest that the many other civilizations that should be there were wiped out by some catastrophe that we and our alien counterparts have yet to face.
Fortunately, we haven't found any life. Although it might be lonely, it means humanity's chances at long-term survival are a bit higher than otherwise.
Cross-disciplinary cooperation is needed to save civilization.
- There is a great disconnect between the sciences and the humanities.
- Solutions to most of our real-world problems need both ways of knowing.
- Moving beyond the two-culture divide is an essential step to ensure our project of civilization.
For the past five years, I have run the Institute for Cross-Disciplinary Engagement at Dartmouth, an initiative sponsored by the John Templeton Foundation. Our mission has been to find ways to bring scientists and humanists together, often in public venues or — after Covid-19 — online, to discuss questions that transcend the narrow confines of a single discipline.
It turns out that these questions are at the very center of the much needed and urgent conversation about our collective future. While the complexity of the problems we face asks for a multi-cultural integration of different ways of knowing, the tools at hand are scarce and mostly ineffective. We need to rethink and learn how to collaborate productively across disciplinary cultures.
The danger of hyper-specialization
The explosive expansion of knowledge that started in the mid-1800s led to hyper-specialization inside and outside academia. Even within a single discipline, say philosophy or physics, professionals often don't understand one another. As I wrote here before, "This fragmentation of knowledge inside and outside of academia is the hallmark of our times, an amplification of the clash of the Two Cultures that physicist and novelist C.P. Snow admonished his Cambridge colleagues in 1959." The loss is palpable, intellectually and socially. Knowledge is not amenable to reductionism. Sure, a specialist will make progress in her chosen field, but the tunnel vision of hyper-specialization creates a loss of context: you do the work not knowing how it fits into the bigger picture or, more alarmingly, how it may impact society.
Many of the existential risks we face today — AI and its impact on the workforce, the dangerous loss of privacy due to data mining and sharing, the threat of cyberwarfare, the threat of biowarfare, the threat of global warming, the threat of nuclear terrorism, the threat to our humanity by the development of genetic engineering — are consequences of the growing ease of access to cutting-edge technologies and the irreversible dependence we all have on our gadgets. Technological innovation is seductive: we want to have the latest "smart" phone, 5k TV, and VR goggles because they are objects of desire and social placement.
Are we ready for the genetic revolution?
When the time comes, and experts believe it is coming sooner than we expect or are prepared for, genetic meddling with the human genome may drive social inequality to an unprecedented level, with not just differences in wealth distribution but in what kind of being you become and who retains power. This is the kind of nightmare that Nobel Prize-winning biochemist Jennifer Doudna talked about in a recent Big Think video.
CRISPR 101: Curing Sickle Cell, Growing Organs, Mosquito Makeovers | Jennifer Doudna | Big Think www.youtube.com
At the heart of these advances is the dual-use nature of science, its light and shadow selves. Most technological developments are perceived and sold as spectacular advances that will either alleviate human suffering or bring increasing levels of comfort and accessibility to a growing number of people. Curing diseases is what motivated Doudna and other scientists involved with CRISPR research. But with that also came the potential for altering the genetic makeup of humanity in ways that, again, can be used for good or evil purposes.
This is not a sci-fi movie plot. The main difference between biohacking and nuclear hacking is one of scale. Nuclear technologies require industrial-level infrastructure, which is very costly and demanding. This is why nuclear research and its technological implementation have been mostly relegated to governments. Biohacking can be done in someone's backyard garage with equipment that is not very costly. The Netflix documentary series Unnatural Selection brings this point home in terrifying ways. The essential problem is this: once the genie is out of the bottle, it is virtually impossible to enforce any kind of control. The genie will not be pushed back in.
Cross-disciplinary cooperation is needed to save civilization
What, then, can be done? Such technological challenges go beyond the reach of a single discipline. CRISPR, for example, may be an invention within genetics, but its impact is vast, calling for oversight and ethical safeguards that are far from our current reality. The same goes for global warming, rampant environmental destruction, and growing levels of air pollution and greenhouse gas emissions that are fast re-emerging as we crawl into a post-pandemic era. Instead of learning the lessons from our 18 months of seclusion — that we are vulnerable to nature's powers, that we are co-dependent and globally linked in irreversible ways, that our individual choices affect many more than ourselves — we seem to be bent on decompressing our accumulated urges with impunity.
The experience from our experiment with the Institute for Cross-Disciplinary Engagement has taught us a few lessons that we hope can be extrapolated to the rest of society: (1) that there is huge public interest in this kind of cross-disciplinary conversation between the sciences and the humanities; (2) that there is growing consensus in academia that this conversation is needed and urgent, as similar institutes emerge in other schools; (3) that in order for an open cross-disciplinary exchange to be successful, a common language needs to be established with people talking to each other and not past each other; (4) that university and high school curricula should strive to create more courses where this sort of cross-disciplinary exchange is the norm and not the exception; (5) that this conversation needs to be taken to all sectors of society and not kept within isolated silos of intellectualism.
Moving beyond the two-culture divide is not simply an interesting intellectual exercise; it is, as humanity wrestles with its own indecisions and uncertainties, an essential step to ensure our project of civilization.
New study analyzes gravitational waves to confirm the late Stephen Hawking's black hole area theorem.
- A new paper confirms Stephen Hawking's black hole area theorem.
- The researchers used gravitational wave data to confirm the theorem.
- The data came from Caltech and MIT's Advanced Laser Interferometer Gravitational-Wave Observatory.
The late Stephen Hawking's black hole area theorem is correct, a new study shows. Scientists used gravitational waves to confirm the famous British physicist's idea, which may lead to uncovering more underlying laws of the universe.
The theorem, formulated by Hawking in 1971, uses Einstein's theory of general relativity as a springboard to conclude that it is not possible for the surface area of a black hole to become smaller over time. The theorem parallels the second law of thermodynamics, which says the entropy (disorder) of a closed system can't decrease over time. Since the entropy of a black hole is proportional to its surface area, both must continue to increase.
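The proportionality at work here is the Bekenstein-Hawking entropy formula, which fixes the constant tying a black hole's entropy S to its horizon area A:

S = (k_B c³ / 4ħG) × A

where k_B is Boltzmann's constant, c the speed of light, ħ the reduced Planck constant, and G Newton's gravitational constant. Because entropy scales directly with area, any process that shrank the horizon would also shrink the entropy, running afoul of the second law.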
As a black hole gobbles up more matter, its mass and surface area grow. But as it grows, it also spins faster, which decreases its surface area. Hawking's theorem maintains that the increase in surface area that comes from the added mass would always be larger than the decrease in surface area because of the added spin.
Will Farr, one of the co-authors of the study that was published in Physical Review Letters, said their finding demonstrates that "black hole areas are something fundamental and important." His colleague Maximiliano Isi agreed in an interview with Live Science: "Black holes have an entropy, and it's proportional to their area. It's not just a funny coincidence, it's a deep fact about the world that they reveal."
What are gravitational waves?
Gravitational waves are "ripples" in spacetime, predicted by Albert Einstein in 1916, that are created by very violent processes happening in space. Einstein showed that very massive, accelerating space objects like neutron stars or black holes that orbit each other could cause disturbances in spacetime. Like the ripples produced by tossing a rock into a lake, they would bring about "waves" of spacetime that would spread in all directions.
As LIGO shared, "These cosmic ripples would travel at the speed of light, carrying with them information about their origins, as well as clues to the nature of gravity itself."
The gravitational waves detected by LIGO, whose twin laser interferometers sit roughly 3,000 kilometres apart and can register the smallest distortions in spacetime, were generated 1.3 billion years ago by two giant black holes that were quickly spiraling toward each other.
Confirming Hawking's black hole area theorem
The researchers separated the signal into two parts, depending on whether it was from before or after the black holes merged. This allowed them to figure out the mass and spin of the original black holes as well as the mass and spin of the merged black hole. With this information, they calculated the surface areas of the black holes before and after the merger.
"As they spin around each other faster and faster, the gravitational waves increase in amplitude more and more until they eventually plunge into each other — making this big burst of waves," Isi elaborated. "What you're left with is a new black hole that's in this excited state, which you can then study by analyzing how it's vibrating. It's like if you ping a bell, the specific pitches and durations it rings with will tell you the structure of that bell, and also what it's made out of."
The surface area of the resulting black hole was larger than the combined area of the original black holes, in line with Hawking's area law.
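The before-and-after comparison the researchers describe can be sketched with the standard horizon-area formula for a spinning (Kerr) black hole. The masses and spin below are rough, GW150914-like values chosen purely for illustration, not the study's measured numbers.

```python
import math

def kerr_horizon_area(mass, chi):
    """Horizon area of a Kerr black hole in geometric units (G = c = 1):
    A = 8 * pi * M^2 * (1 + sqrt(1 - chi^2)), where chi is the
    dimensionless spin (0 = non-spinning, 1 = maximal)."""
    return 8 * math.pi * mass**2 * (1 + math.sqrt(1 - chi**2))

# Rough, illustrative progenitor and remnant parameters (solar masses):
area_before = kerr_horizon_area(36.0, 0.0) + kerr_horizon_area(29.0, 0.0)
area_after = kerr_horizon_area(62.0, 0.67)  # remnant: heavier but spinning

# The added mass grows the area faster than the added spin shrinks it:
assert area_after > area_before
```

Note how spin really does reduce the area: the factor (1 + sqrt(1 - chi²)) is less than 2 for any spinning hole. The remnant's larger mass wins anyway, which is exactly the tradeoff Hawking's theorem says must always come out this way.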