The social brain: Culture, change and evolution | A Big Think Long Take
Bret Weinstein says that we're at the end of a massive technological and geographic boom, and that we should prepare for the next step in our societal evolution. Yet the future may not be bright for everyone. A cultural backlash to change, he says, is inevitable.
Professor Bret Weinstein has spent two decades advancing the field of evolutionary biology with a focus on adaptive trade-offs. He has made important discoveries regarding the evolution of cancer and senescence as well as the adaptive significance of moral self-sacrifice.
He applies his evolutionary lens to human behavior in order to sketch a path through the many crises we face as a species. By confronting emerging authoritarianism, and abandoning the archaic distinction between political right and left, we can discover a new model of governance that frees humanity to seek a just, sustainable and abundant future.
Bret Weinstein: We’re heading into a very dangerous phase of history. Human beings, being addicted to growth, are constantly looking for new sources of it, so when we feel austerity coming on we tend to become more tribal.
Unfortunately a perfectly free market will not allow benevolent firms to survive in the long run.
My argument is not an argument for centrism. I regard utopianism as probably the worst idea that human beings have ever had.
We find ourselves unfortunately stuck in an archaic argument about policy; frankly the left and right are both out of answers and they should team up on the basis that they agree at a values level about what a functional society should ideally look like.
Human beings, like all creatures, are the product of adaptive evolution, but they are highly unusual amongst evolved creatures. In order to understand them it is very important to recognize certain things that make us different from even the most similar creatures, like chimpanzees. The most important difference is something I call the omega principle. The omega principle specifies the relationship between human culture and the human genome.
The most important thing to realize about human beings is that a tremendous amount of what we are is not housed in our genomes; it’s housed in a cultural layer that is passed on outside of genes.
Culture is vastly more flexible, more plastic, and more quickly evolving in an adaptive sense than genes, which is why in fact cultural evolution came about in human beings.
It allows human beings to switch what they are doing and how they are doing it much more quickly than they could if all of the adapting information were stored in DNA.
One of the very important benefits of understanding this relationship between the genome and the cultural attributes of human beings is that it frees us to engage in an analysis of the evolutionary meaning of behaviors without having to know where exactly the information is stored.
This is especially important with complex phenomena, which may be partially housed in the genome and partially housed in the cultural layer—something like human language, for example.
Human language as a capacity is obviously genetically encoded, but individual human languages are not.
And so if we are to talk about the adaptive utility of human language, being obligated to specify what is housed where could put off that discussion for generations, whereas if we recognize that the cultural aspects of language—as well as the genomic aspects of language—are all serving a united interest then we can begin to understand the meaning of something like language in rigorous, adaptive terms.
The hypothesis of cultural evolution, which has now been sufficiently tested to be regarded as a theory of human cultural evolution, is the invention of Richard Dawkins, who in 1976 in The Selfish Gene coined the term ‘meme’ as an analog of ‘gene’: a unit of cultural evolution.
The genome creates a brain that is capable of being infused with culture after an individual person is born.
If culture were evolving to do things that were not in the genome’s interest, it would effectively be wasting the time and resources that the genetic individual has access to on frivolous things at best, and the genome would shut down frivolous culture if it were a common commodity. So the theory of memes tells us that there is a process, very much like the one that shapes our genomes, at work in the cultural layer.
That does not mean, however, that the cultural evolving layer is free of obligation to the genome. In fact, the cultural layer is downstream, and one of the things that we have repeatedly gotten wrong is we have attempted to just simply extend the rules of adaptive evolution as we have learned them from other creatures and apply them to human beings, and it leads to some unfortunate misunderstandings.
The fact that we are primarily culturally informed tells us that culture serves the genetic interests almost all of the time.
Which is to say, if you look at a long-standing cultural trait, it doesn’t matter what it is—whether it’s music or religion or humor—all of those things must be paying for themselves in terms of genetic fitness.
Once we’ve recognized that, we can skip to the much more interesting question of: “in what way do some of the remarkable cultural structures that we see serve genetic interests?”
Some of them seem absolutely paradoxical if we try to imagine that they are serving our genomes, and yet that is the conclusion that we have to reach when we realize that the genome is not only tolerating the existence of that culture, but it is facilitating its acquisition.
This suggests a very odd state of affairs for human beings, in which we have minds that are programmed by culture and that can be completely at odds with our genomes.
And it leads to misunderstandings of evolution, like the idea that religious belief is a mind virus—that effectively these belief structures are parasitizing human beings, and they are wasting the time and effort that those human beings are spending on that endeavor, rather than the more reasonable interpretation, which is that these belief systems have flourished because they have facilitated the interests of the creatures involved.
Our belief systems are built around evolutionary success and they certainly contain human benevolence—which is appropriate to phases of history when there is abundance and people can afford to be good to each other.
The problem is, if you have grown up in a period in which abundance has been the standard state you don’t anticipate the way people change in the face of austerity.
And so what we are currently seeing is messages—that we have all agreed are unacceptable—reemerging, because the signals that we have reached the end of the boom times are everywhere, and so people are triggered to move into a phase that they don’t even know they have in them.
Despite the fact that human beings think that they have escaped the evolutionary paradigm, they’ve done nothing of the kind. And so we should expect the belief systems that people hold to mirror the evolutionary interests that people have rather than to match our best instincts.
When we are capable of being good to each other because there’s abundance, we have those instincts; and so it’s not incorrect to say that human beings are capable of being marvelous creatures and being quite ethical.
Now I would argue there’s a simple way of reconciling the correct understanding—that religious belief often describes “truths” that in many cases fly in the face of what we can understand scientifically—with the idea that these beliefs are adaptive.
I call it the state of being literally false and metaphorically true. A belief is literally false and metaphorically true if it is not factual, but if behaving as if it were factual results in an enhancement of one’s fitness.
To take an example: if one behaves in, let’s say, the Christian tradition, in such a way as to gain access to heaven, one will not actually find oneself at the pearly gates being welcomed in, but one does tend to place one’s descendants in a good position with respect to the community that those descendants continue to live in.
So if we were to think evolutionarily, the person who is behaving so as to get into heaven has genetic interests. Those genetic interests are represented in the narrow sense by their immediate descendants and close relatives; in the larger sense they may be represented by the entire population of people from whom that individual came. And by acting so as to get into heaven, that person’s fitness (the number of copies of their genes that continue to flourish in the aftermath of their death) will go up.
So the belief in heaven is literally false—there is no such place—but it is metaphorically true in the sense that it results in an increase in fitness.
If you think about all of the things that you know human beings to have done over the course of human history you’ll realize that humans must have shifted from one niche to the other again and again.
Effectively humans are a niche-switching creature.
That is the human niche—to discover new things to do when the ancestral ways have petered out and are no longer useful.
Innovating new ways to be is very much the human toolkit. When human beings adopt an opportunity, their population grows in proportion to the size of that opportunity, and that opportunity essentially should be thought of as a frontier.
Now, there are many kinds of frontiers that humans have discovered. The most obvious kind of frontier is a geographic frontier: when a population discovers an uninhabited island—or, in an extreme case, a continent that has no people on it—that is a tremendous opportunity, and a tiny population can grow to gigantic size given such a bit of good fortune.
But there are less-obvious kinds of frontier as well.
A technological frontier occurs when people discover a mechanism for doing more with the territory that they have.
So for example, think about a piece of territory that has been inhabited by hunter-gatherers. At the point that farming is either invented or brought in (having been discovered by some other population), that same piece of territory, if it is hospitable to farming, can support a much larger population. So it functions just like having discovered a new landmass, because the size of the population that the current landmass can support goes up as a result of the land being made more productive.
There’s a third kind of frontier, which I call a “transfer” frontier, which is not really like the other two, in the sense that it is zero sum: somebody has to lose in order for somebody else to win.
But from the point of view of an individual population, another population that cannot defend the resources that it has is an opportunity, and so many of the worst chapters in human history involve one population targeting another population that can’t defend the resources that it owns.
And so for the population doing the targeting, capturing those resources functions like having discovered a new landmass or a new technology that allows productivity to go up.
All of these types of frontiers eventually run out. There is simply a limit to the number of geographical locations that can be inhabited. There may always be a next technology, but the discovery of new technologies comes in fits and starts, and there can be long dry periods where you have reached or exceeded the limits of a technological opportunity and the next one is nowhere on the horizon.
So human beings, being addicted to growth, are constantly looking for new sources of it.
And when geographic frontiers and technological frontiers don’t provide those opportunities, human beings will sometimes look within their own population, figure out who can’t defend the resources that they hold, and manufacture reasons that those people are not entitled to keep them.
And so when we feel austerity coming on we tend to become more tribal.
And this is a very dangerous pattern in history: it is, for example, what took place during the Holocaust, when the German population decided to target European Jews and made up reasons that those Jews were not entitled to what they held.
So, what we are effectively seeing in the present is a circumstance in which we have reached the end of a boom, and human beings are becoming tribal because that is the natural transition at the end of a growth period, and we are naturally inspired to look for something to replace the growth that has run out.
This is why many of these abhorrent messages have become resonant in the present to many people. They are waiting to hear somebody explain what population isn’t capable of defending its resources and to explain what justification will be used to pursue those resources—and to transfer them.
Many people are optimistic that technological breakthroughs will continue to provide access to growth.
And this is an unfortunate perspective, because it leads us into a false sense of security, not realizing that—being evolutionary creatures—we are not programmed to preserve that state of growth and make it last a long time.
What we are wired to do is capture the benefits of such opportunities and bring them into use.
What that means for most creatures is: when a non-zero sum opportunity has been discovered, creatures create many more like themselves; basically more mouths to feed.
For modern people, sometimes creating more mouths is not the natural reaction, but creating greater consumption is.
And so as much as we are wired in a way that is beneficial—where we discover new ways of doing more with less that provide abundance—we are also wired to use up that abundance in consumption.
And in fact we have a dynamic in which our economic theories—the ones that we run society on the basis of—actually define economic health in terms of growth, and growth amounts to the conversion of useful energy into useless heat and the conversion of useful materials into useless waste. So we have what I call a throughput society, where we view ourselves as doing something right when we take resources that might be made to last a long time and use them up.
So, for example, if I were to invent a microwave oven that is just as useful as the one you have but would last ten times as long as the one you have, intuitively it seems that that should be a very good thing; but from the point of view of the economy it will result in a reduction in growth because fewer microwave ovens will be sold.
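A back-of-the-envelope sketch of the microwave example (the numbers here are hypothetical, not Weinstein's): in a steady state, replacement sales per year are roughly the number of units in use divided by their lifespan, so making the product last ten times as long cuts measured sales, and therefore measured "growth," by roughly a factor of ten.

```python
# Hypothetical steady-state replacement arithmetic for the microwave example.
# The figures are illustrative only; the point is the ratio.
units_in_use = 100_000_000  # ovens currently in households (made-up figure)

def annual_replacement_sales(units, lifespan_years):
    """In steady state, units are replaced at a rate of units / lifespan per year."""
    return units / lifespan_years

for lifespan in (10, 100):  # today's oven vs. one that lasts ten times longer
    sales = annual_replacement_sales(units_in_use, lifespan)
    print(f"lifespan {lifespan:>3} years -> about {sales:,.0f} ovens sold per year")
```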
So by defining our economic terms such that they lead us to “correct” for improvements in efficiency and cause us to capture resources and use them up in one kind of consumption or another, we set ourselves up for a situation in which no matter how good an opportunity we discover it is inherently temporary.
Ethics evolved to limit the self-destructive behavior internal to a population.
Unfortunately, in our present circumstance, where we have handed over so many functions to an anonymous marketplace, people who learn where, on the ethical landscape that we describe to ourselves, there are opportunities that go unpoliced actually come out ahead.
If you discover things that are unethical, things that many people will therefore not engage in, but you are willing to engage in them and there is no penalty, either informal or legal, built into the system, then you will come out ahead as a result of your increased freedom, because you are not limited to the narrower set of ethically acceptable behaviors.
And so unfortunately society has begun rewarding people who are good at figuring out where we are not policing our ethical standards and exploiting those opportunities as a competitive advantage.
The fate of benevolent firms in the market is a very important topic. Unfortunately a perfectly free market will not allow benevolent firms to survive in the long run.
Now that may seem like an overly declarative statement, but the problem is this: if you imagine two firms, one of which is perfectly amoral and will do absolutely anything that generates a profit, and the other of which is constrained by ethical beliefs that prevent it from availing itself of certain economic opportunities, then it doesn’t matter what the economic circumstance is: the amoral firm always has an advantage.
The best that can be true is that the ethical choice is also the strategically best choice and the two will be dead even, but in any case where there’s any distinction whatsoever between the ethical choice and the perfectly amoral choice, the amoral firm has the advantage of being perfectly free to avail itself of opportunities that its competitor cannot reach. And what this does is it causes evolution of firms in the direction of ruthlessness.
It is often the case that people who set things in motion in the marketplace with the best of intentions are surprised at what ultimately becomes of their innovations.
Google famously began with the prime directive “don’t be evil” and many people have recognized that over time Google has become more ruthless than it was at the start. This is actually a perfectly predictable phenomenon.
The reason that Google was able to have noble objectives at first was that it existed in an immature market where having ethical restraints on what is possible did not put it at a competitive disadvantage to any viable competitor that it faced.
As a market matures, its tolerance for firms that restrain themselves is much reduced, and a perfectly efficient market has effectively no tolerance for self-restraint. As a result, firms evolve to become more ruthless or they perish.
And either way what we find is an increased tendency in a market—as it becomes more efficient—towards ruthlessness.
This does not have to be the case, but it is the result of the fact that we leave the market free for this kind of evolutionary trajectory.
If we were wise about this we would realize that a free market is not the ideal state.
We don’t want to tinker, we don’t want to meddle in a way that is overly disruptive of innovation, but we do want to tinker enough that the evolutionary tendency produces the kinds of firms that we wish to see rather than ones that behave in a way that horrifies us.
We find ourselves unfortunately stuck in an archaic argument about policy where right and left disagree about the wisdom of tinkering with society to make certain things better.
In general the left is overly enthusiastic about meddling and it doesn’t appreciate the full danger of unintended consequences, and the right is overly skeptical of the advantages of tinkering and prone to focus on the unintended consequences, and be underambitious with respect to making society better.
But the entire argument is based on ideas of the 18th and 19th century, and those ideas are simply not up to date enough to deal with the problems of the 21st century.
So my argument would be that those on the left and right who are in favor of liberty as perhaps the highest human value should put aside their policy differences—because frankly the left and right are both out of answers—and they should team up on the basis that they agree at a values level about what a functional society should ideally look like.
And we should actually begin a new conversation about policy in which we investigate what we can do that the founders of this nation (and others that are modeled on it) couldn’t imagine because they didn’t have the tools at their disposal.
In particular, we should be very careful that whatever solution-making we engage in is evolutionarily aware.
The founders of the United States did not, of course, know anything about evolution.
Those who have constructed our markets did not know anything about evolution.
And what they have done repeatedly is accidentally set up an evolutionary system in which adaptation begins to take place without anybody’s awareness.
And what that tends to do is it tends to take the best intentions of those who set up these systems and overrun them with things that simply function.
It results in dangerous patterns like regulatory capture, where entities that figure out how to tinker with the parts of the governance apparatus that are supposed to regulate them come out ahead of those that don’t attempt to tinker with those elements.
My argument is not an argument for centrism. I believe that the answers we are looking for are not actually on the map of possibilities that we are familiar with. We are effectively living in Flatland, and what we have to do is learn to detect the Z axis so that we can seek solutions of a type that will, at first, be unfamiliar to us.
There’s a great danger in doing this, of course, which is Utopianism. I regard Utopianism as probably the worst idea that human beings have ever had, and if anyone doubts that that’s the case you should look at the history of Utopian ideas across the 20th century. The untold number of bodies that stacked up as a result of Utopian ideas run amok is absolutely staggering.
Utopians make two errors: the first error is they prioritize a single value.
Now, because of the way mechanisms function, anytime you optimize a single value you create incredibly large costs for every other value in question.
By prioritizing things like liberty or equality, if you do so in a narrowly focused way you can’t help but generate a dystopian result, because all of the other values that people might hold are effectively destroyed in the process.
The other mistake that utopians make is they tend to imagine that they know what the future state should look like and they miss what every inventor knows, which is that your grandest ideas are crude to begin with. You have to build a prototype in order to figure out what you don’t understand.
And so I would argue we cannot describe the future that we should be seeking.
We can say what direction it probably is in and we can head in that direction intelligently, but the minute we start telling ourselves that we know what the state that we are trying to construct looks like we will suffer the same failure that an inventor that wanted to bypass the prototyping stage would suffer.
There are two kinds of conflict that people can find themselves in: they can be in conflict when they have fundamentally different interests from each other, but very frequently we human beings find ourselves in conflict with somebody with whom we are aligned but with whom we have a difference of opinion about what to pursue or in what manner to do so.
And there are a couple of things that evolution can tell us about how to address such a conflict so that it is productive.
The first thing is it is very important to figure out what it is that causes you to disagree. Sometimes you may be disagreeing over values. So for example, if the two people prioritize things differently they may have a different sense about what should be done.
On the other hand, people may be disagreeing about how to accomplish something while being completely aligned with respect to what it is that is desired.
So establishing what it is that has you disagreeing is very important. I’ll give you an example.
I frequently find, as somebody who comes from the political left, that I have very easy conversations with Libertarians on the right, and those conversations remain easy until we get to the question of policy.
At the point that we get to the question of policy we diverge. And the reason that we diverge is actually that we have different expectations about the danger of creating new policy, but we do not disagree over the values.
A right-of-center economic Libertarian will agree that ideally the market works best when opportunity is as broadly distributed as possible, so that everybody has an opportunity to innovate. They may disagree about how equal the opportunity is currently, and they may also disagree about the wisdom of attempting to redistribute opportunities so that people who don’t have it can gain access to it, but there’s no disagreement over whether it is a desirable characteristic.
On the other hand, there may be disagreements over which objectives are worth pursuing; some people would like to see liberty prioritized over equality, for example, and some other group of people may want to see equality promoted at a cost to liberty. Those are both valid perspectives, and it is worth understanding that when values are at issue there may actually be no resolution. Two reasonable people can disagree over how valuable various objectives are, and when that’s the case the best one can do is simply recognize that the difference cannot be resolved by discovering that somebody is correct and somebody is incorrect, because in fact both positions are equally valid.
I would say that there is a failure in the way we view argument, that in general I think our politicized and polarized atmosphere has caused us to look at arguments as always tactical.
And one thing that you find when you interact with people who are very adept intellectually is that they are often capable of putting aside suspicion about the motives of the people with whom they are arguing, and they will argue not to win but to discover what is true.
And it’s a very different state of affairs, because although nobody likes being shown to be wrong the great thing about being shown to be wrong is that it gives you the opportunity to correct your understanding and to be wiser the next time you encounter the question, rather than entrenching yourself in a wrong position and suffering the costs of being wrong every time you encounter the question.
So, putting aside a desire to win and substituting a desire to discover what is true is the key to discovering truth through argument, which benefits everybody who participates: those who have turned out to be correct and those who have turned out to be incorrect. In general what typically happens is that one discovers nobody was perfectly correct, and then all sides have increased their understanding by hashing out the details of what they disagree over.
We’re heading into a very dangerous phase of history where a large number of people, especially young people, have become convinced that the free exchange of ideas is not only no longer necessary but is actually counterproductive; and so they set out to silence those who have opinions at odds with theirs.
Some of the people who have opinions at odds with theirs truly believe abhorrent things, but the problem is: until you fully understand a topic, you don’t know which opinions to shut out.
One has to actually engage beliefs that are at odds with your own beliefs in order to figure out whether what you believe is correct, and to improve where it isn’t correct.
So shutting down speech has become the mode for a large number of individuals who believe they see very clearly what is wrong with civilization and what must be done to improve it—and they are unfortunately shutting down people who have vital things to tell them that they definitely need to know.
In this wide-ranging talk, controversial professor Bret Weinstein covers several topics: politics, technology, and tribalism, just to name a few. But ultimately the former biology professor at The Evergreen State College talks with us about why this particular decade is so interesting. Given the explosive growth of the 20th century, he argues that we've come to the end of that particular boom and have just started searching frantically for ways to keep the pace that we've come to expect. When that change doesn't come, Weinstein posits that we search for scapegoats, turn inwards, and start to attack ourselves. And that's paraphrasing just some of the half-hour talk we have for you.
A global survey shows the majority of countries favor Android over iPhone.
- When Android was launched soon after Apple's own iPhone, Steve Jobs threatened to "destroy" it.
- Ever since, and across the world, the rivalry between both systems has animated users.
- Now the results are in: worldwide, consumers clearly prefer one side — and it's not Steve Jobs'.
A woman on her phone in Havana, Cuba. Mobile phones have become ubiquitous the world over — and so has the divide between Android and iPhone users. Credit: Yamil Lage / AFP via Getty Images.
Us versus them: it's the archetypal binary. It makes the world understandable by dividing it into two competing halves: labor against capital, West against East, men against women.
These maps are the first to show the dividing lines between one of the world's more recent binaries: Android vs. Apple. Published by Electronics Hub, they are based on a qualitative analysis of almost 350,000 tweets worldwide that presented positive, neutral, and negative attitudes toward Android and/or Apple.
Steve Jobs wanted to go "thermonuclear"
Feelings between Android and Apple were pretty tribal from the get-go. It was Steve Jobs himself who said, when Google rolled out Android a mere ten months after Apple launched the iPhone, "I'm going to destroy Android, because it's a stolen product. I'm willing to go thermonuclear war on this."
Buying a phone is like picking a side in the eternal feud between the Hatfields and the McCoys. Each choice for one automatically comes with an in-built arsenal of arguments against the other.
If you are an iPhone person, you appreciate the sleekness and simplicity of its design, and you are horrified by the confusing mess that is the Android operating system. If you are an Android aficionado, you pity the iPhone user, a captive of an overly expensive closed ecosystem, designed to extract money from its users.
Even without resorting to those extremes, many of us will recognize which side of the dividing line we are on. Like the American Civil War, that line runs through families and groups of friends, but that would be a bit confusing to chart geographically. To un-muddle the information, these maps zoom out to state and country level.
If the contest is based on the number of countries, Android wins. In all, 74 of the 142 countries surveyed prefer Android (in green on the map). Only 65 favor Apple (colored grey). That's a 52/48 split, which may not sound like a decisive vote, but it was good enough for Boris Johnson to get Brexit done (after he got breakfast done, of course).
And yes, math-heads: 74 plus 65 is three short of 142. Belarus, Fiji, and Peru (in yellow on the map) could not decide which side to support in the Global Phone War.
What about the United States, home of both the Android and the iPhone? Another victory for the former, albeit a slightly narrower one: 30.16 percent of the tweets about Android were positive versus just 29.03 percent of the ones about Apple.
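Electronics Hub has not published its analysis code, but the decision rule the article describes is simple: for each country or state, whichever brand drew the higher share of positive tweets wins, and a tie is recorded as undecided. Here is a toy sketch of that rule; the tweet counts are invented, chosen only so the shares match the percentages quoted above.

```python
# Toy sketch of the per-region comparison described in the article.
# Tweet counts are invented; only the decision rule is illustrated.
regions = {
    # region: {brand: (positive_tweets, total_tweets)}
    "United States": {"Android": (3016, 10000), "Apple": (2903, 10000)},
    "Peru":          {"Android": (250, 1000),   "Apple": (250, 1000)},
}

for region, brands in regions.items():
    share = {brand: pos / total for brand, (pos, total) in brands.items()}
    if share["Android"] > share["Apple"]:
        winner = "Android"
    elif share["Apple"] > share["Android"]:
        winner = "Apple"
    else:
        winner = "undecided"
    print(f"{region}: Android {share['Android']:.2%} vs "
          f"Apple {share['Apple']:.2%} -> {winner}")
```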
United States: Texas surrounded!
There can be only one winner per state, though, and that leads to this preponderance of Android logos. Frankly, it's a relief to see a map showing a visceral divide within the United States that is not the coasts versus the heartland.
- Apple dominates in 19 states: a solid Midwestern bloc, another of states surrounding Texas, the Dakotas and California, plus North Carolina, New Hampshire, and Rhode Island.
- And that's it. The other 32 are the United States of Android. You can drive from Seattle to Miami without straying into iPhone territory. But no stopovers in Dallas or Houston – both are behind enemy lines!
North America: strongly leaning toward Android
Only eight of North America's 21 countries surveyed fall into the Apple category.
- The U.S. and Canada lean Android, while Mexico goes for the iPhone.
- Central America is divided, but here too Android wins hands down, 5-2.
Europe: Big Five divided
In Europe, Apple wins, with 20 countries preferring the iPhone, 17 going for Android, and Belarus sitting on the fence.
- Of Western Europe's Big Five markets, three (UK, Germany, Spain) are pro-Android, and two (France, Italy) are pro-Apple.
- Czechia and Slovakia are an Apple island in the Android sea that is Central Europe. Glad to see there is still something the divorcees can agree on.
South America: almost even
In South America, the divide is almost even.
- Five countries prefer Android, four Apple, and one is undecided.
- In Peru, both Android- and Apple-related tweets were 25 percent positive.
Africa: watch out for Huawei
In Africa, Android wins by 17 countries versus Apple's 15.
- There's a solid Android bloc running from South Africa via DR Congo all the way to Ethiopia.
- iPhone countries are scattered throughout the north (Algeria), west (Guinea), east (Somalia), and south (Namibia).
Huawei — increasingly popular across the continent — could soon dramatically change the picture in Africa. Currently still running on Android, the Chinese phone manufacturer has just launched its own operating system, called Harmony.
Middle East: Iran vs. Saudi Arabia (again)
In the Middle East and Central Asia, Android wins 8 countries to Apple's 6.
- But it's complicated. One Turkish tweeter wondered how it is that iPhones seem more popular in the Asian half of Istanbul, while Android phones prevailed in the European part of the city.
- The phone divide matches up with the region's main geopolitical one: Iran prefers Android, Saudi Arabia the iPhone.
Asia-Pacific: Apple on the periphery
Another wafer-thin majority for Android in the Asia-Pacific region: 13 countries versus 12 for Apple — and one abstention (Fiji).
- The two giants of the Asian mainland, India and China, are both Android countries. Apple countries are on the periphery.
- And if India is Android, its rival Pakistan must be Apple. Same with North and South Korea.
Experts point to the fact that both operating systems are becoming more alike with every new generation as a potential resolution to the conflict. But as any student of human behavior will confirm: smaller differences will only exacerbate the rivalry between both camps.
Maps taken from Electronics Hub, reproduced with kind permission.
Strange Maps #1096
Got a strange map? Let me know at firstname.lastname@example.org.
English is a dynamic language, and this summer's new additions to dictionary.com tell us a lot about how we're living.
- The summer update to Dictionary.com added hundreds of new words and definitions.
- Many of them are in areas related to justice, technology, and COVID-19.
- The new slang terms will leave more than a few people confused.
In any given year, new words are added to the dictionary to reflect how society's use of them has changed, often in response to ongoing events. For Summer 2021, more than 1200 new, improved, and revised definitions were introduced to Dictionary.com, including 231 entirely new words. A review of those words, the subjects they cover, and the stories behind their creation tells a rich story about the times we live in.
A word by any other definition?
You might wonder why we need to carry out such extensive addition and redefining campaigns. John Kelly, the Managing Editor of Dictionary.com, explained in a statement why these changes were made and their importance:
"The latest update to our dictionary continues to mirror the world around us. Long COVID, minoritize, 5G, content warning, domestic terrorism — it's a complicated and challenging society we live in, and language changes to help us grapple with it. But sometimes language changes just for fun. Yes, yeet is now in the dictionary, which may prompt some of us to use one other of our new entries: oof! Perhaps these lighter slang and pop culture newcomers to our dictionary reflect another important aspect of our time — a cautious optimism and a brighter mood about the future ahead after a trying 2020."
The English language isn't static, so it is up to lexicographers to get the dictionaries up to speed. Let's face it, we might need more than a few new words to talk about last year.
The times they are a-changin'
Words that describe the continuing COVID-19 pandemic are still being added. The recent additions, which include long haul and long hauler, may speak to the shift in how we interact with the pandemic — it is now a long-term rather than an acute concern for many people. Changes to our lives as a result of the pandemic, and new ways of adapting to those challenges, like ghost kitchen and side hustle, also made their way in.
In the aftermath of the murder of George Floyd and the protests that followed, Americans searched for terms related to racial issues at significantly higher levels than before. This not only called for updates and additions to words in this area last year but a continuing review, which has added new terms like the acronyms JEDI and DEI and the new word one-drop rule. Other terms long included, like Jim Crow and Black Codes, saw updates this summer.
Technology continued to advance through thick and thin as well. Terms like 5G, asynchronous, and abandonware made it into the recent update. Given how much time we all spent using tech in the last year and a half, it is only fitting that we would need these terms. 5G also has the unfortunate distinction of being both a telecommunications technology and a target for conspiracy theorists, perhaps making a dictionary entry for it all the more important.
Other words previously defined as regional or cultural in nature have been redefined in the light of their evolving use. Y'all is now listed as its own term and deemed an "informal" pronoun rather than a mere variant of "you-all." The post explaining the update noted that the term is now more known for informality than regionalism and has enjoyed a surge in use as a gender neutral pronoun.
Perhaps it is necessary that after a year that required so many of the above words to be added or clarified, there are new slang terms that will seem like absolute gibberish to somebody disconnected from popular culture. New words like yeet, zaddy, and oof were added this year, showing that even in difficult times, fun new ways to use language are cropping up all the time.
The website's lexicographers also saw fit to officially add one of the honorable mentions for 2020's word of the year to the list of vulgar slang terms. Regrettably, it is unfit for publication, but it rhymes with spit-snow.
So, now that y'all know about these updates, perhaps we can all order from the new ghost kitchen from apps on our 5G smartphones before getting back to our side hustles. Yeet!
Reality is far stranger than fiction.
- Black holes are stranger than fiction, especially when we explore the weird effects of watching someone or something fall into one.
- Rotating black holes may be traversable if the physics as we understand it holds.
- To discuss the physics, we explore a fictional tale with a grand ending.
What happens when someone falls into a black hole? If you are the unfortunate soul being gobbled up, things don't look too bad until they turn really bad. Unless there is an outlet through a wormhole. And you are really lucky.
The fictional story below (an abridged version of one published in my 2002 book The Prophet and the Astronomer) explains why. Since we now know that black holes exist and that even Jeff Bezos can fly into outer space, it is only a matter of time before humans fly into black holes — albeit a very, very long time from now: the nearest black hole to Earth (as of now) lies a "mere" 1,500 light-years away.
But first, a refresher. In his general theory of relativity, Albert Einstein equated gravity with the curvature of space around a massive body. The effect is quite negligible for light masses but becomes important for massive stars and even more so for very compact massive objects such as neutron stars, whose gravity is 100,000 times stronger than at the sun's surface. Distortions of space caused by a larger mass (stars) will cause small moving masses (planets) to deviate from what Newtonian gravity predicts. Another remarkable consequence of Einstein's theory of gravity is the slowing down of clocks in strong gravitational fields: strong gravity bends space and slows down time.
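For readers who want the formula behind "strong gravity slows down time," the standard relation for a clock hovering at distance r from a non-rotating mass M is the following (a sketch added here, not part of the original essay):

```latex
d\tau \;=\; dt\,\sqrt{1 - \frac{2GM}{rc^{2}}} \;=\; dt\,\sqrt{1 - \frac{r_s}{r}},
\qquad r_s \equiv \frac{2GM}{c^{2}}
```

Here dτ is the time ticked off by the hovering clock, dt is the time measured by a distant observer, and r_s is the Schwarzschild radius (the event horizon of a non-rotating black hole). Light climbing out from radius r is redshifted by the same square-root factor, which is the prediction the story's fleet sets out to test.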
Now, on with the story.
In my young days, I traveled from planet to planet looking for old spaceship parts. It was in one of my travels in search of a rare gyroscope for a 2180 Mars Lander that I found "Mr. Ström's Rocket Parts," an enormous hangar littered with mountains of space garbage. While I was consulting the store's virtual stock-scanning device to search for the gyroscope, Mr. Ström himself came to greet me. He was famous throughout the galaxy for claiming to have come closer than anyone to a black hole, a story that, to most, was just that — a story.
Like many before me, I asked Mr. Ström to tell me his story. After hesitating a while, he gave in.
"I was commander of a fleet built to explore the complex astrophysical X-ray source known as Cygnus X-1," he started. "Since the 1970s, over three millennia ago, this was suspected to be a binary star system 6,000 light-years from Earth. The two members of the binary system, thought to be a blue giant star about 20-30 solar masses and a black hole about 7-15 solar masses, orbited so close together that the black hole frantically sucked matter from his huge companion into a spiraling oblivion. This mad swirling heated the in-falling stellar matter to enormous temperatures, producing the X-rays astronomers on Earth observed. Even though the data indicated that the smaller object of the pair had a mass much larger than the maximum mass for neutron stars, it was still not clear if it was a black hole. Since other attempts to identify it had failed, the League of Planets decided that the only way to know for sure was to go there.
"The fleet consisted of three vessels, each under the command of a Ström, a great honor to my family. I led the vessel named CX1, my middle brother led CX2, and the youngest led CX3. I will spare you the details of how the mission was prepared, and how, after many problems with our hyper-relativistic plasma drive, we finally arrived to within one light-month of our destination. Through our telescopes we could see an enormous hot blue star being drained by an invisible hole in space.
"We were instructed to fly single file toward the black hole, keeping a very large distance from each other; my younger brother first, my mid-brother second, and me last. We knew that, from a large distance, a black hole behaves like any other massive object, as the differences general relativity predicted happen only fairly close to it. We also knew that every black hole has an imaginary limiting sphere around it known as the 'event horizon,' which marks the distance from which not even light could escape.
"My young brother's ship, the CX3, was to approach the hole, sending us periodic light flashes with a given frequency; we were to follow at a distance, measuring the frequency of the radiation emitted by my brother's ship as well as the time interval between the pulses, and then compare them with the theoretical predictions for gravitational redshift and time delay. The three vessels plunged to a distance of 10,000 kilometers from the hole; while CX1 and CX2 hovered at that distance, my brother closed in to 100 kilometers from the hole. He was instructed to send us infrared radiation, but we detected only radio waves. The gravitational redshift formula was indeed correct. Furthermore, the intervals between two pulses increased quite perceptibly; time was flowing slower for my brother, as viewed from our distant ships. He plunged to the dangerously close distance of ten kilometers from the hole, only seven from the event horizon; this was the closest distance the ship could stand, due to the enormous tidal forces around the hole, which stretch everything into spaghetti. (Numbers assume a one-solar-mass black hole.)
"From that close orbit, my brother was to send pulses of visible light, but all we detected were (invisible) radio waves; we could not see my brother's ship any longer, and I started to feel very uneasy. The theory was correct: a ship falling into a black hole will become invisible to a more distant ship (us) due to the red shifting of light. That also meant that we would never be able to see a star collapsing into a black hole, as it will become invisible before it meets its end. A related effect was the slowing of time. As my younger brother approached the black hole, the radiation pulses were arriving at increasingly long intervals. Thus, not only could we not see him, but we would also have to wait an enormous amount of time to receive any message from him. This confirmed the prediction that for a distant observer, the collapse of a star would take forever. Of course, for the unlucky traveler that freefalls into the black hole, nothing unusual with the passage of time would happen, as explained by the equivalence principle: gravity is neutralized in free fall. Unfortunately, his body would be horribly stretched.
"The turbulence and steady bombardment of matter swirling around the black hole caused my brother's spaceship to drift uncontrollably into the maelstrom. I had to try to rescue him. After all, this was a rotating black hole, and the theory predicted that instead of a crushing singularity at its center, there should be a wormhole connected to another point in the universe. A desperate maneuver to be sure.
"My mid-brother waited in a safe distant orbit around the black hole. As I plunged in, the whirling of space dragged me in as water into a drain. The combination of enormous gravitational pull and furious bombardment of radiation and particles took a toll on my ship; but its fuselage miraculously — what else could it be but a miracle? — survived, as I did, thanks to the once controversial anti-crunch shield. Outside, space seemed to convulse into infinitely many coexisting shapes. Inside a black hole, I realized, reality had no boundaries.
"I felt an enormous push, as if the spaceship was being coughed up by a giant. I must have remained unconscious for quite a while. When I looked into a mirror, I could hardly believe what I saw; my hair had turned completely white, and my face was covered with wrinkles I didn't have moments (moments?) ago. I checked my location in the computer and realized that, somehow, I re-emerged 2,000 light-years away from Cygnus X-1. The only possible explanation was that I traveled through a wormhole, which somehow was kept open inside the black hole and was tossed out by a white hole at a faraway point in space."
Apart from the sequence of facts inside the black hole — where we know very little — the rest is what we should expect from watching someone fall into a black hole. Reality, for these cosmic maelstroms, is definitely stranger than fiction.
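To put rough numbers on the distances in Mr. Ström's tale, here is a minimal sketch using the Schwarzschild relation from the refresher above. It assumes the one-solar-mass figure the narrator quotes and a non-rotating hole, and it treats the distances as measured from the hole's center, so it is only an approximation to the story's rotating Cygnus X-1.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole of the given mass."""
    return 2 * G * mass_kg / c**2

def redshift_factor(r_m, mass_kg):
    """Received/emitted frequency ratio for light sent from a static emitter
    at radius r_m (from the center) to a very distant observer."""
    return math.sqrt(1 - schwarzschild_radius(mass_kg) / r_m)

r_s = schwarzschild_radius(M_SUN)
print(f"Schwarzschild radius of a one-solar-mass hole: {r_s / 1000:.2f} km")

# Distances mentioned in the story, in kilometers from the hole
for r_km in (10_000, 100, 10):
    factor = redshift_factor(r_km * 1000, M_SUN)
    print(f"at {r_km:>6} km: frequencies reduced to {factor:.3f} of their emitted value")
```

Running it gives a horizon radius of about 2.95 kilometers, consistent with the narrator's "ten kilometers from the hole, only seven from the event horizon."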
Virtual reality continues to blur the line between the physical and the digital, and it will change our lives forever.
- Extended reality technologies — which include virtual reality, augmented reality, and mixed reality — have long captivated the public imagination, but have yet to become mainstream.
- Extended reality technologies are quickly becoming better and cheaper, suggesting they may soon become part of daily life.
- Over the long term, these technologies may usher in the "mirror world" — a digital layer "map" that lies atop the physical world and enables us to interact with internet-based technologies more seamlessly than ever.
Immersive technology aims to overlay a digital layer of experience atop everyday reality, changing how we interact with everything from medicine to entertainment. What that future will look like is anyone's guess. But immersive technology is certainly on the rise.
The extended reality (XR) industry — which includes virtual reality (VR), augmented reality (AR), and mixed reality (MR), which involves both virtual and physical spaces — is projected to grow from $43 billion in 2020 to $333 billion by 2025, according to a recent market forecast. Much of that growth will be driven by consumer technologies, such as VR video games, which are projected to be worth more than $90 billion by 2027, and AR glasses, which Apple and Facebook are currently developing.
But other sectors are adopting immersive technologies, too. A 2020 survey found that 91 percent of businesses are currently using some form of XR or plan to use it in the future. The range of XR applications seems endless: Boeing technicians use AR when installing wiring in airplanes. H&R Block service representatives use VR to boost their on-the-phone soft skills. And KFC developed an escape-room VR game to train employees how to make fried chicken.
XR applications not only train and entertain; they also have the unique ability to transform how people perceive familiar spaces. Take theme parks, which are using immersive technology to add a new experiential layer to their existing rides, such as roller coasters where riders wear VR headsets. Some parks, like China's $1.5 billion VR Star Theme Park, don't have physical rides at all.
One of the most novel innovations in theme parks is Disney's Star Wars: Galaxy's Edge attraction, which has multiple versions: physical locations in California and Florida and a near-identical virtual replica within the "Tales from the Galaxy's Edge" VR game.
"That's really the first instance of anything like this that's ever been done, where you can get a deeper dive, and a somewhat different view, of the same location by exploring its digital counterpart," game designer Michael Libby told Freethink.
Libby now runs Worldbuildr, a company that uses game-engine software to prototype theme park attractions before construction begins. The prototypes provide a real-time VR preview of everything riders will experience during the ride. It raises the question: considering that VR technology is constantly improving, will there come a point when there's no need for the physical ride at all?
Maybe. But probably not anytime soon.
"I think we're more than a few minutes from the future of VR," Sony Interactive Entertainment CEO Jim Ryan told the Washington Post in 2020. "Will it be this year? No. Will it be next year? No. But will it come at some stage? We believe that."
It could take years for XR to become mainstream. But that growth period is likely to be a brief chapter in the long history of XR technologies.
The evolution of immersive technology
The first crude example of XR technology came in 1838 when the English scientist Charles Wheatstone invented the stereoscope, a device through which people could view two images of the same scene but portrayed at slightly different angles, creating the illusion of depth and solidity. Yet it took another century before anything resembling our modern conception of immersive technology struck the popular imagination.
In 1935, the science fiction writer Stanley G. Weinbaum wrote a short story called "Pygmalion's Spectacles," which describes a pair of goggles that enables one to perceive "a movie that gives one sight and sound [...] taste, smell, and touch. [...] You are in the story, you speak to the shadows (characters) and they reply, and instead of being on a screen, the story is all about you, and you are in it."
The 1950s and 1960s saw some bold and crude forays into XR, such as the Sensorama, which was dubbed an "experience theater" and featured a movie screen complemented by fan-generated wind, a moving chair, and a machine that produced scents. There was also the Telesphere Mask, which packed most of the same features into a headset whose design was presciently similar to modern models.
The first functional AR device came in 1968 with Ivan Sutherland's The Sword of Damocles, a heavy headset through which viewers could see basic shapes and structures overlaid on the room around them. The 1980s brought interactive VR systems featuring goggles and gloves, like NASA's Virtual Interface Environment Workstation (VIEW), which let astronauts control robots from a distance using hand and finger movements.
That same technology led to new XR devices in the gaming industry, like Nintendo's Power Glove and Virtual Boy. But despite a ton of hype over XR in the 1980s and 1990s, these flashy products failed to sell. The technology was too clunky and costly.
In 2012, the gaming industry saw a more successful run at immersive technology when Oculus VR raised $2.5 million on Kickstarter to develop a VR headset. Unlike previous headsets, the Oculus model offered a 90-degree field of view, was priced reasonably, and relied on a personal computer for processing power.
In 2014, Facebook acquired Oculus for $2 billion, and the following years brought a wave of new VR products from companies like Sony, Valve, and HTC. The most recent market evolution has been toward standalone wireless VR headsets that don't require a computer, like the Oculus Quest 2, which last year received five times as many preorders as its predecessor did in 2019.
Also notable about the Oculus Quest 2 is its price: $299 — $100 cheaper than the first version. For years, market experts have said cost is the primary barrier to adoption of VR; the Valve Index headset, for example, starts at $999, and that price doesn't include the cost of games, which can cost $60 apiece. But as hardware gets better and prices get cheaper, immersive technology might become a staple in homes and industry.
Advancing XR technologies
Over the short term, it's unclear whether the recent wave of interest in XR technologies is just hype. But there's reason to think it's not. In addition to surging sales of VR devices and games, particularly amid the COVID-19 pandemic, Facebook's heavy investments in XR suggest there's plenty of space into which these technologies could grow.
A report from The Information published in March found that roughly 20 percent of Facebook personnel work in the company's AR/VR division called Facebook Reality Labs, which is "developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interface, haptic interaction."
What would "breakthroughs" in XR technologies look like? It's unclear exactly what Facebook has in mind, but there are some well-known points of friction that the industry is working to overcome. For example, locomotion is a longstanding problem in VR games. Sure, some advanced systems — that is, ones that cost far more than $300 — include treadmill-like devices on which you move through the virtual world by walking, running, or tilting your center of gravity.
But for consumer-grade devices, the options are currently limited to using a joystick, walking in place, leaning forward, or pointing and teleporting. (There are also electronic boots that keep you in place as you walk, for what it's worth.) These solutions usually work fine, but they produce an inherent sensory contradiction: your avatar is moving through the virtual world while your body remains still. The locomotion problem is why most VR games don't require swift character movements and why designers often compensate by having the player sit in a cockpit or otherwise limiting the game environment to a confined space.
For AR, one key hurdle is fine-tuning the technology to ensure that the virtual content you see through, say, a pair of smart glasses is optically consistent with physical objects and spaces. Currently, AR often appears clunky, unrooted from the real world. Incorporating LiDAR (Light Detection and Ranging) into AR devices may do the trick. The futurist Bernard Marr elaborated on his blog:
"[LIDAR] is essentially used to create a 3D map of surroundings, which can seriously boost a device's AR capabilities. It can provide a sense of depth to AR creations — instead of them looking like a flat graphic. It also allows for occlusion, which is where any real physical object located in front of the AR object should, obviously, block the view of it — for example, people's legs blocking out a Pokémon GO character on the street."
Another broad technological upgrade to XR technologies, especially AR, is likely to be 5G, which will boost the transmission rate of wireless data over networks.
"The adoption of 5G will make a difference in terms of new types of content being able to be viewed by more people," Irena Cronin, CEO of Infinite Retina, a research and advisory firm that helps companies implement spatial computing technologies, said in a 2020 XR survey report. "5G is going to make a difference for more sophisticated, heavy content being viewed live when needed by businesses."
Beyond technological hurdles, the AR sector still has to answer some more abstract questions on the consumer side: From a comfort and style perspective, do people really want to walk around wearing smart glasses or other wearable AR tech? (The failure of Google Glass suggests people were not quite ready to in 2014.) What is the value proposition of AR for consumers? How will companies handle the ethical dilemmas associated with AR technology, such as data privacy, motion sickness, and the potential safety hazards created by tinkering with how users see, say, a busy intersection?
Despite the hurdles, it seems likely that the XR industry will steadily — if clumsily — continue to improve these technologies, weaving them into more aspects of our personal and professional lives. The proof is in your pocket: Smartphones can already run AR applications that let you see prehistoric creatures, true-to-size IKEA furniture in your living room, navigation directions overlaid on real streets, paintings at the Vincent Van Gogh exhibit, and, of course, Pokémon. So, what's next?
The future of immersive experiences
When COVID-19 struck, it not only brought a surge in sales of XR devices and applications but also made a case for rethinking how workers interact in physical spaces. Zoom calls quickly became the norm for office jobs. But for some, prolonged video calls became annoying and exhausting; the term "Zoom fatigue" caught on and was even researched in a 2021 study published in Technology, Mind, and Behavior.
The VR company Spatial offered an alternative to Zoom. Instead of talking to 2D images of coworkers on a screen, Spatial virtually recreates office environments where workers — more specifically, their avatars — can talk and interact. The experience isn't perfect: your avatar, which is created by uploading a photo of yourself, looks a bit awkward, as do the body movements. But the experience is good enough to challenge the idea that working in a physical office is worth the trouble.
Cyberspace illustration. Credit: tampatra via Adobe Stock
That's probably the most relatable example of an immersive environment people may soon encounter. But the future is wide open. Immersive environments may also be used on a wide scale to:
- Conduct job interviews, potentially with gender- and race-neutral avatars to eliminate possibilities of discriminatory hiring practices
- Ease chronic pain
- Help people overcome phobias through exposure therapy
- Train surgeons to conduct complex procedures, which may be especially beneficial to doctors in nations with weaker healthcare systems
- Prepare inmates for release into society
- Educate students, particularly in ways that cut down on distractions
- Enable people to go on virtual dates
But the biggest transformation XR technologies are likely to bring us is a high-fidelity connection to the "mirror world." The mirror world is essentially a 1:1 digital map of our world, created by the fusion of all the data collected through satellite imagery, cameras, and other modeling techniques. It already exists in crude form. For example, if you need directions on the street, you can open Google Maps AR, point your camera in a certain direction, and your screen will show you that Main Street is 223 feet in front of you. But the mirror world will likely become far more sophisticated than that.
Through the looking glass of AR devices, the outside world could be transformed in any number of ways. Maybe you are hiking through the woods and you notice a rare flower; you could leave a digital note suspended in the air so the next passerby can check it out. Maybe you encounter something like an Amazon Echo in public and, instead of it looking like a cylindrical tube, it appears as an avatar. You could be touring Dresden in Germany and choose to see a flashback representation of how the city looked after the bombings of WWII. You might also run into your friends — in digital avatar form — at the local bar.
Of course, this future poses no shortage of troubling aspects, ranging from privacy concerns and pollution from virtual advertisements to the currently impossible-to-answer psychological consequences of creating such an immersive environment. But despite all the uncertainties, the foundations of the mirror world are being built today.
As for what may lie beyond it? Ivan Sutherland, the creator of The Sword of Damocles, once described his idea of an "ultimate" immersive display:
"...a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked."