The great free will debate
Philosophers, theoretical physicists, psychologists, and others consider what or who is really in control.
DANIEL DENNETT: For billions of years on this planet there was life, but no free will. Physics hasn't changed, but now we have free will. The difference is not in physics. It has to do, ultimately, with biology. Particularly evolutionary biology. What has happened over those billions of years is that greater and greater competences have been designed and have evolved. And the cognitive competence of a dolphin, or of a chimpanzee, the sort of mental competence, is hugely superior to the competence of a lobster, or a starfish. But ours dwarfs the competence of a dolphin or a chimpanzee, perhaps to an even greater extent. And there's an entirely naturalistic story to tell about how we came to have that competence, or those competences. And it's that "can do," it's that power that we have, which is natural, but which sets us apart from every other species. And the key to it is that we don't just act for reasons. We represent our reasons to ourselves and to others. The business of asking somebody, "Why did you do that?" and the person being able to answer is the key to responsibility. And in fact, the word "responsibility" sort of wears its meaning on its sleeve. We are responsible because we can respond to challenges to our reasons. Why? Because we don't just act for reasons, we act for reasons that we consciously represent to ourselves. And this is what gives us the power and the obligation to think ahead, to anticipate, to see the consequences of our actions. To be able to evaluate those consequences in the light of what other people tell us. To share our wisdom with each other. No other species can do anything like it. And it's because we can share our wisdom that we have a special responsibility.
That's what makes us free in a way that no bird is free, for instance. There's a very sharp limit to the depth that we as conscious agents can probe our own activities. This sort of superficial access that we have to what's going on, that's what consciousness is. Now, when I ask, who's this "we" who's got this access? That's itself part of the illusion, because there isn't a sort of boss part of the brain that's sitting there with this limited access. What it is, is a bunch of different subsystems, which have varying access to varying things, and which conspire in a sort of competitive way to execute whatever projects it is that they're, in their sort of mindless way, executing.
STEVEN PINKER: I don't believe there's such a thing as free will in the sense of a ghost in the machine, a spirit or soul that somehow reads the TV screen of the senses and pushes buttons and pulls levers of behavior. There's no sense that we can make of that. I think our behavior is the product of physical processes in the brain. On the other hand, when you have a brain that consists of a hundred billion neurons, connected by a hundred trillion synapses, there is a vast amount of complexity. That means that human choices will not be predictable in any simple way from the stimuli that have impinged on the brain beforehand. We also know that the brain is set up so that there are at least two kinds of behavior. There's what happens when I shine a light in your eye and your iris contracts, or I hit your knee with a hammer and your leg jerks upward. We also know that there's a part of the brain that does things like choose what to have for dinner, whether to order chocolate or vanilla ice cream, how to move the next chess piece, whether to pick up the paper or put it down. That is very different from your iris closing when I shine a light in your eye. It's that second kind of behavior, one that engages vast amounts of the brain, particularly the frontal lobes, that incorporates an enormous amount of information in the causation of the behavior, that has some mental model of the world, that can predict the consequences of possible behaviors and select among them on the basis of those consequences. All of those things carve out the realm of behavior that we call free will. Which it is useful to distinguish from brute involuntary reflexes, but which doesn't necessarily have to involve some mysterious soul.
ROBERT SAPOLSKY: The polite thing that I've sort of said for decades is that, well, if there's free will, it's in all the boring places, and those places are getting more and more cramped. If you want to insist that today you decided to floss your teeth starting on your upper teeth rather than your lower teeth, and that that was an act of free will, whatever, I'll grant that one to you. That's where the free will is. In reality, I don't think there's any free will at all. Look at the things that come into play as to whether or not someone is going to do the right thing in the next two seconds amid a temptation to do otherwise. The variables in there reflect everything from whether they're having gas pains that day, because of something unpleasant they ate that morning, which makes us more selfish, more impulsive, et cetera, to what epigenetic effects occurred to them when they were a first-trimester fetus. Look at the number of things we recognize now as biological, organic, where 500 years ago, or five years ago, we would have had a harsh moral judgment about them, and instead we now know, "Oh, that's a biological phenomenon." When are we gonna get to the point of recognizing, "Yeah, we're biological organisms"? This notion of free will, for want of a less provocative word, is nothing but a myth.
BILL NYE: Our brains are complicated, and they got this big, or as big as they are, organically, through evolution, with layer being added upon layer. So our ability to choose is often confused. Our ability to make choices is often affected by the environment, by our experiences, by biochemistry, and by the shape of our brain.
MICHIO KAKU: Well, you ask one of the deepest philosophical questions of physics: the question of free will. First of all, there's something called Newtonian determinism. Newtonian determinism says that the universe is a clock, a gigantic clock that was wound up at the beginning of time and has been ticking ever since, according to Newton's laws of motion. So what you're gonna eat 10 years from now on January 1st has already been fixed. It's already known, using Newton's laws of motion. Einstein believed in that. Einstein was a determinist. And some people asked Einstein, "Well, does that mean that a murderer, a horrible mass murderer, isn't really guilty of his works, 'cause it was already preordained billions of years ago?" And Einstein said, "Well, yeah, in some sense that's true. Even mass murderers were predetermined. But," he said, "they should still be placed in jail," okay? Heisenberg then comes along, proposes the Heisenberg uncertainty principle, and says, "Nonsense. There's uncertainty. You don't know where the electron is. It could be here, here, or many places simultaneously." And this, of course, Einstein hated, because he said, "God doesn't play dice with the universe." Well, hey, get used to it. Einstein was wrong. God does play dice. Every time we look at an electron, it moves. There's uncertainty with regards to the position of the electron. So what does that mean for free will? It means, in some sense, we do have some kind of free will, in the sense that no one can determine your future events given your past history. There's always the wild card. There's always the possibility of uncertainty in whatever we do. So, free will, determining the future? Hey, these are philosophical questions that seem to indicate that we have some kind of free will.
JOSCHA BACH: Like consciousness, free will is often misunderstood because we know it by reference, but it's difficult to know it by content: what you really mean by free will. A lot of people will immediately feel that free will is related to whether the universe is deterministic or probabilistic. And while physics has some ideas about that, which change every now and then, it's not part of our experience. And I don't think it makes a difference whether the universe forces you to do things randomly or deterministically. The important thing seems to me that in free will you are responsible for your actions. And responsibility is a social interface. For instance, if I am told that if I do X I go to prison, and this changes my decision whether or not to do X, I'm obviously responsible for my decision, because it was an appeal to my responsibility, in some sense. Or likewise, if I do a certain thing that causes harm to other people and I don't want that harm to happen, that influences my decision. This is the discourse of decision-making that I would call a free will decision. Will is the representation that my nervous system, at any level of its functioning, has raised a motive to an intention. It has committed to a particular kind of goal, and this gets integrated into the story of myself, this protocol that I experience as myself in this world. And that is what I experience as will, as a real decision. And this decision is free inasmuch as it can be influenced by discourse.
MICHAEL GAZZANIGA: The essential part of free will that people wanna hold on to is the sense that it therefore makes you responsible for your actions. So there is the idea of personal responsibility, and I think that's very important. And I don't think that all this mechanistic work on the brain in any way threatens that. You learn that responsibility is to be understood at the social level: the deal, the rules that we work out living together. So the metaphor I like to use is cars and traffic. We can study cars and all their physical relationships and know exactly how they work. That in no way prepares us to understand traffic, when they all get together and start interacting. That's another level of organization and description, of these elements interacting. So the same is true with brains. We can understand brains to the nth degree, and that's fine, and that's what we're doing, but it's not going to in any way interfere with the fact that taking responsibility in a social network is done at that level. So the way I sum it up is that brains are automatic, but people are free, because people are joining the social group, and in that group are laws to live by. And it's interesting: in every social network, whether it's artifactual, like the internet, or people, accountability is essential, or the whole thing just falls apart.
DENNETT: Intuition pumps are more often called thought experiments, but they're not really formal arguments. Typically, they're stories. They're little fables. In fact, I think they're similar to Aesop's fables in that they're supposed to have a moral. They're supposed to teach us something. And what they do is lead the audience to an intuition, a conclusion, where you sort of pound your fist on the table and say, "Oh yeah, it's gotta be that way, doesn't it?" And if it achieves that, then it's pumped the intuition it was designed to pump. These are persuasion machines, little persuasion machines that philosophers have been using for several thousand years.
One of my recent favorites is one I devised to jangle the nerves of neuroscientists who've been going around saying that neuroscience shows that we don't have free will. I think their reasons for saying that are ill-considered, and moreover that what they're doing is apt to be mischievous and to do some real harm. So I concocted a little thought experiment, a little intuition pump, to suggest that. This is the case of the nefarious neurosurgeon, who treats a patient who has obsessive-compulsive disorder by inserting a little microchip in his brain, which controls the OCD. Now, there is such a chip. It's been developed in the Netherlands and it works really quite well. That's science fact, but now here comes science fiction. The neurosurgeon, after she's operated on the guy and sewed him all up, says, "Okay, your OCD is under control now, you'll be happy to learn. But moreover, our team here will be monitoring you 24/7, and we're going to be controlling everything you do from now on. You will think you have free will. You'll think you're making your own decisions, but really you won't have free will at all. Free will is an illusion that we will maintain while controlling you. Goodbye, have a nice life." She sends him out the door. Well, he believes her. She has a shiny lab and, you know, lots of degrees and diplomas and all that. So what does he do? Thinking he doesn't have free will anymore, he gets a little self-indulgent, a little bit aggressive, a little negligent in how he decides what to do. And pretty soon, by indulging some of his worst features, he's gotten himself in trouble with the law. He's arrested and put on trial. And at the trial he says, "But your honor, I don't have free will. I'm under the control of the team at the neurosurgery clinic." They say, "What's this?" And they call the neurosurgeon to the stand.
They ask, "Did you tell this man that you were controlling his every move, that he didn't have free will?" She says, "Yeah, I did, but I was just messing with his head. That was just a joke. I didn't think he'd believe me." Now, right there, I think we can stop, take a deep breath, and say: she did something really bad. She really harmed that man. In fact, her little "joke," telling him that, actually accomplished non-surgically pretty much what she claimed to accomplish surgically. She disabled him by telling him he didn't have free will. She pretty much turned his free will off and turned him into a morally incompetent person.
Now, if we agree that she did a bad thing, if nobody recommends playing jokes like this, what are we to say about the neuroscientists who are telling the public every day, "We've shown in our neuroscience labs that nobody has free will"? I think if the neuroscientists recognize that what my imaginary neurosurgeon did was irresponsible, they should think seriously about whether it's irresponsible of them to make these claims about free will. And it's not just a fantasy. Vohs and Schooler, in an important paper, which has been replicated in several different ways, set up an experiment to test this with college students, who were given two texts to read. Both were from Francis Crick's book, "The Astonishing Hypothesis." One was not about free will. The other was about free will, and basically it said, "Free will is an illusion. All your decisions are actually determined by causes that neuroscience is investigating. You don't have free will. That's just an illusion." All right, so there we have two groups: the group that read that passage and the group that read another passage from that book of the same length. After they've read the passage, they are given a puzzle to solve where they can earn some money by solving it. And the experimenters cleverly made the puzzles slightly defective, so there was a way of cheating on the puzzle that was, oops, inadvertently revealed to the subjects. And guess what? The subjects who'd read the passage where Crick says free will is an illusion cheated at a much higher rate than the others. In other words, just reading that passage had the effect of making them less concerned about the implications of their actions, and they became, as it were, negligent, or worse, in their own decision-making.
- What does it mean to have—or not have—free will? Were the actions of mass murderers pre-determined billions of years ago? Do brain processes trump personal responsibility? Can experiments prove that free will is an illusion?
- Bill Nye, Steven Pinker, Daniel Dennett, Michio Kaku, Robert Sapolsky, and others approach the topic from their unique fields and illustrate how complex and layered the free will debate is.
- From Newtonian determinism, to brain chemistry, to a Dennett thought experiment, explore the arguments that make up the free will landscape.
Gain-of-function mutation research may help predict the next pandemic — or, critics argue, cause one.
This article was originally published on our sister site, Freethink.
"I was intrigued," says Ron Fouchier, in his rich, Dutch-accented English, "in how little things could kill large animals and humans."
It's late evening in Rotterdam as darkness slowly drapes our Skype conversation.
This fascination led the silver-haired virologist to venture into controversial gain-of-function mutation research — work by scientists that adds abilities to pathogens, including experiments that focus on SARS and MERS, the coronavirus cousins of the COVID-19 agent.
If we are to avoid another influenza pandemic, we will need to understand the kinds of flu viruses that could cause it. Gain-of-function mutation research can help us with that, says Fouchier, by telling us what kind of mutations might allow a virus to jump across species or evolve into more virulent strains. It could help us prepare and, in doing so, save lives.
Many of his scientific peers, however, disagree; they say his experiments are not worth the risks they pose to society.
A virus and a firestorm
The Dutch virologist, based at Erasmus Medical Center in Rotterdam, caused a firestorm of controversy about a decade ago, when he and Yoshihiro Kawaoka at the University of Wisconsin-Madison announced, in two separate experiments, that they had successfully mutated H5N1, a strain of bird flu, to pass through the air between ferrets. Ferrets are considered the best flu models because their respiratory systems react to the flu much like humans' do.
The mutations that gave the virus its ability to be airborne transmissible are gain-of-function (GOF) mutations. In GOF research, scientists purposefully cause mutations that give viruses new abilities in an attempt to better understand the pathogen. In Fouchier's experiments, the researchers wanted to see if H5N1 could be made airborne transmissible, so that potentially dangerous strains could be caught early and new treatments and vaccines developed ahead of time.
The problem is: their mutated H5N1 could also cause a pandemic if it ever left the lab. In Science magazine, Fouchier himself called it "probably one of the most dangerous viruses you can make."
Just three special traits
Recreated 1918 influenza virions. Credit: Cynthia Goldsmith / CDC / Dr. Terrence Tumpey / Public domain via Wikipedia
For H5N1, Fouchier identified five mutations that could cause three special traits needed to trigger an avian flu to become airborne in mammals. Those traits are (1) the ability to attach to cells of the throat and nose, (2) the ability to survive the colder temperatures found in those places, and (3) the ability to survive in adverse environments.
A minimum of three mutations may be all that's needed for a virus in the wild to make the leap through the air in mammals. If it does, it could spread. Fast.
Fouchier calculates the odds of this happening to be fairly low, for any given virus. Each mutation has the potential to cripple the virus on its own. They need to be perfectly aligned for the flu to jump. But these mutations can — and do — happen.
"In 2013, a new virus popped up in China," says Fouchier. "H7N9."
H7N9 is another kind of avian flu, like H5N1. The CDC considers it the most likely flu strain to cause a pandemic. In the human outbreaks that occurred between 2013 and 2015, it killed a staggering 39% of known cases; if H7N9 were to have all five of the gain-of-function mutations Fouchier had identified in his work with H5N1, it could make COVID-19 look like a kitten in comparison.
H7N9 had three of those mutations in 2013.
Gain-of-function mutation: creating our fears to (possibly) prevent them
Flu viruses are basically eight pieces of RNA wrapped up in a ball. To create the gain-of-function mutations, the research used a DNA template for each piece, called a plasmid. Making a single mutation in the plasmid is easy, Fouchier says, and it's commonly done in genetics labs.
If you insert all eight plasmids into a mammalian cell, they hijack the cell's machinery to create flu virus RNA.
"Now you can start to assemble a new virus particle in that cell," Fouchier says.
One infected cell is enough to grow many new virus particles, from one to a thousand to a million; viruses are replication machines. And because they mutate so readily during their replication, the new viruses have to be checked to make sure they only have the mutations the lab caused.
The virus then goes into the ferrets, passing through them to generate new viruses until, by the 10th generation, it can infect ferrets through the air. By analyzing the virus's genes in each generation, the researchers could figure out exactly which five mutations led to H5N1 bird flu becoming airborne between ferrets.
And, potentially, people.
"This work should never have been done"
The potential for the modified H5N1 strain to cause a human pandemic if it ever slipped out of containment has sparked sharp criticism and no shortage of controversy. Rutgers molecular biologist Richard Ebright summed up the far end of the opposition when he told Science that the research "should never have been done."
"When I first heard about the experiments that make highly pathogenic avian influenza transmissible," says Philip Dormitzer, vice president and chief scientific officer of viral vaccines at Pfizer, "I was interested in the science but concerned about the risks of both the viruses themselves and of the consequences of the reaction to the experiments."
In 2014, in response to researchers' fears and some lab incidents, the federal government imposed a moratorium on all GOF research, freezing the work.
Some scientists believe gain-of-function mutation experiments could be extremely valuable in understanding the potential risks we face from wild influenza strains, but only if they are done right. Dormitzer says that a careful and thoughtful examination of the issue could lead to processes that make gain-of-function mutation research with viruses safer.
But in the meantime, the moratorium stifled some research into influenzas — and coronaviruses.
The National Academy of Sciences whipped up some new guidelines, and in December of 2017, the call went out: GOF studies could apply to be funded again. A panel formed by Health and Human Services (HHS) would review applications and decide which studies to fund.
As of right now, only Kawaoka and Fouchier's studies have been approved, getting the green light last winter. They are resuming where they left off.
Pandora's locks: how to contain gain-of-function flu
Here's the thing: the work is indeed potentially dangerous. But there are layers upon layers of safety measures at both Fouchier's and Kawaoka's labs.
"You really need to think about it like an onion," says Rebecca Moritz of the University of Wisconsin-Madison. Moritz is the select agent official responsible for Kawaoka's lab. Her job is to ensure that all safety standards are met and that protocols are created and drilled; basically, she's there to prevent viruses from escaping. And this virus has some extra-special considerations.
The specific H5N1 strain Kawaoka's lab uses is on a list called the Federal Select Agent Program. Pathogens on this list need to meet special safety considerations. The GOF experiments have even more stringent guidelines because the research is deemed "dual-use research of concern."
"Dual-use research of concern is legitimate research that could potentially be used for nefarious purposes," Moritz says. At one time, there was debate over whether Fouchier and Kawaoka's work should even be published.
While the insights they found would help scientists, they could also be used to create bioweapons. The papers had to pass through a review by the U.S. National Science Advisory Board for Biosecurity, but they were eventually published.
Intentional biowarfare and terrorism aside, the gain-of-function mutation flu must be contained even from accidents. At Wisconsin, that begins with the building itself. The labs are specially designed to be able to contain pathogens (BSL-3 agricultural, for you Inside Baseball types).
They are essentially airtight cement bunkers, negatively pressurized so that air will only flow into the lab in case of any breach, keeping the viruses pushed in. And all air entering or leaving the lab passes through multiple HEPA filters.
Inside the lab, researchers wear special protective equipment, including respirators. Anyone entering or leaving the lab must go through an intricate dance involving stripping and putting on various articles of clothing and passing through showers and decontamination.
And the most dangerous parts of the experiment are performed inside primary containment. For example, a biocontainment cabinet, which acts like an extra high-security box, inside the already highly-secure lab (kind of like the radiation glove box Homer Simpson is working in during the opening credits).
The Federal Select Agent program can come and inspect you at any time with no warning, Moritz says. At the bare minimum, the whole thing gets shaken down every three years.
There are numerous potential dangers — a vial of virus gets dropped; a needle prick; a ferret bite — but Moritz is confident that the safety measures and guidelines will prevent any catastrophe.
"The institution and many people behind the institution are working to make sure this research can be done safely and securely," Moritz says.
No human harm has come of the work yet, but the potential for it is real.
"Nature will continue to do this"
They were dead on the beaches.
In the spring of 2014, another type of bird flu, H10N7, swept through the harbor seal population of northern Europe. Starting in Sweden, the virus moved south and west, across Denmark, Germany, and the Netherlands. It is estimated that 10% of the entire seal population was killed.
The virus's evolution could be tracked through time and space, Fouchier says, as it progressed down the coast. Natural selection pushed through gain-of-function mutations in the seals, similarly to how H5N1 evolved to better jump between ferrets in his lab — his lab which, at the time, was shuttered.
"We did our work in the lab," Fouchier says, with a high level of safety and security. "But the same thing was happening on the beach here in the Netherlands. And so you can tell me to stop doing this research, but nature will continue to do this day in, day out."
Critics argue that the knowledge gained from the experiments is either non-existent or not worth the risk; Fouchier argues that GOF experiments are the only way to learn crucial information on what makes a flu virus a pandemic candidate.
"If these three traits could be caused by hundreds of combinations of five mutations, then that increases the risk of these things happening in nature immensely," Fouchier says.
"With something as crucial as flu, we need to investigate everything that we can," Fouchier says, hoping to find "a new Achilles' heel of the flu that we can use to stop the impact of it."
From "mutilated males" to "wandering wombs," dodgy science affects how we view the female body still today.
- The history of medicine and biology often has been embarrassingly wrong when it comes to female anatomy and was surprisingly resistant to progress.
- Aristotle and the ancient Greeks are much to blame for the mistaken notion of women as cold, passive, and little more than a "mutilated man."
- Thanks to this dubious science, and the likes of Sigmund Freud, we live today with a legacy that judges women according to antiquated biology and psychology.
The story of medicine has not been particularly kind to women. Not only was little anatomical or scientific research done on women or on women-specific issues, but doctors also often treated women differently than men.
Even today, women are up to ten times more likely to have their symptoms explained away as being psychological or psychosomatic than men. Worryingly, women are 50 percent more likely to be misdiagnosed after a heart attack, and drugs designed for "everyone" are actually much less effective (for pain) or too effective (for sleeping) in women.
Are these differences real or imagined? And what can the history of female medicine teach us about where we are today?
A mutilated male
Aristotle is rightly considered one of the greatest minds of all time and is recognized as the founding father of many disciplines, including biology. He was one of the most rigorous and comprehensive scientists and field researchers the world had known. He categorized a large number of species based on a wide range of traits, such as movement, longevity, and sensory capacity. His views on women, then, stemmed from what he thought of as good, proper study. The problem is that he got pretty much all of it wrong.
According to Aristotle, during pregnancy, it was the man who, alone, contributed the all-important "form" of a fetus (that is, its defining nature and personality), whereas the woman provided only the matter (that is, the environment and sustenance to grow the fetus, which was provided by the menstrual blood).
From this, Aristotle extrapolated all sorts of dubious conclusions. He ventured that the man was superior, active, and dominant, and the woman inferior, passive, and submissive. As such, the woman's role was to nurture children, run a household, and be silent and obedient — political and cultural manifestations of dodgy biology. If women did not provide a child's form and nature, how important could they really be?
Given this passivity, Aristotle argued that the woman must be associated with other passive things, like being cold and slow. The man, being dynamic and energetic, must be hot and fast. From this, Aristotle concluded that any defects or problems in childbirth can only be due to the sluggishness of the female womb. Even the positive biological aspects of being female, such as greater longevity, were put down to this cold rigidity — a lack of metabolism and spirit. Most notorious of all, since Aristotle believed that female children were themselves the result of an incomplete and underdeveloped gestation, women were simply "mutilated males" whose mothers' cold wombs had overpowered the warm, vital, male sperm.
Aristotle can still be counted as a great mind, but when it came to women, his ideas have not aged well, and they negatively influenced much of what came after. Given that his works were seen as the authority well into the 16th century, he left quite the pernicious legacy.
A wandering womb
But, how much can we really blame Aristotle? Without the aid of modern scientific equipment, physicians and biologists were left to guess about female anatomy. Unfortunately, the damage was done, and Aristotle's ideas of a troublesome uterus became so mainstream that they led to one of the more bizarre ideas in medical history: the wandering womb.
The "wandering womb" is the idea that the womb is actually some kind of roaming parasite in the body, possibly even a separate organism. According to this theory, after a woman menstruates, her womb becomes hot and dry and so becomes extra mobile. It is transformed into a voracious hunter. The womb will dart from organ to organ, seeking to steal its moisture and other vital fluids. This parasitic behavior caused all sorts of (female only) illnesses.
If a woman had asthma, the womb was leeching the lungs. Stomach aches, it was in the gut. And if it attacked the heart (which the ancients thought was the source of our thoughts), then it would cause all manner of mental health issues. In fact, the Greek word for womb is "hystera," and so when we call someone (often a woman) hysterical, we are saying that their womb is causing mischief.
The "solutions" or "remedies" for a wandering womb were as strange as the theory. Since the womb was supposed to be attracted to sweet smells, placing flowers or perfumes around the vagina would "lure" it down. On the flip side, if you smoked noxious substances or ate disgusting foods, it would "repel" the womb away. By using all manner of smells, you could make the womb move wherever you wanted.
The oddest "remedy" — and most male-centric of all — is that, since the wandering womb was said to be caused by heat and dryness, a good solution would be male semen, which was thought of as cooling and wet. And so, the ancient and highly inaccurate myth was born that sex could cure a woman of her "hysteria."
A lingering problem
We live today with the legacy of this kind of thinking. Freud was much taken with the idea of "hysteria," and although he did accept that men could be subject to it as well, he believed it was overwhelmingly a female problem caused by female biology. The woman, for Freud, is mostly defined by her "sexual function." What Freud calls "normal femininity" (the preferred and best outcome) is defined by passivity. A woman's ideal development is one which moves from being active and "phallic" to passive and vaginal.
Nowadays, Freud and Aristotle's legacy lies in just how easily women are defined by their sexuality. Given that men and women are equally dependent on their biology, it is curious how much more often women are reduced to theirs. The idea that women are more emotional or more enslaved to their hormones than men is still a depressingly familiar trope. It is an idea that goes back to the Greeks.
If we think biology is important to who we are (as it most certainly is), we ought to make sure that the biology is as good and accurate as it can be.
A global survey shows the majority of countries favor Android over iPhone.
- When Android was launched soon after Apple's own iPhone, Steve Jobs threatened to "destroy" it.
- Ever since, and across the world, the rivalry between both systems has animated users.
- Now the results are in: worldwide, consumers clearly prefer one side — and it's not Steve Jobs'.
A woman on her phone in Havana, Cuba. Mobile phones have become ubiquitous the world over — and so has the divide between Android and iPhone users. Credit: Yamil Lage / AFP via Getty Images.
Us versus them: it's the archetypal binary. It makes the world understandable by dividing it into two competing halves: labor against capital, West against East, men against women.
These maps are the first to show the dividing lines between one of the world's more recent binaries: Android vs. Apple. Published by Electronics Hub, they are based on a qualitative analysis of almost 350,000 tweets worldwide that presented positive, neutral, and negative attitudes toward Android and/or Apple.
Steve Jobs wanted to go "thermonuclear"
Feelings between Android and Apple were pretty tribal from the get-go. It was Steve Jobs himself who said, when Google rolled out Android a mere ten months after Apple launched the iPhone, "I'm going to destroy Android, because it's a stolen product. I'm willing to go thermonuclear war on this."
Buying a phone is like picking a side in the eternal feud between the Hatfields and the McCoys. Each choice in favor of one side automatically comes with a built-in arsenal of arguments against the other.
If you are an iPhone person, you appreciate the sleekness and simplicity of its design, and you are horrified by the confusing mess that is the Android operating system. If you are an Android aficionado, you pity the iPhone user, a captive of an overly expensive closed ecosystem, designed to extract money from its users.
Even without resorting to those extremes, many of us will recognize which side of the dividing line we are on. Like the American Civil War, that line runs through families and groups of friends, which would be confusing to chart geographically. To un-muddle the information, these maps zoom out to state and country level.
If the contest is based on the number of countries, Android wins. In all, 74 of the 142 countries surveyed prefer Android (in green on the map). Only 65 favor Apple (colored grey). That's roughly a 53/47 split of the decided countries, which may not sound like a decisive vote, but a slimmer margin was good enough for Boris Johnson to get Brexit done (after he got breakfast done, of course).
And yes, math-heads: 74 plus 65 is three short of 142. Belarus, Fiji, and Peru (in yellow on the map) could not decide which side to support in the Global Phone War.
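For the math-heads, the tally is easy to verify. A trivial sketch, using only the country counts quoted in this article:

```python
# Country counts from the Electronics Hub survey cited above.
android_countries = 74
apple_countries = 65
total_surveyed = 142

undecided = total_surveyed - android_countries - apple_countries
decided = android_countries + apple_countries

print(undecided)  # 3  (Belarus, Fiji, and Peru)
print(round(100 * android_countries / decided))  # 53 percent of decided countries
print(round(100 * apple_countries / decided))    # 47 percent
```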
What about the United States, home of both the Android and the iPhone? Another victory for the former, albeit a slightly narrower one: 30.16 percent of the tweets about Android were positive versus just 29.03 percent of the ones about Apple.
United States: Texas surrounded!
Credit: Electronics Hub
There can be only one winner per state, though, and that leads to this preponderance of Android logos. Frankly, it's a relief to see a map showing a visceral divide within the United States that is not the coasts versus the heartland.
- Apple dominates in 19 states: a solid Midwestern bloc, another bloc of states surrounding Texas, the Dakotas and California, plus North Carolina, New Hampshire, and Rhode Island.
- And that's it. The other 32 are the United States of Android. You can drive from Seattle to Miami without straying into iPhone territory. But no stopovers in Dallas or Houston – both are behind enemy lines!
North America: strongly leaning toward Android
Credit: Electronics Hub
Only eight of North America's 21 countries surveyed fall into the Apple category.
- The U.S. and Canada lean Android, while Mexico goes for the iPhone.
- Central America is divided, but here too Android comes out on top, winning 5-2.
Europe: Big Five divided
Credit: Electronics Hub
In Europe, Apple wins, with 20 countries preferring the iPhone, 17 going for Android, and Belarus sitting on the fence.
- Of Western Europe's Big Five markets, three (UK, Germany, Spain) are pro-Android, and two (France, Italy) are pro-Apple.
- Czechia and Slovakia are an Apple island in the Android sea that is Central Europe. Glad to see there is still something the divorcees can agree on.
South America: almost even
Credit: Electronics Hub
In South America, the divide is almost even.
- Five countries prefer Android, four Apple, and one is undecided.
- In Peru, both Android- and Apple-related tweets were 25 percent positive.
Africa: watch out for Huawei
Credit: Electronics Hub
In Africa, Android wins by 17 countries versus Apple's 15.
- There's a solid Android bloc running from South Africa via DR Congo all the way to Ethiopia.
- iPhone countries are scattered throughout the north (Algeria), west (Guinea), east (Somalia), and south (Namibia).
Huawei — increasingly popular across the continent — could soon dramatically change the picture in Africa. Currently still running on Android, the Chinese phone manufacturer has just launched its own operating system, called Harmony.
Middle East: Iran vs. Saudi Arabia (again)
Credit: Electronics Hub
In the Middle East and Central Asia, Android wins 8 countries to Apple's 6.
- But it's complicated. One Turkish tweeter wondered why iPhones seem more popular in the Asian half of Istanbul, while Android phones prevail in the European part of the city.
- The phone divide matches up with the region's main geopolitical one: Iran prefers Android, Saudi Arabia the iPhone.
Asia-Pacific: Apple on the periphery
Credit: Electronics Hub
Another wafer-thin majority for Android in the Asia-Pacific region: 13 countries versus 12 for Apple — and one abstention (Fiji).
- The two giants of the Asian mainland, India and China, are both Android countries. Apple countries are on the periphery.
- And if India is Android, its rival Pakistan must be Apple. Same with North and South Korea.
Experts point to the fact that both operating systems are becoming more alike with every new generation as a potential resolution to the conflict. But as any student of human behavior will confirm: smaller differences will only exacerbate the rivalry between both camps.
Maps taken from Electronics Hub, reproduced with kind permission.
Strange Maps #1096
Got a strange map? Let me know at firstname.lastname@example.org.
People tend to reflexively assume that fun events – like vacations – will go by really quickly.
For many people, summer vacation can't come soon enough – especially for the half of Americans who canceled their summer plans last year due to the pandemic.
But when a vacation approaches, do you ever get the feeling that it's almost over before it starts?
If so, you're not alone.
In some recent studies that Gabriela Tonietto, Sam Maglio, Eric VanEpps, and I conducted, we found that about half of the people we surveyed indicated that their upcoming weekend trip felt like it would end as soon as it started.
This feeling can have a ripple effect. It can change the way trips are planned – you might, for example, be less likely to schedule extra activities. At the same time, you might be more likely to splurge on an expensive dinner because you want to make the best of the little time you think you have.
Where does this tendency come from? And can it be avoided?
Not all events are created equal
When people look forward to something, they usually want it to happen as soon as possible and last as long as possible.
We first explored the effect of this attitude in the context of Thanksgiving.
We chose Thanksgiving because almost everyone in the U.S. celebrates it, but not everyone looks forward to it. Some people love the annual family get-together. Others – whether it's the stress of cooking, the tedium of cleaning or the anxiety of dealing with family drama – dread it.
So on the Monday before Thanksgiving in 2019, we surveyed 510 people online and asked them to tell us whether they were looking forward to the holiday. Then we asked them how far away it seemed, and how long they felt it would last. We had them move a 100-point slider – 0 meaning very short and 100 meaning very long – to a location that reflected their feelings.
As we suspected, the more participants looked forward to their Thanksgiving festivities, the farther away it seemed and the shorter it felt. Ironically, longing for something seems to shrink its duration in the mind's eye.
Winding the mind's clock
Most people believe the idiom "time flies when you're having fun," and research has, indeed, shown that when time seems to pass by quickly, people assume the task must have been engaging and enjoyable.
We reasoned that people might be over-applying their assumption about the relationship between time and fun when judging the duration of events yet to happen.
As a result, people tend to reflexively assume that fun events – like vacations – will go by really quickly. Meanwhile, pining for something can make the time leading up to the event seem to drag. With an event's beginning pushed farther away in their minds and its end pulled closer, our participants anticipated that something they looked forward to would feel as if it had almost no duration at all.
In another study, we asked participants to imagine going on a weekend trip that they either expected to be fun or terrible. We then asked them how far away the start and end of this trip felt, using a similar 0 to 100 scale. 46% of participants evaluated the positive weekend as feeling like it had no duration at all: they marked the beginning and the end of the vacation at virtually the same location on the slider scale.
Thinking in hours and days
Our goal was to show how these two judgments of an event – the fact that it simultaneously seems farther away and is assumed to last for less time – can nearly eliminate the event's duration in the mind's eye.
We reasoned that if we didn't explicitly highlight these two separate pieces – and instead asked participants directly about the duration of the event – a smaller portion of people would indicate virtually no duration for something they looked forward to.
We tested this theory in another study, in which we told participants that they would watch two five-minute-long videos back-to-back. We described the second video as either humorous or boring, and then asked them how long they thought each video would feel like it lasted.
We found that the participants predicted that the funny video would still feel shorter and farther away than the boring one. But we also found that they believed it would last a bit longer than respondents in the earlier studies had indicated.
This finding gives us a way to overcome this biased perception: focus on the actual duration. Because in this study, participants directly reported how long the funny video would last – and not the perceived distance of its beginning and its end – they were far less likely to assume it would be over just as it started.
While it sounds trivial and obvious, we often rely on our subjective feelings – not objective measures of time – when deciding how long a period of time will feel and how to best use it.
So when looking forward to much-anticipated events like vacations, it's important to remind yourself just how many days it will last.
You'll get more out of the experience – and, hopefully, put yourself in a better position to take advantage of the time you do have.