Space is dead: A challenge to the standard model of quantum mechanics
Since the idea of locality is dead, space itself may not be an aloof vacuum: Something welds things together, even at great distances.
Lee attended Harvard University for graduate school receiving a Ph.D. in theoretical physics in 1979. He held postdoctoral positions at the Institute for Advanced Study in Princeton, The Institute for Theoretical Physics (now KITP) in Santa Barbara and the Enrico Fermi Institute at the University of Chicago. This was followed by faculty positions at Yale, Syracuse and Penn State Universities, where he helped to found the Center for Gravitational Physics and Geometry. In September of 2001 he moved to Canada to be a founding member of the Perimeter Institute for Theoretical Physics, where he has been ever since.
Lee's main contributions to research have so far been to the field of quantum gravity. He was, with Abhay Ashtekar and Carlo Rovelli, a founder of the approach known as loop quantum gravity, but he has contributed to other approaches including string theory and causal dynamical triangulations. He is also known for proposing the notion of the landscape of theories, based on his application of Darwinian methods to cosmology. He has contributed also to the foundations of quantum mechanics, elementary particle physics, and theoretical biology. He also has a strong interest in philosophy, and his three books, The Life of the Cosmos, Three Roads to Quantum Gravity, and The Trouble with Physics, are in part philosophical explorations of issues raised by contemporary physics.
LEE SMOLIN: So let's first define what we mean by being a realist. What I mean when I say I'm a realist is that I believe that there is an exactly understandable way the world is, which is independent of our intervention, independent of our existence, our knowledge. For everything that happens in nature, it's possible to give a complete individual description of that process or those events.
And from there, to understand the causal processes behind those events, to understand exactly what goes on in a physical process. Now, for reasons which have a lot to do with World War I, and a lot to do with philosophy and things that I'm not a scholar of, the predominant view among educated people in Europe in the 1920s and 1930s was not realism. They didn't believe that realism was possible.
They thought that science was a way of speaking about our interactions with, and our interventions in, nature. That the concepts that we use, like wave or particle or law or causality or energy, represent our own intuitions, and are useful to describe what happens when an atomic system interacts with a measurement device. But they do not extend to concepts that stand alone, that have meaning when applied just to the atomic system without the context of the experiment.
And I'm trying to talk here the way that Niels Bohr talked, because he was the most radical of these anti-realist thinkers. Let's talk about what it means to be not a realist. To be not a realist is to hold that realism is too ambitious and too hard. They would say something like, "Our concepts come from our experience." Our experience of the world is of big things: we throw them, they bounce back and forth. We play with balls. We sail; we have some intuition about water and wind and so forth.
And then we go into the laboratory and we try to take apart an atom and understand what its components are. Well, there are things called electrons and nuclei and protons and neutrons and quarks. But what are these things? We get very confused when we try to understand what they are. And it is indeed confusing. They're not balls like a baseball or a soccer ball. They're not waves like a breeze kicking up some ripples on a lake. But they're something that, to describe them, sometimes we can use the first kind of concept, and sometimes it seems like we need to use the second kind of concept. So they're stretching the limits of what our concepts allow us to describe.
So it's natural that some people just get impatient with trying to understand what's really there, and trying to invent a concept that fits all the cases of what's really there. And instead they say, "Well, let's lower our ambition. Let's just say we're pragmatically describing what happens when we measure and interact with these things. So we don't know what an electron is exactly, but we know that we can put it in a certain kind of experiment and diffract it and measure its wavelength, just as if it were a water wave or a sound wave. We put it in another experiment and we can bounce it off something and measure its energy and momentum.
So it seems to be a material particle. And then we wonder, can we contradict ourselves?" Again, I'm trying to fake being Niels Bohr, which is hard to do. Not just the accent, but his whole way of thinking. He was certainly mystifying and mystical, in the proper sense of the word of being a mystic. But he was reaching for some accommodation as to which concepts were useful; he didn't claim that the concepts described nature. They described our interaction with nature. And they could contradict each other, because something can't be both a wave and a particle: a wave is spread out and a particle is localized on a trajectory.
So it seems like you're in danger of contradicting yourself. And he said, well, but maybe the situation is so beautiful that there are subtle barriers that prevent you from ever saying it's a wave-like thing and it's a particle-like thing at the same time. Then he generalized right away, because he saw bigger applications. He said, well, people talk about God's love one Sunday, and then they talk about God's justice the next Sunday. Between God's love and God's justice, there is a contradiction: if God's justice is true and strict, then... Or as parents, we run into this all the time. How do we be just and loving at the same time?
And so Niels Bohr said, "Maybe there's a general concept to express the way that these contradictory concepts seem part of a whole." And he called that complementarity. And he proposed that this was a new principle for science, for human life, for religion, for art, for everything. He got quite ambitious. The story that I tell in the book, and that the first two-thirds of the book is built around, is the battle between the realists and the non-realists, or the anti-realists.
The way that quantum mechanics was formulated in the textbooks, and the way that we teach it now, was finalized in 1927, roughly. And that was done along anti-realist lines. But simultaneously, there were a handful of physicists, and these were not quacks; these were good physicists, some of them Nobel Prize winners or shortly to be, who invented realist alternatives. And one of those that I talk about is what's called the pilot wave theory of Louis de Broglie, who was a Frenchman, which apparently put him socially a little bit in the outgroup at this period. And he was wealthy.
In fact, he was an aristocrat, which also put him outside the social group. So he wasn't really hanging out. And there's a whole lot to say about how that separation let him have an idea that nobody else had, which in retrospect was an obvious idea. First, an idea that eventually everybody embraced: that the wave-particle paradox, the wave-particle duality, as Einstein called it, applies not just to light. He was the first person to say that the duality between waves and particles applies to everything: all material particles, as well as radiation and light. And that was accepted, and for that, he was given the Nobel Prize.
But he went on to say that he could explain the paradox that Bohr wrestled with by saying there are waves and there are particles. Both exist; both are real. The waves propagate and the particles follow the waves. And this is what he called the pilot wave theory, because the waves are guiding, or piloting, the particles around. And this works. He developed it in the late '20s, at the same time as the other theory was developed. It gives all the same predictions, and explains at least as much, if not more, than the standard version of quantum mechanics. It was done by somebody who was quite famous for his other work, and yet it was ignored, completely ignored. So it appears in no textbook.
Except for a small school of acolytes in France, in Paris, around him, there was virtually nobody who developed his idea. And in fact, he even came to disbelieve in it. Everything that ordinary quantum mechanics explains, the pilot wave theory explains, but the reverse is not true, because there's a long list of things that the regular theory does not explain. The regular theory does not explain, if you have a radioactive nucleus, exactly when it will decay. It just predicts that there is a certain probability per unit time that the atom or the nucleus will decay. But it doesn't tell you why and when it decays.
It chooses some moment, but the theory doesn't tell you why that moment rather than another. If you want to describe an electron and its motion within an atom, the standard theory just says there is a wave function, which gives a probability distribution to find the electron somewhere in the vicinity of the atom. But it doesn't tell you where it is or how it's moving. And then a photon comes and knocks it out of the atom. The theory doesn't tell you exactly how it absorbed the energy of that photon, how it reacted to it, and how it jumped out of the atom. The "it" being the electron. The pilot wave theory tells you, in detail, everything: exactly how, why, and when. So the regular theory is probabilistic.
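How the wave pilots the particle can be stated precisely. In de Broglie's theory (and in David Bohm's later revival of it, a detail beyond this transcript), the wave function $\psi$ deterministically fixes the velocity of the particle at its actual position $\mathbf{Q}(t)$ through the guidance equation:

$$
\frac{d\mathbf{Q}}{dt} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\Bigg|_{\mathbf{Q}(t)}
$$

Given the wave and the particle's initial position, the entire trajectory follows, which is why the theory can answer the how, why, and when that the standard formulation leaves open.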
Chance is fundamental. Uncertainty is fundamental. Statistics is necessary. The pilot wave theory is deterministic, like old-fashioned Newtonian physics. And if you employ it, you can explain everything that happens: exactly how and why and when. In physics, we think of a particle or a photon, and we ascribe properties to it, like energy or position or momentum or wavelength or polarization. We have various properties we ascribe to particles. Then suppose we have two particles, and they're in some kind of interaction together. Maybe they are each charged and are repelling each other or attracting each other.
They each have a long list of properties. But in quantum mechanics, something new happens. In quantum mechanics, the first thing that happens is that you take away from the description. An important principle of quantum mechanics is that if you have the list of all the properties some particle has, you can only know half of them at any one time. That's part of the uncertainty principle. But you can choose which half it will be. Now, if you have two particles making up a system, that's true of a description of each of them and of the two put together. And it turns out that you can choose properties to describe the two put together that are not properties of either of them separately.
For example, there is a property called opposite. If you measure the same thing on each particle, it's totally random; it's totally unpredictable what you'll get for each of those measurements. But you can guarantee, in the state opposite, that they'll be opposite. So if you measure the momentum of particle one and it's going one way, particle two will be going the other way. But if you just look at one of them, it is completely random which way it will be going. So in this state called opposite, there's nothing which is true for certain about particle one or particle two separately, but there are things which are true about their combination. And we call this entanglement. It's a property of quantum systems that Einstein first chanced upon in the 1930s as part of his mission to find a flaw, a logical inconsistency, in the theory.
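The "opposite" state can be sketched in a few lines of code for the simplest case, where both experimenters measure the same fixed property. This is only a toy model (the rule `b = -a` is an assumption standing in for the full quantum state), but it shows the key pattern: each individual outcome is a fair coin flip, yet the pair is perfectly anti-correlated.

```python
import random

def measure_opposite_pairs(n_trials=10000, seed=1):
    """Measure the same property on both particles of many pairs
    prepared in the 'opposite' (perfectly anti-correlated) state."""
    rng = random.Random(seed)
    a_sum = 0
    product_sum = 0
    for _ in range(n_trials):
        a = rng.choice([+1, -1])  # particle one: a fair coin flip
        b = -a                    # particle two: guaranteed opposite
        a_sum += a
        product_sum += a * b
    mean_a = a_sum / n_trials             # near 0: one particle alone shows no pattern
    correlation = product_sum / n_trials  # exactly -1: the pair is always opposite
    return mean_a, correlation

mean_a, correlation = measure_opposite_pairs()
print(mean_a, correlation)
```

A simple local rule like this reproduces the quantum predictions only when the measurement settings are fixed in advance; Bell's theorem, discussed below, shows that no such rule can cover freely chosen settings.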
And it turned out that what he was on the trail of was not a logical inconsistency, but this new feature of nature, which he chanced upon and wrote about in the 1930s. Locality means that to affect a system, you have to go over and tickle it or touch it. I can't affect something in Alpha Centauri just by something that I do here. I have to send a light signal or a piece of information, or a bomb or something, to Alpha Centauri. And then, when it gets there, it has the effect. That principle is called locality. And John Bell made a precise formulation of locality as follows: if you have two particles and you measure particle A and particle B, then the probabilities for the outcome of measuring particle B, whatever you measure on particle B, can't depend on what you choose to do with particle A.
That's Bell's version of locality. Bell derived a consequence from that, a certain mathematical inequality that we don't need to get into. And then he proved one thing about that consequence: it is contradicted by quantum mechanics. So if quantum mechanics is true, that assumption of locality, as Bell formulated it, is false. Then some experimentalists got in the game. Bell is the 1960s; his theorem is from 1964. And by the early 1970s, there were some experimentalists who realized they could test this consequence of Bell's theorem in a real experiment. You could make a pair of particles or a pair of photons, let them separate, and study their polarizations or their energy, or the direction of their flight. And you could actually test the principle of locality very directly.
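The inequality Smolin skips over is usually tested in its CHSH form: for a certain combination S of correlations, any local theory must satisfy |S| ≤ 2. Plugging in the textbook quantum prediction E(a, b) = −cos(a − b) for an entangled pair (a formula not given in the transcript) shows the quantum value exceeds that bound:

```python
import math

def E(a, b):
    """Textbook quantum prediction for the correlation between outcomes
    when the two particles are measured along directions a and b (radians)."""
    return -math.cos(a - b)

# Standard CHSH measurement settings: two choices per side
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Any local theory obeys |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, violating the local bound of 2
```

This is exactly the gap the experiments described next went looking for: measured correlations larger than any local theory allows.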
And so these experiments were carried out first at Berkeley, I think, and they were muddy, and the results weren't reliable. And then at Harvard. Then finally, a group of people in Paris, led by [INAUDIBLE], got definitive results; I think we're talking about roughly 1980 now. And the definitive results are that locality is false. This has nothing to do with whether you believe in quantum mechanics or not, because Bell's theorem has nothing in it but a few obvious assumptions, like that probabilities are numbers between 0 and 1, plus this one assumption of locality. It's a beautiful theorem in its sparseness.
And the experiments, together with that theorem, tell us that locality is false. That is, there are states of pairs of particles where the properties of particle B over here turn out to depend on what you choose to measure or manipulate with particle A. And that's just true about nature. If that's not the most crazy, shocking thing you've ever heard in science, I've got to repeat everything I just said. In quantum mechanics, there is no explanation of it. It just comes out that way. If you're not a realist, the theory is set up so that one of the questions you can't answer is how the information is transferred back and forth. But if you want a theory like the pilot wave theory, one that gives a complete description, you have to face that question.
Quantum mechanics is structurally built so that you can't get at that question. But the pilot wave theory, and other alternatives to quantum mechanics that are realist, have to address that question explicitly. They have to show you how the two systems talk to each other. And when they do, yes, they violate the principle that nothing can travel faster than light. Is that testable? The answer comes in two parts. At first, no, because these measurements are all random. What you measure on this side or what you measure on that side, taken by itself, just looks like a random distribution, so you can't get any information out of it.
About anything, let alone what's happening over there. So it looks bad for sending information faster than light. But a few physicists, for example Antony Valentini, have speculated that you could throw the pilot wave theory into a different kind of phase, the way complex-systems people talk about different phases: ordered and chaotic, equilibrium and non-equilibrium. He speculates that you can throw the pilot wave system into a kind of disordered, far-from-equilibrium phase where, all of a sudden, you can send real signals between the two particles.
And the theory says such a state is always possible in principle: although practically we may not be able to throw the system into a non-equilibrium state, it is a possible state of the system. And that tells you that if you're a realist, then locality is dead, and, basically, that space is dead. Space is now a discredited concept. Something deeper is going on. Something welds the world together, and it has to do with the histories of who interacted with whom; it doesn't have to do with how far away things are from each other. That logic, of histories telling you who talks to whom rather than locality telling you who talks to whom, is the real lesson here, many of us think.
- Realists believe that there is an exactly understandable way the world is — one that describes processes independent of our intervention. Anti-realists, however, believe realism is too ambitious — too hard. They believe we pragmatically describe our interactions with nature — not truths that are independent of us.
- In nature, properties of Particle B may depend on what we choose to measure or manipulate with Particle A, even at great distances.
- In quantum mechanics, there is no explanation for this. "It just comes out that way," says Smolin. Realists struggle with this because it would imply certain things can travel faster than light, which still seems improbable.
China has reached a new record for nuclear fusion at 120 million degrees Celsius.
This article was originally published on our sister site, Freethink.
China wants to build a mini-star on Earth and house it in a reactor. Many teams across the globe share this same bold goal --- creating unlimited clean energy via nuclear fusion.
But according to Chinese state media, New Atlas reports, the team at the Experimental Advanced Superconducting Tokamak (EAST) has set a new world record: temperatures of 120 million degrees Celsius for 101 seconds.
Yeah, that's hot. So what? Nuclear fusion reactions require an insane amount of heat and pressure --- a reactor needs temperatures of approximately 150 million degrees C, roughly ten times hotter than the sun's core, to make up for the pressure it can't match.
If scientists can essentially build a sun on Earth, they can create endless energy by mimicking how the sun does it. In nuclear fusion, the extreme heat and pressure create a plasma. Then, within that plasma, two or more hydrogen nuclei crash together, merge into a heavier atom, and release a ton of energy in the process.
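The "ton of energy" comes from a mass deficit: the fused nucleus weighs slightly less than its ingredients, and the missing mass becomes energy via E = mc². A back-of-the-envelope sketch using the deuterium-tritium reaction (the specific fuel most reactor designs ultimately target; the article doesn't name one):

```python
# Mass-defect calculation for the deuterium-tritium fusion reaction:
#   D + T -> He-4 + neutron
# Atomic masses in unified atomic mass units (u).
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

U_TO_MEV = 931.494  # energy equivalent of 1 u, from E = mc^2

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(round(energy_mev, 1))  # ≈ 17.6 MeV released per reaction
```

Tiny per reaction, but a kilogram of fuel contains on the order of 10^26 nuclei, which is why fusion's energy density dwarfs chemical fuels.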
Nuclear fusion milestones: The team at EAST built a giant metal torus (similar in shape to a giant donut) with a series of magnetic coils. The coils hold hot plasma where the reactions occur. They've reached many milestones along the way.
According to New Atlas, in 2016, the scientists at EAST could heat hydrogen plasma to roughly 50 million degrees C for 102 seconds. Two years later, they reached 100 million degrees for 10 seconds.
The temperatures are impressive, but the short reaction times and lack of pressure are another obstacle. Fusion is simple for the sun, because stars are massive and gravity provides immense, even pressure at the core. That pressure squeezes the hydrogen in the sun's core so tightly that nuclei combine to form heavier atoms, releasing energy.
But on Earth, we have to supply all of the pressure to keep the reaction going, and it has to be perfectly even. It's hard to do this for any length of time, and it uses a ton of energy. So the reactions usually fizzle out in minutes or seconds.
Still, the latest record of 120 million degrees and 101 seconds is one more step toward sustaining longer and hotter reactions.
Why does this matter? No one denies that humankind needs a clean, unlimited source of energy.
We all recognize that oil and gas are limited resources. But even wind and solar power --- renewable energies --- are fundamentally limited. They are dependent upon a breezy day or a cloudless sky, which we can't always count on.
Nuclear fusion is clean, safe, and environmentally sustainable --- its fuel is a nearly limitless resource since it is simply hydrogen (which can be easily made from water).
With each new milestone, we are creeping closer and closer to a breakthrough for unlimited, clean energy.
The symbol for love is the heart, but the brain may be more accurate.
- How love makes us feel can only be defined on an individual basis, but what it does to the body, specifically the brain, is now less abstract thanks to science.
- One of the problems with early-stage attraction, according to anthropologist Helen Fisher, is that it activates parts of the brain that are linked to drive, craving, obsession, and motivation, while other regions that deal with decision-making shut down.
- Dr. Fisher, professor Ted Fischer, and psychiatrist Gail Saltz explain the different types of love, explore the neuroscience of love and attraction, and share tips for sustaining relationships that are healthy and mutually beneficial.
We explore the history of blood types and how they are classified to find out what makes the Rh-null type important to science and dangerous for those who live with it.
- Fewer than 50 people worldwide have 'golden blood' — or Rh-null.
- Blood is considered Rh-null if it lacks all of the 61 possible antigens in the Rh system.
- It's also very dangerous to live with this blood type: because so few people have it, compatible donors are extremely hard to find.
Golden blood sounds like the latest in medical quackery. As in, get a golden blood transfusion to balance your tantric midichlorians and receive a free charcoal ice cream cleanse. Don't let the New-Agey moniker throw you. Golden blood is actually the nickname for Rh-null, the world's rarest blood type.
As Mosaic reports, the type is so rare that only about 43 people have been reported to have it worldwide, and until 1961, when it was first identified in an Aboriginal Australian woman, doctors assumed embryos with Rh-null blood would simply die in utero.
But what makes Rh-null so rare, and why is it so dangerous to live with? To answer that, we'll first have to explore why hematologists classify blood types the way they do.
A (brief) bloody history
Our ancestors understood little about blood. Even the most basic of blood knowledge — blood inside the body is good, blood outside is not ideal, too much blood outside is cause for concern — escaped humanity's grasp for an embarrassing number of centuries.
Absent this knowledge, our ancestors devised less-than-scientific theories as to what blood was, theories that varied wildly across time and culture. To pick just one: the physicians of Shakespeare's day believed blood to be one of four bodily fluids, or "humors" (the others being black bile, yellow bile, and phlegm).
Handed down from ancient Greek physicians, humorism stated that these bodily fluids determined someone's personality. Blood was considered hot and moist, resulting in a sanguine temperament. The more blood people had in their systems, the more passionate, charismatic, and impulsive they would be. Teenagers were considered to have a natural abundance of blood, and men had more than women.
Humorism led to all sorts of poor medical advice. Most famously, Galen of Pergamum used it as the basis for his prescription of bloodletting. Sporting a "when in doubt, let it out" mentality, Galen declared blood the dominant humor, and bloodletting an excellent way to balance the body. Blood's relation to heat also made it a go-to for fever reduction.
While bloodletting remained common until well into the 19th century, William Harvey's discovery of the circulation of blood in 1628 would put medicine on its path to modern hematology.
Soon after Harvey's discovery, the earliest blood transfusions were attempted, but it wasn't until 1665 that the first successful transfusion was performed, by British physician Richard Lower. Lower's operation was between dogs, and his success prompted physicians like Jean-Baptiste Denis to try to transfuse blood from animals to humans, a process called xenotransfusion. The death of human patients ultimately led to the practice being outlawed.
The first successful human-to-human transfusion wouldn't be performed until 1818, when British obstetrician James Blundell managed it to treat postpartum hemorrhage. But even with a proven technique in place, in the following decades many blood-transfusion patients continued to die mysteriously.
Enter Austrian physician Karl Landsteiner. In 1901 he began his work to classify blood groups. Exploring the work of Leonard Landois — the physiologist who showed that when the red blood cells of one animal are introduced to a different animal's, they clump together — Landsteiner thought a similar reaction may occur in intra-human transfusions, which would explain why transfusion success was so spotty. In 1909, he classified the A, B, AB, and O blood groups, and for his work he received the 1930 Nobel Prize for Physiology or Medicine.
What causes blood types?
It took us a while to grasp the intricacies of blood, but today, we know that this life-sustaining substance consists of:
- Red blood cells — cells that carry oxygen and remove carbon dioxide throughout the body;
- White blood cells — immune cells that protect the body against infection and foreign agents;
- Platelets — cells that help blood clot; and
- Plasma — a liquid that carries salts and enzymes.
Each component has a part to play in blood's function, but the red blood cells are responsible for our differing blood types. These cells have proteins* covering their surface called antigens, and the presence or absence of particular antigens determines blood type — type A blood has only A antigens, type B only B, type AB both, and type O neither. Red blood cells sport another antigen called the RhD protein. When it is present, a blood type is said to be positive; when it is absent, it is said to be negative. The typical combinations of A, B, and RhD antigens give us the eight common blood types (A+, A-, B+, B-, AB+, AB-, O+, and O-).
Blood antigen proteins play a variety of cellular roles, but recognizing foreign cells in the blood is the most important for this discussion.
Think of antigens as backstage passes to the bloodstream, while our immune system is the doorman. If the immune system recognizes an antigen, it lets the cell pass. If it does not recognize an antigen, it initiates the body's defense systems and destroys the invader. So, a very aggressive doorman.
While our immune systems are thorough, they are not too bright. If a person with type A blood receives a transfusion of type B blood, the immune system won't recognize the new substance as a life-saving necessity. Instead, it will consider the red blood cells invaders and attack. This is why so many people either grew ill or died during transfusions before Landsteiner's brilliant discovery.
This is also why people with O negative blood are considered "universal donors." Since their red blood cells lack A, B, and RhD antigens, immune systems don't have a way to recognize these cells as foreign, and so leave them alone.
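The doorman analogy boils down to one rule: a donation is safe when the donor's red cells carry no antigen the recipient's own cells lack. A minimal sketch covering only the A, B, and RhD antigens (real cross-matching checks many more antigen systems):

```python
# Antigens present on red blood cells for each of the eight common types.
ANTIGENS = {
    "O-": set(),          "O+": {"RhD"},
    "A-": {"A"},          "A+": {"A", "RhD"},
    "B-": {"B"},          "B+": {"B", "RhD"},
    "AB-": {"A", "B"},    "AB+": {"A", "B", "RhD"},
}

def can_donate(donor: str, recipient: str) -> bool:
    """A transfusion is accepted when the donor's red cells carry no
    antigen foreign to the recipient, i.e. the donor's antigen set
    is a subset of the recipient's."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(can_donate("O-", "AB+"))  # True: O- is the universal donor
print(can_donate("B+", "A+"))   # False: the B antigen triggers an attack
```

By this rule, O− (no antigens at all) can donate to every type, while AB+ (all three antigens) can receive from every type, matching the "universal donor" and "universal recipient" labels.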
How is Rh-null the rarest blood type?
Let's return to golden blood. In truth, the eight common blood types are an oversimplification of how blood types actually work. As Smithsonian.com points out, "[e]ach of these eight types can be subdivided into many distinct varieties," resulting in millions of different blood types, each classified on a multitude of antigen combinations.
Here is where things get tricky. The RhD protein mentioned previously is only one of 61 potential antigens in the Rh system. Blood is considered Rh-null if it lacks all 61 of the possible antigens in the Rh system. This not only makes it rare; it also means Rh-null blood can be accepted by anyone with a rare blood type within the Rh system.
This is why it is considered "golden blood." It is worth its weight in gold.
As Mosaic reports, golden blood is incredibly important to medicine, but also very dangerous to live with. If an Rh-null carrier needs a blood transfusion, they can find it difficult to locate a donor, and blood is notoriously difficult to transport internationally. Rh-null carriers are encouraged to donate blood as insurance for themselves, but with so few donors spread out over the world, and limits on how often they can donate, this also puts an altruistic burden on those select few who agree to donate for others.
Some bloody good questions about blood types
A nurse takes blood samples from a pregnant woman at the North Hospital (Hopital Nord) in Marseille, southern France.
Photo by BERTRAND LANGLOIS / AFP
There remain many mysteries regarding blood types. For example, we still don't know why humans evolved the A and B antigens. Some theories point to these antigens as a byproduct of the diseases various populations contracted throughout history. But we can't say for sure.
In this absence of knowledge, various myths and questions have grown around the concept of blood types in the popular consciousness. Here are some of the most common and their answers.
Do blood types affect personality?
Japan's blood type personality theory is a contemporary resurrection of humorism. The idea states that your blood type directly affects your personality, so type A blood carriers are kind and fastidious, while type B carriers are optimistic and do their own thing. However, a 2003 study sampling 180 men and 180 women found no relationship between blood type and personality.
The theory makes for a fun question on a Cosmopolitan quiz, but that's as accurate as it gets.
Should you alter your diet based on your blood type?
Remember Galen of Pergamum? In addition to bloodletting, he also prescribed certain foods to his patients, depending on which humors needed to be balanced. Wine, for example, was considered a hot and dry drink, so it would be prescribed to treat a cold. In other words, the belief that your diet should complement your blood type is yet another holdover of humorism.
Created by Peter J. D'Adamo, the Blood Type Diet argues that one's diet should match one's blood type. Type A carriers should eat a meat-free diet of whole grains, legumes, fruits, and vegetables; type B carriers should eat green vegetables, certain meats, and low-fat dairy; and so on.
However, a study from the University of Toronto analyzed the data from 1,455 participants and found no evidence to support the theory. While people can lose weight and become healthier on the diet, it probably has more to do with eating all those leafy greens than blood type.
Are there links between blood types and certain diseases?
There is evidence to suggest that different blood types may increase the risk of certain diseases. One analysis suggested that type O blood decreases the risk of having a stroke or heart attack, while AB blood appears to increase it. With that said, type O carriers have a greater chance of developing peptic ulcers and skin cancer.
None of this is to say that your blood type will doom your medical future. Many factors, such as diet and exercise, influence your health, likely to a greater extent than blood type.
What is the most common blood type?
In the United States, the most common blood type is O+. Roughly one in three people sports this type of blood. Of the eight well-known blood types, the least common is AB-. Only one in 167 people in the U.S. have it.
Do animals have blood types?
They most certainly do, but they are not the same as ours. This difference is why those 17th-century patients who thought, "Animal blood, now that's the ticket!" ultimately had their tickets punched. In fact, blood types are distinct between species. Unhelpfully, scientists sometimes use the same nomenclature to describe these different types. Cats, for example, have A and B antigens, but these are not the same A and B antigens found in humans.
Interestingly, xenotransfusion is making a comeback. Scientists are working to genetically engineer the blood of pigs to potentially produce human-compatible blood.
Scientists are also looking into creating synthetic blood. If they succeed, they may be able to ease the current blood shortage while also devising a way to create blood for rare blood type carriers. While this may make golden blood less golden, it would certainly make it easier to live with.

*While antigens are typically proteins, they can be other molecules as well, such as polysaccharides.
A new study suggests that reports of the impending infertility of the human male are greatly exaggerated.
- A new review of a famous study on declining sperm counts finds several flaws.
- The old report makes unfounded assumptions, has faulty data, and tends toward panic.
- The new report does not rule out that sperm counts are going down; rather, it suggests that such a decline could be quite normal.
Several years ago, a meta-analysis of studies on human fertility came out warning us about the declining sperm counts of Western men. It was widely shared, and its findings were featured on the covers of popular magazines. Indeed, its findings were alarming: a nearly 60 percent decline in sperm per milliliter since 1973 with no end in sight. It was only a matter of time, the authors argued, until men were firing blanks, literally.
Well… never mind.
It turns out that the impending demise of humanity was greatly exaggerated. As the predicted infertility wave crashed upon us, there was neither a great rush of men to fertility clinics nor a sudden dearth of new babies. The only discussions about population decline focus on urbanization and the fact that people choose not to have kids rather than not being able to have them.
Now, a new analysis of the 2017 study argues that lower sperm counts are nothing to be surprised by. Published in Human Fertility, it points to flaws in the original paper's data and interpretation, and its authors propose a more careful reanalysis.
Counting tiny things is difficult
The original 2017 report analyzed 185 studies on 43,000 men and their reproductive health. Its findings were clear: "a significant decline in sperm counts… between 1973 and 2011, driven by a 50-60 percent decline among men unselected by fertility from North America, Europe, Australia and New Zealand."
However, the new analysis points out flaws in the data. As many as a third of the men in the studies were of unknown age, an important factor in reproductive health. In 45 percent of cases, the year of the sample collection was unknown, a big detail to miss in a study measuring change over time. The quality controls and conditions for sample collection and analysis varied widely from study to study, which likely influenced the measured sperm counts in the samples.
Another study from 2013 also points out that the methods for determining sperm count were only standardized in the 1980s, which occurred after some of the data points were collected for the original study. It is entirely possible that the early studies gave inaccurately high sperm counts.
This is not to say that the 2017 paper is entirely useless; its methodology was much more rigorous than that of previous studies on the subject, which also claimed to identify a decline in sperm counts. Even so, the original study had further problems.
Garbage in, garbage out
Predictable as always, the media went crazy. Discussions of the decline of masculinity took off, both in mainstream and less-than-reputable forums; concerns about the imagined feminizing traits of soy products continued to increase; and the authors of the original study were called upon to discuss the findings themselves in a number of articles.
However, as this new review points out, some of the findings of that meta-analysis are debatable at best. For example, the 2017 report suggests that "declining mean [sperm count] implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility," despite little empirical evidence that this is the case.
The WHO offers a large range for what it considers to be a healthy sperm count, from 15 to 250 million sperm per milliliter. The benefits to fertility above a count of 40 million are seen as minimal, and the original study found a mean sperm concentration of 47 million sperm per milliliter.
Healthy sperm, healthy man?
The claim that sperm count is evidence of larger health problems is also scrutinized in this new article. While it is true that many major health problems can impact reproductive health, there is little evidence that it is the "canary in the coal mine" for overall well-being. A number of studies suggest that any relation between lifestyle choices and this part of reproductive health is limited at best.
Lastly, the idea that environmental factors could be at play has also been debunked since 2017. While the original paper considered the possibility that pollutants, especially from plastics, could be at fault, it is now known that this kind of pollution is worse in the parts of the world where the original paper observed higher sperm counts (i.e., non-Western nations).
There never was a male fertility crisis
The authors of the new review do not deny that some measurements are showing lower sperm counts, but they do question the claim that this is catastrophic or part of a larger pathological issue. They propose a new interpretation of the data. Dubbed the "Sperm Count Biovariability hypothesis," it is summarized as:
"Sperm count varies within a wide range, much of which can be considered non-pathological and species-typical. Above a critical threshold, more is not necessarily an indicator of better health or higher probability of fertility relative to less. Sperm count varies across bodies, ecologies, and time periods. Knowledge about the relationship between individual and population sperm count and life-historical and ecological factors is critical to interpreting trends in average sperm counts and their relationships to human health and fertility."
Still, the authors note that sperm counts "could decline due to negative environmental exposures, or that this may carry implications for men's health and fertility."
However, they disagree that the decline in absolute sperm count is necessarily a bad sign for men's health and fertility. We aren't at a civilization-ending catastrophe just yet.