Why nature vs. nurture is a 'zombie idea' we need to kill
Why do some people still believe that behavior is caused solely by genes or environment? A new paper offers some answers.
- Despite the fact that scientists have long known that behavior is caused by complex interactions between genes and environment, the debate still persists in the culture today.
- A new paper outlines three reasons why this debate persists, and why behavior isn't special — it abides by the same evolutionary processes as other traits.
- The authors say rejecting the false nature-nurture dichotomy can help kill this "zombie idea."
Which determines traits like sexual orientation, intelligence and behavior: genes or environment?
Many modern debates center on this question, from the #MeToo movement to transgender rights, to academic performance, to crime. But is the nature-nurture conversation even worth having? After all, it was more than three decades ago that the American biochemist Daniel Koshland wrote in an editorial published in Science, "The debate on nature and nurture in regard to behavior is basically over. Both are involved."
Now, a paper recently published in BioScience argues it's finally time to kill the "zombie" that is the nature-nurture debate. The authors, Marlene Zuk of the University of Minnesota and Hamish G. Spencer of the University of Otago's Department of Zoology, note that behaviors aren't determined solely by genes or environment.
Zuk and Spencer divide their argument into three parts.
Behavior is not special in its evolution
Behavior, the authors write, evolves in the same manner as other traits. People often mistakenly think that behavior — particularly human behavior — exists apart from the principles of evolution, in a separate realm from other characteristics, such as height.
The authors offer the Venus flytrap as an example.
"The motor cells that close the trap need exactly two signals within 20 seconds to activate. Then, at least three—not one, not four—flicks of a trigger hair are needed to signal the production of digestive enzymes. Only then can successful consumption of the prey commence."
Does this precise predatory process count as behavior? It's a tricky question, sure. But the authors raise it because:
"If we can't draw a hard and fast line separating behavior from other traits, then the same rules apply to both, and behavior evolves the same way that leg length or other physical characteristics do. That is an important conclusion, because it means that we can't invoke culture as a get-out-of-evolution-free card."
Behavior is not explained solely by genes or environment
That might be obvious enough. But the authors also argue that behaviors aren't even the result of an additive combination of the two. In other words, you can't look at a world-class sprinter and say that their skill comes 68 percent from genetics and 32 percent from environment.
Rather, behaviors stem from the complex and fluid interaction between the two.
"The effect of an organism's genes depends on the organism's environment and does so just as much as the effect of an organism's environment depends on its genes," the authors write. "Genes and environment interact. The philosopher of science Evelyn Fox Keller calls this the entanglement of genotype and environment, which also conveys the inextricable nature of the relationship between the two."
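This entanglement can be made concrete with a toy numerical sketch (the genotypes, environments, and trait values below are invented for illustration): when the ranking of genotypes flips between environments, there is no fixed percentage split between "nature" and "nurture" to report.

```python
# Toy illustration (invented numbers): a trait value depends on the
# combination of genotype and environment, not on a fixed sum.
phenotype = {
    # (genotype, environment) -> trait value
    ("A", "env1"): 10,
    ("A", "env2"): 4,
    ("B", "env1"): 6,
    ("B", "env2"): 8,
}

# In env1, genotype A scores higher; in env2, the ranking flips.
best_in_env1 = max("AB", key=lambda g: phenotype[(g, "env1")])
best_in_env2 = max("AB", key=lambda g: phenotype[(g, "env2")])
print(best_in_env1)  # A
print(best_in_env2)  # B
```

Because which genotype "wins" depends entirely on the environment it finds itself in, asking how much of the trait is genetic, in the abstract, has no single answer.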
Genes do not code for behavior
Zuk and Spencer suggest that the way people talk about genes tends to confuse the public about the role genetics play in influencing behavior. For example, you might read a study saying that scientists have "found the gene for" intelligence, criminality, or whatever trait.
"What scientists mean when they talk about a gene for a trait is that variation at that gene (e.g., differences in the DNA sequence of that gene) leads, in a certain range of environments, to variation in that trait, and the concept involved is one called heritability," the authors write.
But a gene for a trait does not act as an on-off switch that produces a behavior.
"The crucial point is that, regardless of the heritability of a trait, a change in the range of environments (or, for that matter, the genetic variation affecting the trait) can change the heritability. Everything is context dependent."
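The authors' point about context dependence can be sketched numerically. In the simplest additive model, heritability is the share of total trait variance attributable to genetic variance, h² = Vg / (Vg + Ve); hold the genetic variance fixed and widen the range of environments, and the heritability of the very same trait drops. The variances below are invented for illustration.

```python
def heritability(var_genetic, var_environment):
    """Heritability in a simple additive model: the fraction of
    total trait variance that is due to genetic variance."""
    return var_genetic / (var_genetic + var_environment)

vg = 2.0  # genetic variance, held fixed in both scenarios

# Narrow range of environments: the trait looks highly heritable.
h2_narrow = heritability(vg, var_environment=0.5)  # 0.8

# Wider range of environments: same genes, much lower heritability.
h2_wide = heritability(vg, var_environment=6.0)    # 0.25
```

Nothing about the genes changed between the two calculations; only the environmental context did, which is why a heritability figure is never a fixed property of a trait.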
Killing the zombie
So, why do we need to kill the nature-nurture zombie? Zuk and Spencer suggest that these misguided beliefs can cause us to think certain behaviors are inevitable. For example, if people with anorexia read articles saying the condition is caused solely by genetics, they might feel like there's nothing they can do to improve their health. In this way, people may feel like they have an "out" to continue these behaviors, when, in reality, environmental interventions could benefit them.
Similarly, the belief that genes determine traits like intelligence or social mobility may influence public officials not to spend as much money on certain public programs. In this way, the nature-nurture dichotomy causes people to do nothing at all.
The authors say it's time to break our conceptual link between genetics and fate.
"A rejection of that equivalence, along with a view of the nature of the entanglement of genes and the environment, would be real progress, and just might kill the zombie."
A Harvard professor's study identifies the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been among the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 marked the start of one of the worst periods in human history. Early in the year, a volcanic eruption took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." What's even creepier, he described, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
What's the difference between brainwashing and rehabilitation?
- The book and movie, A Clockwork Orange, powerfully asks us to consider the murky lines between rehabilitation, brainwashing, and dehumanization.
- There are a variety of ways, from hormonal treatment to surgical lobotomies, to force a person to be more law-abiding, calm, or moral.
- Is a world with less free will but also with less suffering one in which we would want to live?
Alex is a criminal. A violent and sadistic criminal. So, we decide to do something about it. We're going to "rehabilitate" him.
Using a new and exciting "Ludovico" technique, we'll change his brain chemistry to make him an upstanding, moral citizen. Alex will be forced to watch violent movies as his body is pumped with nausea-inducing drugs. After a while, he'll come to associate violence with this horrible sickness. And, after a course of Ludovico, Alex can happily return to society, never again doing an immoral or illegal act. He'll no longer be a danger to himself or anyone else.
This is the story of A Clockwork Orange by Anthony Burgess, and it raises important questions about the nature of moral decisions, free will, and the limits of rehabilitation.
Today's Clockwork Orange
This might seem like unbelievable science fiction, but it might be truer — and nearer — than we think. In 2010, Dr. Molly Crockett did a series of experiments on moral decision-making and serotonin levels. Her results showed that people with more serotonin were less aggressive or confrontational and much more easy-going and forgiving. When we're full of serotonin, we let insults pass, are more empathetic, and are less willing to do harm.
The idea that biology affects moral decisions is obvious. Most of us are more likely to be short-tempered and spiteful if we're tired or hungry, for instance. Conversely, we have the patience of a saint if we've just received some good news, had half a bottle of wine, or had sex.
If our decision-making can be manipulated or determined by our biology, should we not try various interventions to prevent the criminally inclined from harming others?
What is the point of prison? This is itself no easy question, and it's one with a rich philosophical debate. Surely one of the biggest reasons is to protect society by preventing criminals from reoffending. This might be achievable by manipulating a felon's serotonin levels, but why not go even further?
Today, we know enough about the brain to have identified a very particular part of the prefrontal cortex responsible for aggressive behavior. We know that certain abnormalities in the amygdala can result in anti-social behavior and rule breaking. If the purpose of the penal system is to rehabilitate, then why not "edit" these parts of the brain in some way? This could be done in a variety of ways.
Electroconvulsive therapy (ECT) is a surprisingly common practice in much of the developed world. Its supporters say that it can help relieve major mental health issues such as depression or bipolar disorder as well as alleviate certain types of seizures. Historically, and controversially, it has been used to "treat" homosexuality and to threaten those misbehaving in hospitals in the 1950s (as notoriously depicted in One Flew Over the Cuckoo's Nest). Of course, these early and crude efforts at ECT were damaging, immoral, and often left patients barely able to function as humans. Today, neuroscience and ECT are much more sophisticated. If we could easily "treat" those with aggressive or anti-social behavior, then why not?
Ideally, we might use techniques such as ECT or hormonal supplementation, but failing that, why not go even further? Why not perform a lobotomy? If the purpose of the penal system is to change the felon for the better, we should surely use all the tools at our disposal. With one fairly straightforward surgery to the prefrontal cortex, we could turn a violent, murderous criminal into a docile and law-abiding citizen. Should we do it?
Is free will worth it?
As Burgess, who penned A Clockwork Orange, wrote, "Is a man who chooses to be bad perhaps in some way better than a man who has the good imposed upon him?"
Intuitively, many say yes. Moral decisions must, in some way, be our own. Even if we know that our brains determine our actions, each of us feels that it's still "me" who controls my brain, no one else. Forcing someone to be good, by molding or changing their brain, is not creating a moral citizen. It's creating a law-abiding automaton. And robots are not humans.
And yet, it raises the question: is "free choice" worth all the evil in the world?
If my being brainwashed or "rehabilitated" means children won't die malnourished or the Holocaust would never happen, then so be it. If lobotomizing or neuro-editing a serial killer will prevent them from killing again, is that not a sacrifice worth making? There's no obvious reason why we should value free will above morality or the right to life. A world without murder and evil — even if it meant a world without free choices for some — might not be such a bad place.
As Fyodor Dostoyevsky wrote in The Brothers Karamazov, if the "entrance fee" for having free will is the horrendous suffering we see all around us, then "I hasten to return my ticket." Free will's not worth it.
Do you think the Ludovico technique from A Clockwork Orange is a great idea? Should we turn people into moral citizens and shape their brains to choose only what is good? Or is free choice more important than all the evil in the world?
A simple trick allowed marine biologists to prove a long-held suspicion.
- It's long been suspected that sharks navigate the oceans using Earth's magnetic field.
- Sharks are, however, difficult to experiment with.
- Using magnetism, marine biologists figured out a clever way to fool sharks into thinking they're somewhere that they're not.
For some time, scientists have suspected that sharks belong among the growing number of animals known to navigate using Earth's magnetic field. Testing anything with a shark, though, requires some care.
The key was selecting the right candidate. Marine biologist Bryan Keller and his colleagues chose the bonnethead shark, Sphyrna tiburo, a small critter that summers at Turkey Point Shoal off the coast of Florida, near the Florida State University Coastal and Marine Laboratory with which Keller is affiliated.
Bonnetheads elsewhere have been known to complete 620-mile roundtrip migrations. As the lab's Dean Grubbs puts it, "That's not bad for a shark that is only two to three feet long. The question is how do they find their way back to that same estuary year after year." There's a report of a great white shark migrating between two locations, one in South Africa and another in Australia, year after year.
The research is published in Current Biology.
Keller and his team rounded up 20 local juvenile bonnetheads and transported them to a holding tank at the marine lab. For the tests, the researchers simulated three real-world magnetic fields. As the various magnetic fields were activated, the sharks' movements were captured by GoPro cameras and their average swimming orientations calculated by software.
The first simulation, serving as a control, mimicked the magnetic field of the nearby shoal from which the sharks had been captured. When this field was activated, the sharks essentially acted like they were "home," just swimming around as they do.
A second field was the magnetic equivalent of a location 600 kilometers south of the lab within the Gulf of Mexico. When this field was activated, the sharks, apparently sensing that they were far south in the Gulf, began swimming northward toward the shoal.
The opposite occurred with a field standing in for a location in continental North America 600 kilometers north of their home shoal: the sharks began swimming southward.
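The study doesn't publish its analysis code, but an "average swimming orientation" is typically computed as a circular mean, since compass headings wrap around at 360° and a plain arithmetic average gives nonsense near north. A minimal sketch of that calculation, with invented headings:

```python
import math

def circular_mean_deg(headings_deg):
    """Mean direction of compass headings, in degrees [0, 360).
    Headings wrap around, so we average unit vectors: 350 and 10
    degrees should average to roughly north (near 0/360), not to
    the meaningless arithmetic mean of 180."""
    x = sum(math.cos(math.radians(h)) for h in headings_deg)
    y = sum(math.sin(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Two headings straddling north: the circular mean lands near 0/360.
mean_heading = circular_mean_deg([350, 10])
```

Averaging the direction vectors rather than the raw angles is the standard move in circular statistics, and it's what lets software turn noisy GoPro tracks into a single orientation per trial.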
"For 50 years," says Keller, "scientists have hypothesized that sharks use the magnetic field as a navigational aid. This theory has been so popular because sharks, skates, and rays have been shown to be very sensitive to magnetic fields. They have also been trained to react to unique geomagnetic signatures, so we know they are capable of detecting and reacting to variation in the magnetic field."
His team's experiments confirm what's long been suspected, Keller says: "Sharks use map-like information from the geomagnetic field as a navigational aid. This ability is useful for navigation and possibly maintaining population structure."