Did we evolve to see reality as it exists? No, says cognitive psychologist Donald Hoffman, who hypothesizes that we evolved to experience a collective delusion, not objective reality.
- Donald Hoffman theorizes that perceiving objective reality is disadvantageous to evolutionary fitness.
- His hypothesis calls for ditching the objectivity of matter and space-time and replacing them with a mathematical theory of consciousness.
- If correct, it could help us make progress on such intractable questions as the mind-body problem and the conflict between general relativity and quantum mechanics.
What is reality and how do we know? For many the answer is simple: What you see — hear, feel, touch, and taste — is what you get.
Your skin feels warm on a summer day because the sun exists. That apple you just tasted, sweet and leaving juice on your fingers, must have existed. Our senses tell us that reality is there, and we use reason to fill in the blanks — that is, we know the sun doesn't cease to exist at night even if we can't see it.
But cognitive psychologist Donald Hoffman says we're misunderstanding our relationship with objective reality. In fact, he argues that evolution has cloaked us in a perceptual virtual reality. For our own good.
Experiencing a virtual interface
Donald Hoffman says that what we perceive as reality is an interface of symbols hiding vastly more complex interactions. He likens this to how desktop icons represent software. Image source: Pixabay
The idea that we can't perceive objective reality in totality isn't new. We know everyone comes installed with cognitive biases and ego defense mechanisms. Our senses can be tricked by mirages and magicians. And for every person who sees a duck, another sees a rabbit.
But Hoffman's hypothesis, which he wrote about in a recent issue of New Scientist, takes it a step further. He argues our perceptions don't contain the slightest approximation of reality; rather, they evolved to feed us a collective delusion to improve our fitness.
Using evolutionary game theory, Hoffman and his collaborators created computer simulations to observe how "truth strategies" (which see objective reality as is) compared with "pay-off strategies" (which focus on survival value). The simulations put organisms in an environment with a resource necessary to survival but only in Goldilocks proportions.
Consider water. Too much water, the organism drowns. Too little, it dies of thirst. Between these extremes, the organism slakes its thirst and lives on to breed another day.
Truth-strategy organisms who see the water level on a color scale — from red for low to green for high — see the reality of the water level. However, they don't know whether the water level is high enough to kill them. Pay-off-strategy organisms, conversely, simply see red when water levels would kill them and green for levels that won't. They are better equipped to survive.
"[E]volution ruthlessly selects against truth strategies and for pay-off strategies," writes Hoffman. "An organism that sees objective reality is always less fit than an organism of equal complexity that sees fitness pay-offs. Seeing objective reality will make you extinct."
Since humans aren't extinct, the simulation suggests we see an approximation of reality that shows us what we need to see, not how things really are.
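Hoffman's published simulations use evolutionary game theory; the following is only a minimal toy in that spirit, not his actual model, and every name and number in it is invented for illustration. A "truth" perceiver sees which of two patches holds more water, while a "pay-off" perceiver sees which patch is better for survival, and both are limited to the same crude one-bit red/green signal:

```python
import random

def fitness_payoff(level):
    # Goldilocks fitness: peaks at a water level of 50,
    # falls to zero at drought (0) and drowning (100)
    return max(0.0, 1.0 - abs(level - 50) / 50)

def perceive(level, strategy):
    # Both strategies get the same one-bit channel: red (0) or green (1)
    if strategy == "truth":
        return 1 if level >= 50 else 0               # green = "more water"
    return 1 if fitness_payoff(level) >= 0.5 else 0  # green = "good for me"

def mean_fitness(strategy, trials=20_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a, b = rng.uniform(0, 100), rng.uniform(0, 100)
        pa, pb = perceive(a, strategy), perceive(b, strategy)
        if pa != pb:
            choice = a if pa > pb else b   # forage where it looks green
        else:
            choice = rng.choice([a, b])    # tie: pick at random
        total += fitness_payoff(choice)
    return total / trials

truth = mean_fitness("truth")
payoff = mean_fitness("pay-off")
print(f"truth strategy mean fitness:   {truth:.3f}")
print(f"pay-off strategy mean fitness: {payoff:.3f}")
```

The truth perceiver faithfully reports water quantity, but "more water" often means "past the survivable peak," so its choices track reality rather than fitness; the pay-off perceiver wins on average despite seeing nothing of the true water level.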
Hoffman likens this approximation to a desktop interface. When a novelist boots up their computer, they see an icon on their desktop that represents their novel. It's green, rectangular, and sits on the screen, but the document has none of those qualities intrinsically. It's a complex string of 1s and 0s that manifests as software running as an electric current through a circuit board.
If writers had to manipulate binary to write a novel, or hunter-gatherers had to perceive physics to throw a spear, chances are both would have gone extinct a long time ago.
"In like manner, we create an apple when we look, and destroy it when we look away. Something exists when we don't look, but it isn't an apple, and is probably nothing like an apple," Hoffman writes. "The human perception of an apple is a data structure that indicates something edible (a fitness pay-off) and how to eat it. We create these data structures with a glance, and erase them with a blink. Physical objects, and indeed the space and time they exist in, are evolution's way of presenting fitness pay-offs in a compact and usable form."
Consciousness all the way down
At this point, you are likely wondering, "Well, then what is reality? If my dog is only a data structure indicating a furry creature that enjoys fetch and hates baths, then what lies beneath that representation?"
For Hoffman the answer is consciousness.
When neuroscientists and philosophers develop theories of consciousness, they traditionally look at the brain. If Hoffman is correct, they can't completely understand consciousness via brain activity, because they are looking at an icon of a material organ that exists in space and time. Not reality.
Hoffman wants to start with a mathematical theory of consciousness as a baseline — looking at consciousness outside of matter and the space-time it may not inhabit. His theory further calls for a potentially infinite interaction of conscious agents, from the simple to the complex. In this formulation, consciousness may even exist beyond the organic world, all the way down to electrons and protons.
"I'm denying that there is such a thing in objective reality as an electron with a position. I'm saying that the very framework of space and time and matter and spin is the wrong framework, it's the wrong language to describe reality," Hoffman told journalist Robert Wright in an interview. "I'm saying let's go all the way: It's consciousness, and only consciousness, all the way down."
Hoffman calls this view "conscious realism." He argues that, if proven correct, it could make headway on such intractable quandaries as the mind-body problem, the odd nature of the quantum world, and the much sought-after "theory of everything."
"Reality may never seem the same again," Hoffman writes.
Simulation tested, science approved?
Hoffman's hypothesis is fascinating, and if you need a subject for a bar-side bull session, you could do worse. But before anybody suffers an existential meltdown, it's worth noting that the hypothesis is just that. A hypothesis. It has a way to go before overturning the hypothesis that the brain manifests consciousness, and its detractors have thrown down a few gauntlets.
One such critique argues that while we may not perceive reality as it is, that doesn't mean our perception is not reasonably accurate. Hoffman would argue we see an icon that represents a snake, not a snake. But then why do nonpoisonous snakes evolve colorings to match poisonous ones? If there is no objective reality to mimic, why would mimicry prove a useful adaptation, and why would the interfaces of multiple species be fooled by such tricks?
Another concern is a chicken-and-egg problem, as Wright pointed out in their discussion. Current orthodoxy argues the universe existed for billions of years before life emerged. This means the first living organisms began their evolutionary tracks by responding to a preexistent inorganic, unconscious environment.
If Hoffman's argument is correct and consciousness is primary, then why develop life and the illusion of reality? Why are some of these unreal symbols ultimately so harmful to consciousness? The network of consciousnesses, one assumes, got along without life for billions of years.
This is why Michael Shermer equates Hoffman's argument to something akin to the "God of the gaps." He writes:
"No one denies that consciousness is a hard problem. But before we reify consciousness to the level of an independent agency capable of creating its own reality, let's give the hypotheses we do have for how brains create mind more time. Because we know for a fact that measurable consciousness dies when the brain dies, until proved otherwise, the default hypothesis must be that brains cause consciousness. I am, therefore I think."
Then there's the issue of whether Hoffman's hypothesis is self-defeating. If our perceptions of reality are merely species-specific interfaces overlaid upon reality, how do we know consciousness is not simply another such icon? Maybe the "I" of everyday experience is a useful fantasy adapted to benefit the survival and reproduction of the gene and not part of the operating system of reality.
None of this is to say that Hoffman and others can't meet these challenges with further research. We'll see. It's just to say that there's a lot of room for exploration into some fascinating ideas. As Hoffman would agree:
"[This theory] has made life far more interesting," he told Wright. "There's lots to explore, a lot I don't know, and things that I thought I knew I had to give up. And so, it makes life far more interesting for me."
Andy Samberg and Cristin Milioti get stuck in an infinite wedding time loop.
- Two wedding guests discover they're trapped in an infinite time loop, waking up in Palm Springs over and over and over.
- As the reality of their situation sets in, Nyles and Sarah decide to enjoy the repetitive awakenings.
- The film is perfectly timed for a world sheltering at home during a pandemic.
Richard Feynman once asked a silly question. Two MIT students just answered it.
Here's a fun experiment to try. Go to your pantry and see if you have a box of spaghetti. If you do, take out a noodle. Grab both ends of it and bend it until it breaks in half. How many pieces did it break into? If you got two large pieces and at least one small piece, you're not alone.
But science loves a good challenge
The mystery remained unsolved until 2005, when French scientists Basile Audoly and Sebastien Neukirch won an Ig Nobel Prize, an award given to scientists for real work of a less serious nature than the discoveries that win Nobel prizes, for finally determining why this happens. Their paper describing the effect is wonderfully funny to read, as it takes such a banal issue so seriously.
They demonstrated that when a rod is bent past a certain point, such as when spaghetti is snapped in half by bending it at the ends, a "snapback effect" is created. This causes energy to reverberate from the initial break to other parts of the rod, often leading to a second break elsewhere.
While this settled the issue of why spaghetti noodles break into three or more pieces, it didn't establish whether they always had to break this way. The question of whether the snapback could be regulated remained unsettled.
Physicists, being themselves, immediately wanted to use this information to break pasta into exactly two pieces
Ronald Heisser and Vishal Patil, two graduate students currently at Cornell and MIT respectively, read about Feynman's night of noodle snapping in class and were inspired to find out what could be done to make sure the pasta always broke in two.
By placing the noodles in a special machine built for the task and recording the bending with a high-speed camera, the young scientists were able to observe in extreme detail exactly what each change in their snapping method did to the pasta. After breaking more than 500 noodles, they found the solution: twist the noodle nearly a full turn before slowly bending it. The twist dissipates the snapback energy, and the noodle breaks cleanly in two.
The apparatus the MIT researchers built specifically for the task of snapping hundreds of spaghetti sticks.
(Courtesy of the researchers)
What possible application could this have?
The snapback effect is not limited to uncooked pasta noodles and applies to rods of all sorts. The discovery of how to cleanly break them in two could be applied to future engineering projects.
Likewise, knowing how things fragment and fail is always handy when you're trying to build things. Carbon nanotubes, super-strong cylinders often hailed as the building material of the future, are also rods that can be better understood thanks to this odd experiment.
Sometimes big discoveries are inspired by silly questions. If it hadn't been for Richard Feynman bending noodles seventy years ago, we wouldn't know what we now know about how energy disperses through rods and how to control their fracturing. While not all silly questions will lead to such a significant discovery, they can all help us learn.
The multifaceted cerebellum is large — it's just tightly folded.
- A powerful MRI combined with modeling software results in a totally new view of the human cerebellum.
- The so-called 'little brain' is nearly 80% the size of the cerebral cortex when it's unfolded.
- This part of the brain is associated with a lot of things, and a new virtual map is suitably chaotic and complex.
Just under our brain's cortex and close to our brain stem sits the cerebellum, also known as the "little brain." It's an organ many animals have, and we're still learning what it does in humans. It's long been thought to be involved in sensory input and motor control, but recent studies suggest it also plays a role in a lot of other things, including emotion, thought, and pain. After all, about half of the brain's neurons reside there. But it's so small. Except it's not, according to a new study from San Diego State University (SDSU) published in PNAS (Proceedings of the National Academy of Sciences).
A neural crêpe
A new imaging study led by psychology professor and cognitive neuroscientist Martin Sereno of the SDSU MRI Imaging Center reveals that the cerebellum is actually an intricately folded organ that has a surface area equal in size to 78 percent of the cerebral cortex. Sereno, a pioneer in MRI brain imaging, collaborated with other experts from the U.K., Canada, and the Netherlands.
So what does it look like? Unfolded, the cerebellum is reminiscent of a crêpe, according to Sereno, about four inches wide and three feet long.
The team didn't physically unfold a cerebellum in their research. Instead, they worked with brain scans from a 9.4 Tesla MRI machine, and virtually unfolded and mapped the organ. Custom software was developed for the project, based on the open-source FreeSurfer app developed by Sereno and others. Their model allowed the scientists to unpack the virtual cerebellum down to each individual fold, or "folia."
Study's cross-sections of a folded cerebellum
Image source: Sereno, et al.
A complicated map
Sereno tells SDSU NewsCenter that "Until now we only had crude models of what it looked like. We now have a complete map or surface representation of the cerebellum, much like cities, counties, and states."
That map is a bit surprising, too, in that regions associated with different functions are scattered across the organ in peculiar ways, unlike the cortex where it's all pretty orderly. "You get a little chunk of the lip, next to a chunk of the shoulder or face, like jumbled puzzle pieces," says Sereno. This may have to do with the fact that when the cerebellum is folded, its elements line up differently than they do when the organ is unfolded.
It seems the folded structure of the cerebellum is a configuration that facilitates access to information coming from places all over the body. Sereno says, "Now that we have the first high resolution base map of the human cerebellum, there are many possibilities for researchers to start filling in what is certain to be a complex quilt of inputs, from many different parts of the cerebral cortex in more detail than ever before."
This makes sense if the cerebellum is involved in highly complex, advanced cognitive functions, such as handling language or performing abstract reasoning as scientists suspect. "When you think of the cognition required to write a scientific paper or explain a concept," says Sereno, "you have to pull in information from many different sources. And that's just how the cerebellum is set up."
Bigger and bigger
The study also suggests that the large size of their virtual human cerebellum is likely to be related to the sheer number of tasks with which the organ is involved in the complex human brain. The macaque cerebellum that the team analyzed, for example, amounts to just 30 percent the size of the animal's cortex.
"The fact that [the cerebellum] has such a large surface area speaks to the evolution of distinctively human behaviors and cognition," says Sereno. "It has expanded so much that the folding patterns are very complex."
As the study says, "Rather than coordinating sensory signals to execute expert physical movements, parts of the cerebellum may have been extended in humans to help coordinate fictive 'conceptual movements,' such as rapidly mentally rearranging a movement plan — or, in the fullness of time, perhaps even a mathematical equation."
Sereno concludes, "The 'little brain' is quite the jack of all trades. Mapping the cerebellum will be an interesting new frontier for the next decade."
What happens if we consider welfare programs as investments?
- A recently published study suggests that some welfare programs more than pay for themselves.
- It is one of the first major reviews of welfare programs to measure so many by a single metric.
- The findings will likely inform future welfare reform and encourage debate on how to grade success.
Welfare as an investment
The study, carried out by Nathaniel Hendren and Ben Sprung-Keyser of Harvard University, reviews 133 welfare programs through a single lens. The authors measured each program's "Marginal Value of Public Funds" (MVPF), defined as the ratio of the recipients' willingness to pay for a program to its net cost to the government.
A program with an MVPF of one provides precisely as much in net benefits as it costs to deliver them. For an illustration, imagine a program that hands someone a dollar. If getting that dollar doesn't alter their behavior, then the MVPF of that program is one. If it discourages them from working, then the program's cost goes up, since the government loses tax revenue in addition to paying out the dollar upfront; the MVPF falls below one.
Lastly, it is possible that getting the dollar causes the recipient to further their education and get a job that pays more taxes in the future, lowering the program's long-run cost and raising the MVPF. The ratio can even hit infinity when a program fully "pays for itself."
These are only a few of many possible cases, but the pattern holds: an MVPF above one means a program delivers more in benefits than it costs, a value of one means it breaks even, and a value below one means it costs more than the direct price of the benefits would suggest.
After determining each program's cost from the existing literature and estimating willingness to pay through statistical analysis, the authors examined 133 programs spanning social insurance, education and job training, tax and cash transfers, and in-kind transfers. The results show that some programs turn a "profit" for the government, mainly when they are focused on children:
This figure shows the MVPF for a variety of policies alongside the typical age of the beneficiaries. Clearly, programs targeted at children have a higher payoff.
Nathaniel Hendren and Ben Sprung-Keyser
Programs like child health services and K-12 education spending have infinite MVPF values. The authors argue this is because these programs allow children to live healthier, more productive lives and earn more money, which enables them to pay more taxes later. The preschool initiatives examined don't manage this as well and have a lower "profit" rate despite decent MVPF ratios.
On the other hand, things like tuition deductions for older adults don't make back the money they cost. This is likely for several reasons, not the least of which is that there is less time for the beneficiary to pay the government back in taxes. Disability insurance was likewise "unprofitable," as those collecting it have a reduced need to work and pay less back in taxes.
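The MVPF arithmetic described above can be sketched in a few lines. This is an illustrative toy, not the authors' actual methodology; the function and parameter names are invented for the example:

```python
def mvpf(willingness_to_pay, upfront_cost, fiscal_externality=0.0):
    """Marginal Value of Public Funds: recipients' willingness to pay
    divided by the program's net cost to the government.

    fiscal_externality (hypothetical name): the change in long-run cost
    from behavioral responses, e.g. positive if recipients work less,
    negative if they later pay more in taxes.
    """
    net_cost = upfront_cost + fiscal_externality
    if net_cost <= 0:
        return float("inf")  # the program fully "pays for itself"
    return willingness_to_pay / net_cost

# A $1 transfer with no behavioral response: MVPF = 1 (breaks even)
print(mvpf(1.0, 1.0))
# The recipient works less, raising net cost: MVPF falls below 1
print(mvpf(1.0, 1.0, fiscal_externality=0.25))
# Future tax revenue exceeds the upfront cost: infinite MVPF
print(mvpf(1.0, 1.0, fiscal_externality=-1.5))
```

The interesting design choice in the paper's framework is that behavioral responses enter the denominator as costs (or cost offsets) rather than being netted against benefits, which is what lets a child-focused program register an infinite MVPF.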