Here's how to prove that you are a simulation and nothing is real
How do you know you are real? A classic paper by philosopher Nick Bostrom argues you are likely a simulation.
- Philosopher Nick Bostrom argues that humans are likely computer simulations in the "Simulation Hypothesis".
- Bostrom thinks advanced civilizations of posthumans will have technology to simulate their ancestors.
- Elon Musk and others support this idea.
Are we living in a computer-driven simulation? That seems like an impossible hypothesis to prove. But let's look at just how impossible it really is.
For some machine to be able to conjure up our whole reality, it would need to be amazingly powerful, able to keep track of an incalculable number of variables. Consider the course of just one human lifetime, with all the events it entails and all the materials, ideas, and people one interacts with along the way. Then multiply that by the roughly one hundred billion souls who have graced this planet with their presence so far. The interactions among all those people, as well as among the animals, plants, bacteria, and planetary bodies (really, all the elements we know and don't know to be part of this world) are what constitute the reality you encounter today.
Composing all that would require coordinating an almost unimaginable amount of data. Yet it is only "almost" unimaginable. The fact that we can, right now in this article, attempt to put a number on it is what makes the idea potentially possible.
So how much data are we talking about? And how would such a machine work?
In 2003, the Swedish philosopher Nick Bostrom, who teaches at the University of Oxford, published an influential paper on the subject called "Are You Living in a Computer Simulation?"
In the paper, Bostrom argues that future people will likely have super-powerful computers on which they could run simulations of their "forebears". These simulations would be so good that the simulated people would think they are conscious. In that case, it's likely that we are among such "simulated minds" rather than "the original biological ones."
In fact, if we don't believe we are simulations, concludes Bostrom, then "we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears." If you accept one premise (that you'll have powerful super-computing descendants), you have to accept the other (you are a simulation).
That's pretty heavy stuff. How to unpack it?
As he goes into the details of his argument, Bostrom writes that within the philosophy of mind, it is possible to conjecture that an artificially created system could be made to have "conscious experiences" as long as it is equipped with "the right sort of computational structures and processes." It's presumptuous to assume that only "carbon-based biological neural networks inside a cranium" (your head) can give rise to consciousness. Silicon processors in a computer could potentially be made to do the same thing.
Of course, at this point in time, this isn't something our computers can do. But given the current rate of progress and what we know of the constraints imposed by physical laws, we can imagine civilizations able to build such machines, even turning planets and stars into giant computers. These could be quantum computers or nuclear-powered ones, but whatever form they took, they could probably run amazingly detailed simulations.
In fact, there is a number to represent the kind of power needed to emulate a human brain's functionality, which Bostrom gives as ranging from 10^14 to 10^17 operations per second. A machine that hits that kind of speed could run a reasonable approximation of a human mind.
Simulating the whole universe, including all the details "down to the quantum level," requires more computing oomph, to the point that it may be "unfeasible," thinks Bostrom. But that may not really be necessary: all the future humans or posthumans would need to do is simulate the human experience of the universe. They'd just need to make sure the simulated minds don't pick up on any inconsistencies or "irregularities." You wouldn't have to recreate things the human mind wouldn't ordinarily notice, like events at the microscopic level.
Representing the goings-on among distant planetary bodies could also be compressed; there's no need for amazing detail there, certainly not at this point. The machines would just need to do a good enough job. Since they would keep track of what all the simulated minds believe, they could fill in the necessary details on demand. They could also edit out any errors that happened to occur.
Bostrom even provides a number for simulating all of human history, which he puts at around 10^33 to 10^36 operations. That would be the target for a sufficiently sophisticated virtual reality program, based on what we already know about how such programs work. In fact, a single computer with the mass of a planet could likely pull off such a task "by using less than one millionth of its processing power for one second," thinks the philosopher. A highly advanced future civilization could build countless such machines.
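As a rough sanity check, these order-of-magnitude figures can be multiplied out. The lifespan and population values below are round assumptions; the brain estimate of 10^14-10^17 operations per second and the ~10^42 operations per second for a planetary-mass computer are the figures Bostrom cites:

```python
# Rough, order-of-magnitude check of the simulation-argument figures.
# All numbers are round assumptions except the brain and
# planetary-computer estimates, which come from Bostrom's paper.

people_ever = 10**11                      # ~100 billion humans so far
seconds_per_life = 50 * 365 * 24 * 3600   # assume a ~50-year average lifespan
ops_per_brain_second = 10**14             # low end of Bostrom's brain estimate

# Total operations to run every human mind that has ever existed:
total_ops = people_ever * seconds_per_life * ops_per_brain_second
print(f"total operations: {total_ops:.1e}")  # lands inside Bostrom's 10^33-10^36

# One millionth of a planetary-mass computer's power, for one second:
planet_budget = 10**42 * 10**-6
print(planet_budget >= total_ops)  # True
```

Even with these crude inputs, the total comes out around 10^34, comfortably within Bostrom's range and well under what a single planetary-mass computer could spare.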
What could counter such a proposal? Bostrom considers in his paper the possibility that humanity will destroy itself, or be destroyed by an outside event like a giant meteor, before it reaches this posthuman, simulation-running stage. There are actually many ways in which humanity could remain stuck at a primitive stage and never build the hypothetical computers needed to simulate entire minds. He even allows for the possibility of our civilization going extinct courtesy of human-created self-replicating nanorobots that turn into "mechanical bacteria".
Another point against us living in a simulation would be that future posthumans might not care to, or be allowed to, run such programs at all. Why do it? What's the upside of creating "ancestor simulations"? He thinks it's not likely that the practice of running such simulations would be so widely regarded as immoral that it would be banned everywhere. Also, knowing human nature, it's unlikely that no one in the future would find such a project interesting. This is the kind of thing we would do today if we could, and chances are we would still want to do it in the far distant future.
"Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor‐simulation," writes Bostrom.
A fascinating outcome of all this speculation is that we have no way of knowing what the true reality of existence really is. Our minds are likely accessing just a small fraction of the "totality of physical existence." What we think we are may be run on virtual machines that are themselves run on other virtual machines, like a nesting doll of simulations, making it nearly impossible for us to see through to the true nature of things. Even the posthumans simulating us could themselves be simulated. As such, there could be many levels of reality, concludes Bostrom. The future us might never know whether they are at the "fundamental" or "basement" level.
Interestingly, this uncertainty gives rise to universal ethics. If you don't know you are the original, you better behave or the godlike beings above you will intervene.
What are other implications of these lines of reasoning? Ok, let's assume we are living in a simulation – now what? Bostrom doesn't think our behavior should be affected much, even with such heavy knowledge, especially as we don't know the true motivations of future humans behind creating the simulated minds. They might have entirely different value systems.
You can take the plunge and read the full paper by Nick Bostrom for yourself here.
Check out Nick Bostrom's TED talk on superintelligence:
Geologists discover a rhythm to major geologic events.
- It appears that Earth has a geologic "pulse," with clusters of major events occurring every 27.5 million years.
- Working with the most accurate dating methods available, the authors of the study constructed a new history of the last 260 million years.
- Exactly why these cycles occur remains unknown, but there are some interesting theories.
Our hearts beat at a resting rate of 60 to 100 beats per minute. Lots of other things pulse, too. The colors we see and the pitches we hear, for example, are due to the different wave frequencies ("pulses") of light and sound waves.
Now, a study in the journal Geoscience Frontiers finds that Earth itself has a pulse, with one "beat" every 27.5 million years. That's the rate at which major geological events have been occurring as far back as geologists can tell.
According to lead author and geologist Michael Rampino of New York University's Department of Biology, "Many geologists believe that geological events are random over time. But our study provides statistical evidence for a common cycle, suggesting that these geologic events are correlated and not random."
The new study is not the first time that there's been a suggestion of a planetary geologic cycle, but it's only with recent refinements in radioisotopic dating techniques that there's evidence supporting the theory. The authors of the study collected the latest, best dating for 89 known geologic events over the last 260 million years:
- 29 sea level fluctuations
- 12 marine extinctions
- 9 land-based extinctions
- 10 periods of low ocean oxygenation
- 13 gigantic flood basalt volcanic eruptions
- 8 changes in the rate of seafloor spreading
- 8 global pulses of intraplate magmatism
The dates provided the scientists a new timetable of Earth's geologic history.
Tick, tick, boom
Putting all the events together, the scientists performed a series of statistical analyses that revealed that events tend to cluster around 10 different dates, with peak activity occurring every 27.5 million years. Between the ten busy periods, the number of events dropped sharply, approaching zero.
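The idea behind this kind of clustering analysis can be sketched in a few lines. The toy version below uses synthetic event ages and a simple circular-statistics score; it is purely an illustration of detecting a common cycle, not the study's actual data or spectral methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic event ages (millions of years ago): clusters every 27.5 Myr
# with some scatter, spanning the last 260 Myr. These are made-up data.
true_period = 27.5
cluster_centers = np.arange(0, 260, true_period)
ages = np.concatenate([c + rng.normal(0, 2.0, size=9) for c in cluster_centers])

def circular_score(ages, period):
    """Mean resultant length: near 1 if events pile up at one phase of
    the given period, near 0 if they are spread uniformly across it."""
    phases = 2 * np.pi * (ages % period) / period
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

# Scan candidate periods and pick the one with the strongest clustering.
periods = np.arange(10.0, 60.0, 0.5)
scores = [circular_score(ages, p) for p in periods]
best = periods[int(np.argmax(scores))]
print(f"best-fitting period: {best} Myr")
```

On these synthetic ages, the scan recovers a best-fitting period close to the 27.5 Myr used to generate them; on uniformly scattered ages, the score would stay low at every candidate period.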
Perhaps the most fascinating question that remains unanswered for now is exactly why this is happening. The authors of the study suggest two possibilities:
"The correlations and cyclicity seen in the geologic episodes may be entirely a function of global internal Earth dynamics affecting global tectonics and climate, but similar cycles in the Earth's orbit in the Solar System and in the Galaxy might be pacing these events. Whatever the origins of these cyclical episodes, their occurrences support the case for a largely periodic, coordinated, and intermittently catastrophic geologic record, which is quite different from the views held by most geologists."
Assuming the researchers' calculations are at least roughly correct — the authors note that different statistical formulas may result in further refinement of their conclusions — there's no need to worry that we're about to be thumped by another planetary heartbeat. The last occurred some seven million years ago, meaning the next won't happen for about another 20 million years.
Brain cells snap strands of DNA in many more places and cell types than researchers previously thought.
The urgency to remember a dangerous experience requires the brain to make a series of potentially dangerous moves: Neurons and other brain cells snap open their DNA in numerous locations — more than previously realized, according to a new study — to provide quick access to genetic instructions for the mechanisms of memory storage.
The extent of these DNA double-strand breaks (DSBs) in multiple key brain regions is surprising and concerning, says study senior author Li-Huei Tsai, Picower Professor of Neuroscience at MIT and director of The Picower Institute for Learning and Memory, because while the breaks are routinely repaired, that process may become more flawed and fragile with age. Tsai's lab has shown that lingering DSBs are associated with neurodegeneration and cognitive decline and that repair mechanisms can falter.
"We wanted to understand exactly how widespread and extensive this natural activity is in the brain upon memory formation because that can give us insight into how genomic instability could undermine brain health down the road," says Tsai, who is also a professor in the Department of Brain and Cognitive Sciences and a leader of MIT's Aging Brain Initiative. "Clearly, memory formation is an urgent priority for healthy brain function, but these new results showing that several types of brain cells break their DNA in so many places to quickly express genes is still striking."
In 2015, Tsai's lab provided the first demonstration that neuronal activity caused DSBs and that they induced rapid gene expression. But those findings, mostly made in lab preparations of neurons, did not capture the full extent of the activity in the context of memory formation in a behaving animal, and did not investigate what happened in cells other than neurons.
In the new study published July 1 in PLOS ONE, lead author and former graduate student Ryan Stott and co-author and former research technician Oleg Kritsky sought to investigate the full landscape of DSB activity in learning and memory. To do so, they gave mice little electrical zaps to the feet when they entered a box, to condition a fear memory of that context. They then used several methods to assess DSBs and gene expression in the brains of the mice over the next half-hour, particularly among a variety of cell types in the prefrontal cortex and hippocampus, two regions essential for the formation and storage of conditioned fear memories. They also made measurements in the brains of mice that did not experience the foot shock to establish a baseline of activity for comparison.
The creation of a fear memory doubled the number of DSBs among neurons in the hippocampus and the prefrontal cortex, affecting more than 300 genes in each region. Among 206 affected genes common to both regions, the researchers then looked at what those genes do. Many were associated with the function of the connections neurons make with each other, called synapses. This makes sense because learning arises when neurons change their connections (a phenomenon called "synaptic plasticity") and memories are formed when groups of neurons connect together into ensembles called engrams.
"Many genes essential for neuronal function and memory formation, and significantly more of them than expected based on previous observations in cultured neurons … are potentially hotspots of DSB formation," the authors wrote in the study.
In another analysis, the researchers confirmed through measurements of RNA that the increase in DSBs indeed correlated closely with increased transcription and expression of affected genes, including ones affecting synapse function, as quickly as 10-30 minutes after the foot shock exposure.
"Overall, we find transcriptional changes are more strongly associated with [DSBs] in the brain than anticipated," they wrote. "Previously we observed 20 gene-associated [DSB] loci following stimulation of cultured neurons, while in the hippocampus and prefrontal cortex we see more than 100-150 gene associated [DSB] loci that are transcriptionally induced."
Snapping with stress
In the analysis of gene expression, the neuroscientists looked at not only neurons but also non-neuronal brain cells, or glia, and found that they also showed changes in expression of hundreds of genes after fear conditioning. Glia called astrocytes are known to be involved in fear learning, for instance, and they showed significant DSB and gene expression changes after fear conditioning.
Among the most important functions of genes associated with fear conditioning-related DSBs in glia was the response to hormones. The researchers therefore looked to see which hormones might be particularly involved and discovered that it was glucocorticoids, which are secreted in response to stress. Sure enough, the study data showed that in glia, many of the DSBs that occurred following fear conditioning occurred at genomic sites related to glucocorticoid receptors. Further tests revealed that directly stimulating those hormone receptors could trigger the same DSBs that fear conditioning did and that blocking the receptors could prevent transcription of key genes after fear conditioning.
Tsai says the finding that glia are so deeply involved in establishing memories from fear conditioning is an important surprise of the new study.
"The ability of glia to mount a robust transcriptional response to glutocorticoids suggest that glia may have a much larger role to play in the response to stress and its impact on the brain during learning than previously appreciated," she and her co-authors wrote.
Damage and danger?
More research will have to be done to prove that the DSBs required for forming and storing fear memories are a threat to later brain health, but the new study only adds to evidence that it may be the case, the authors say.
"Overall we have identified sites of DSBs at genes important for neuronal and glial functions, suggesting that impaired DNA repair of these recurrent DNA breaks which are generated as part of brain activity could result in genomic instability that contribute to aging and disease in the brain," they wrote.
The National Institutes of Health, The Glenn Foundation for Medical Research, and the JPB Foundation provided funding for the research.
Research shows that those who spend more time speaking tend to emerge as the leaders of groups, regardless of their intelligence.
- A new study proposes the "babble hypothesis" of becoming a group leader.
- Researchers show that intelligence is not the most important factor in leadership.
- Those who talk the most tend to emerge as group leaders.
If you want to become a leader, start yammering. It doesn't even necessarily matter what you say. New research shows that groups without a leader can find one if somebody starts talking a lot.
This phenomenon, described by the "babble hypothesis" of leadership, depends neither on group member intelligence nor personality. Leaders emerge based on the quantity of speaking, not quality.
Researcher Neil G. MacLaren, lead author of the study published in The Leadership Quarterly, believes his team's work may improve how groups are organized and how individuals within them are trained and evaluated.
"It turns out that early attempts to assess leadership quality were found to be highly confounded with a simple quantity: the amount of time that group members spoke during a discussion," shared MacLaren, who is a research fellow at Binghamton University.
While we tend to think of leaders as people who share important ideas, leadership may boil down to whoever "babbles" the most. Understanding the connection between how much people speak and how they become perceived as leaders is key to growing our knowledge of group dynamics.
The power of babble
The research involved 256 college students, divided into 33 groups of four to ten people each. They were asked to collaborate on either a military computer simulation game (BCT Commander) or a business-oriented game (CleanStart). The players had ten minutes to plan how they would carry out a task and 60 minutes to accomplish it as a group. One person in the group was randomly designated as the "operator," whose job was to control the user interface of the game.
To determine who became the leader of each group, the researchers asked the participants both before and after the game to nominate one to five people for this distinction. The scientists found that those who talked more were also more likely to be nominated. This remained true after controlling for a number of variables, such as previous knowledge of the game, various personality traits, or intelligence.
In an interview with PsyPost, MacLaren shared that "the evidence does seem consistent that people who speak more are more likely to be viewed as leaders."
Another finding was that gender bias seemed to have a strong effect on who was considered a leader. "In our data, men receive on average an extra vote just for being a man," explained MacLaren. "The effect is more extreme for the individual with the most votes."
The great theoretical physicist Steven Weinberg passed away on July 23. This is our tribute.