How do you know you are real? A classic paper by philosopher Nick Bostrom argues you are likely living in a simulation.
- In what has become known as the "simulation argument", philosopher Nick Bostrom contends that humans are likely computer simulations.
- Bostrom thinks advanced civilizations of posthumans will have technology to simulate their ancestors.
- Elon Musk and others support this idea.
Are we living in a computer-driven simulation? That seems like an impossible hypothesis to prove. But let's examine just how impossible it really is.
For some machine to conjure up our whole reality, it would need to be astoundingly powerful, able to keep track of an incalculable number of variables. Consider the course of just one human lifetime, with all the events it entails - all the materials, ideas and people one interacts with throughout an average lifespan. Then multiply that by the roughly hundred billion souls who have graced this planet with their presence so far. The interactions between all these people, as well as between all the animals, plants, bacteria and planetary bodies - really, all the elements we know and don't know to be part of this world - are what constitute the reality you encounter today.
Composing all that would require coordinating an almost unimaginable amount of data. Yet it's only "almost" unimaginable. The fact that we can, right now in this article, attempt to put a number on it is what makes the scenario potentially possible.
So how much data are we talking about? And how would such a machine work?
In 2003, the Swedish philosopher Nick Bostrom, who teaches at the University of Oxford, published an influential paper titled "Are You Living in a Computer Simulation?" that tackles exactly this question.
In the paper, Bostrom argues that future people will likely have super-powerful computers on which they could run simulations of their "forebears". These simulations would be so good that the simulated minds would be conscious. In that case, it's likely that we are among such "simulated minds" rather than "the original biological ones."
In fact, if we don't believe we are simulations, concludes Bostrom, then "we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears." If you accept one premise (that we will have powerful super-computing descendants who run such simulations), you have to accept the other (that you are probably a simulation yourself).
That's pretty heavy stuff. How to unpack it?
As he goes into the details of his argument, Bostrom writes that, within the philosophy of mind, it is possible to conjecture that an artificially created system could have "conscious experiences" as long as it is equipped with "the right sort of computational structures and processes." It would be presumptuous to assume that only "carbon-based biological neural networks inside a cranium" (your head) can give rise to consciousness. Silicon processors in a computer could potentially be made to do the same.
Of course, this isn't something our computers can do at present. But given the current rate of progress and what we know of the constraints imposed by physical laws, we can imagine civilizations capable of building such machines, even turning planets and stars into giant computers. Whatever form these take - quantum or otherwise - they could probably run amazingly detailed simulations.
In fact, there is a number that represents the kind of power needed to emulate a human brain's functionality, which Bostrom puts at roughly 10^14 to 10^17 operations per second. Hit that kind of computing speed, and you can run a reasonable approximation of a human mind within the machine.
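A back-of-envelope calculation, using rough textbook figures rather than Bostrom's own derivation (the neuron count, synapse counts and firing rate below are illustrative assumptions), lands near the top of that range:

```python
# Crude estimate of the brain's processing rate: neurons x synapses
# per neuron x signals per synapse per second. All three figures are
# rough, commonly quoted ballpark values, not Bostrom's exact inputs.
NEURONS = 1e11                  # ~100 billion neurons
SYN_LOW, SYN_HIGH = 1e3, 1e4    # synapses per neuron (low/high guess)
FIRING_RATE_HZ = 1e2            # ~100 signals per synapse per second

low = NEURONS * SYN_LOW * FIRING_RATE_HZ     # 1e16 ops/s
high = NEURONS * SYN_HIGH * FIRING_RATE_HZ   # 1e17 ops/s
print(f"{low:.0e} to {high:.0e} operations per second")
```

The result, 10^16 to 10^17 operations per second, sits within Bostrom's quoted range; his lower bound of 10^14 comes from a different estimation method.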
Simulating the whole universe "down to the quantum level" would require far more computing oomph - so much that it may be "unfeasible," thinks Bostrom. But that may not be necessary: all the future humans or posthumans would need to do is simulate the human experience of the universe. They'd just have to make sure the simulated minds never notice any "irregularities" - anything that doesn't look consistent. You wouldn't have to recreate things the human mind wouldn't ordinarily notice, like events at the microscopic level.
Representing the goings-on of distant planetary bodies could also be compressed - there's no need for amazing detail there, certainly not yet. The machines would just need to do a good enough job. Since they would keep track of what all the simulated minds believe, they could fill in the necessary details on demand. They could also edit out any errors that happen to occur.
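This "fill in details on demand" strategy is familiar from software: expensive detail is computed lazily, only when something actually observes it, and cached afterward so repeated observations stay consistent. A toy sketch (the function and labels are purely illustrative, not from Bostrom's paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember details already rendered, for consistency
def detail_at(region: str, zoom: int) -> str:
    # Stand-in for an expensive fine-grained simulation step; in this
    # toy, "rendering" a region just produces a descriptive label.
    return f"{region} rendered at zoom level {zoom}"

# Nothing is computed until an observer actually looks:
coarse = detail_at("Andromeda", 0)        # rendered on first observation
fine = detail_at("microscope slide", 9)   # fine detail only where needed
print(coarse)
```

Looking at the same region twice returns the cached result, which is exactly the consistency guarantee the simulated minds would need.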
Bostrom even provides a number for simulating all of human history, which he puts at around 10^33 to 10^36 operations. That would be the budget for a sufficiently sophisticated virtual-reality program, based on what we already know about how such programs work. In fact, a single computer with the mass of a planet could likely pull off the task "by using less than one millionth of its processing power for one second," thinks the philosopher. A highly advanced future civilization could build countless such machines.
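That "one millionth for one second" claim can be sanity-checked with simple arithmetic. The figure of roughly 10^42 operations per second for a planet-mass computer is an assumption here (it is what the quoted claim implies: a millionth of capacity for one second yielding 10^36 operations), not a number stated in this article:

```python
# Integer arithmetic keeps the huge powers of ten exact.
planet_ops_per_sec = 10**42   # assumed capacity of a planet-mass computer
history_sim_ops = 10**36      # upper estimate for all of human history

millionth = 10**6             # "less than one millionth of its processing power"
seconds = 1                   # "...for one second"

ops_available = planet_ops_per_sec // millionth * seconds
print(ops_available >= history_sim_ops)  # prints True
```

So under that assumed capacity, even the high-end estimate of the whole of human history fits in a millionth of the machine's power for a single second, consistent with Bostrom's claim.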
What could counter such a proposal? Bostrom considers in his paper the possibility that humanity will destroy itself, or be destroyed by an outside event like a giant meteor, before it reaches this posthuman stage. There are many ways in which humanity could remain stuck at a primitive stage, never able to create the hypothetical computers needed to simulate entire minds. He even allows for the possibility of our civilization going extinct courtesy of human-created self-replicating nanorobots that turn into "mechanical bacteria".
Another point against us living in a simulation would be that future posthumans might not care to run such programs, or might not be allowed to. Why do it? What's the upside of creating "ancestor simulations"? Bostrom thinks it's unlikely that the practice would be so widely deemed immoral that it would be banned everywhere. And knowing human nature, it's unlikely that no one in the future would find such a project interesting. This is the kind of thing we would do today if we could, and chances are we will still want to do it in the far distant future.
"Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor‐simulation," writes Bostrom.
A fascinating outcome of all this speculation is that we have no way of knowing what the true nature of existence really is. Our minds are likely accessing just a small fraction of the "totality of physical existence." What we think of as ourselves may be running on virtual machines that run on other virtual machines - a nesting doll of simulations, making it nearly impossible to see through to the true nature of things. Even the posthumans simulating us could themselves be simulated. There could be many levels of reality, concludes Bostrom, and even future humans might never know whether they are at the "fundamental" or "basement" level.
Interestingly, this uncertainty gives rise to a kind of universal ethics: if you don't know whether you are the original, you'd better behave, or the godlike beings above you might intervene.
What are the other implications of this line of reasoning? OK, let's assume we are living in a simulation - now what? Bostrom doesn't think our behavior should change much, even with such heavy knowledge, especially as we don't know the true motivations of the future humans who created the simulated minds. They might have entirely different value systems.
You can take the plunge and read the full paper by Nick Bostrom for yourself here.
Check out Nick Bostrom’s TED talk on superintelligence:
Philosopher and cognitive scientist David Chalmers warns about an AI-dominated future world without consciousness at a recent conference on artificial intelligence that also included Elon Musk, Ray Kurzweil, Sam Harris, Demis Hassabis and others.
Recently, a conference on artificial intelligence, tantalizingly titled “Superintelligence: Science or Fiction?”, was hosted by the Future of Life Institute, which works to promote “optimistic visions of the future”.
The conference offered a range of opinions on the subject from a variety of experts, including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conversation centered on the future benefits and risks of artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, with all the risks and transformations that entails. And Elon Musk, for one, thinks it’s rather pointless to worry, as we are already cyborgs, considering all the technological extensions of ourselves that we rely on daily.
A worry for Australian philosopher and cognitive scientist David Chalmers is the prospect of creating a world devoid of consciousness. He notes that discussions of future superintelligence often presume that AIs will eventually become conscious. But what if that sci-fi possibility - fully artificial humans - never comes to fruition? We could instead be creating a world endowed with artificial intelligence but no actual consciousness.
David Chalmers speaking. Credit: Future of Life Institute.
Here’s how Chalmers describes this vision (starting at 22:27 in the YouTube video below):
“For me, that raises the possibility of a massive failure mode in the future, the possibility that we create human or superhuman level AGI and we've got a whole world populated by superhuman level AGIs, none of whom is conscious. And that would be a world, could potentially be a world of great intelligence, no consciousness, no subjective experience at all. Now, I think many, many people, with a wide variety of views, take the view that basically subjective experience or consciousness is required in order to have any meaning or value in your life at all. So therefore, a world without consciousness could not possibly be a positive outcome. Maybe it wouldn't be a terribly negative outcome, it would just be a zero outcome, and among the worst possible outcomes.”
Chalmers is known for his work on the philosophy of mind and has delved particularly into the nature of consciousness. He famously formulated the idea of a “hard problem of consciousness”, which he describes in his 1995 paper “Facing Up to the Problem of Consciousness” as the question of “why does the feeling which accompanies awareness of sensory information exist at all?”
His solution to this issue of an AI-run world without consciousness? Create a world of AIs with human-like consciousness:
“I mean, one thing we ought to at least consider doing there is making - given that we don't understand consciousness, we don't have a complete theory of consciousness - maybe we can be most confident about consciousness when it's similar to the case that we know about the best, namely human consciousness... So therefore, maybe there is an imperative to create human-like AGI, in order that we can be maximally confident that there is going to be consciousness,” says Chalmers (starting at 23:51).
By making it our explicit goal to fully recreate ourselves, in all of our human characteristics, we may be able to avoid a destiny as a soulless world of machines. A warning, and an objective worth considering while we can. Yet, by Chalmers’s own admission, since we don’t understand consciousness, this may be a goal doomed to failure.
Please check out the excellent conference in full here:
Robots ready to produce the new Mini Cooper are pictured during a tour of the BMW's plant at Cowley in Oxford, central England, on November 18, 2013. (Photo credit: ANDREW COWIE/AFP/Getty Images)
We cannot rule out the possibility that a superintelligence will do some very bad things, says AGI expert Ben Goertzel. But we can't stop the research now – even if we wanted to.
Let’s just go ahead and address the question on everyone’s mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in 'the singularity', or will it look at our collective track record of harming our own species, other species and the world that gave us life, and exterminate us like pests?
AI expert Ben Goertzel believes we’ve been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense our eventual demise can be traced all the way back to the day an ancient human learned how to make fire. Progress helps us, until the day it kills us.
That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it’s too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil and Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched onto the idea. "The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand," says Goertzel, and for better or worse, "that’s what we’re going to keep on doing."
Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
A recent conference on the future of artificial intelligence features visionary debate between Elon Musk, Ray Kurzweil, Sam Harris, Nick Bostrom, David Chalmers, Jaan Tallinn and others.
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future”. The conference, “Superintelligence: Science or Fiction?”, included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with the risks and transformations that such a seismic event would entail.
Elon Musk has not always been an optimistic voice on AI, having warned of its dangers to humanity. But here he sounds more muted about the threat. He sees the AI future as inevitable, with dangers to be mitigated through government regulations, even though he finds the idea of them “a bit of a buzzkill”.
He also brings up an interesting perspective that our fears of the technological changes the future will bring are largely irrelevant. According to Musk, we are already cyborgs by utilizing “machine extensions” of ourselves like phones and computers.
“By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an article of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn’t exist, not that long ago. So everyone is already superhuman, and a cyborg,” says Musk [at 33:56].
He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.
“I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that’s more fully symbiotic with the rest of us. We’ve got the cortex and the limbic system, which seem to work together pretty well - they’ve got good bandwidth, whereas the bandwidth to the additional tertiary layer is weak,” explained Musk [at 35:05].
Once we solve that issue, AI will spread everywhere. It’s important to do so because, according to Musk, if only a small group had such capabilities, they would become “dictators” with “dominion over Earth”.
What would a world filled with such cyborgs look like? Visions of Star Trek’s Borg come to mind.
Musk thinks it will be a society full of equals:
“And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today,” points out Musk [at 36:38].
The whole conference is immensely fascinating and worth watching in full. Check it out here: