The simulation hypothesis is fun to talk about, but believing it requires an act of faith.
- The simulation hypothesis posits that everything we experience was coded by an intelligent being, and we are part of that computer code.
- But we cannot accurately reproduce natural laws with computer simulations.
- Faith is fine, but science requires evidence and logic.
[Note: The following is a transcript of the video embedded at the bottom of this article.]
I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level. Unfortunately, the idea is unscientific. But why do some people believe in the simulation hypothesis? And just exactly what's the problem with it? That's what we'll talk about today.
According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation in and by itself is not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it's not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.
The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Bostrom, for example, claims it's likely that we live in a computer simulation based on an argument that, in a nutshell, goes like this: if there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.
Elon Musk is among those who have bought into it. He too has said "it's most likely we're in a simulation." And even Neil deGrasse Tyson gave the simulation hypothesis "better than 50-50 odds" of being correct.
Maybe you're now rolling your eyes because, come on, let the nerds have some fun, right? And, sure, some part of this conversation is just intellectual entertainment. But I don't think popularizing the simulation hypothesis is entirely innocent fun. It's mixing science with religion, which is generally a bad idea, and, really, I think we have better things to worry about than that someone might pull the plug on us. I dare you!
But before I explain why the simulation hypothesis is not a scientific argument, I have a general comment about the difference between religion and science. Take an example from Christian faith, like Jesus healing the blind and lame. It's a religious story, but not because it's impossible to heal blind and lame people. One day we might well be able to do that. It's a religious story because it doesn't explain how the healing supposedly happens. The whole point is that the believers take it on faith. In science, in contrast, we require explanations for how something works.
Let us then have a look at Bostrom's argument. Here it is again: if there are many civilizations that run many simulations of conscious beings, then you are likely to be simulated.
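The counting step of this argument is just arithmetic, and a few lines of code make it concrete. The numbers below are purely hypothetical assumptions chosen for illustration, not figures from the argument itself:

```python
# Illustrative back-of-the-envelope version of the counting argument.
# Every number here is a hypothetical assumption.

real_civilizations = 1000      # assumed number of civilizations
simulations_per_civ = 100      # assumed simulations each one runs
beings_per_world = 10**9       # assumed conscious beings per world

real_beings = real_civilizations * beings_per_world
simulated_beings = real_civilizations * simulations_per_civ * beings_per_world

# If you are equally likely to be any conscious being, the chance that
# you are simulated is the simulated fraction of all beings.
p_simulated = simulated_beings / (real_beings + simulated_beings)
print(f"P(simulated) = {p_simulated:.4f}")  # 100/101, about 0.99
```

As long as each civilization runs many simulations, the simulated beings vastly outnumber the real ones, which is the whole force of the argument; the criticism below targets the premises, not this arithmetic.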
First of all, it could be that one or both of the premises are wrong. Maybe there aren't any other civilizations, or they aren't interested in simulations. That wouldn't make the argument itself wrong, of course; it would just mean that the conclusion can't be drawn. But I will leave aside the possibility that one of the premises is wrong, because I don't think we have good evidence either way.
The point I have seen people criticize most frequently about Bostrom's argument is that he just assumes it is possible to simulate human-like consciousness. We don't actually know that this is possible. However, it is assuming that it is not possible that would require explanation. That's because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information, and it doesn't much matter what physical substrate does the processing. It could be neurons, or it could be transistors, or it could be transistors believing they are neurons. So I don't think simulating consciousness is the problematic part.
The problematic part of Bostrom's argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but a different, underlying algorithm that the programmer is running. I don't think that's what Bostrom meant to do, but it's what he did. He implicitly claimed that it's easy to reproduce the foundations of physics with something else.
But nobody presently knows how to reproduce General Relativity and the Standard Model of particle physics from a computer algorithm running on some sort of machine. You can approximate the laws that we know with a computer simulation – we do this all the time – but if that were how nature actually worked, we could see the difference. Indeed, physicists have looked for signs that natural laws really proceed step by step, like a computer code, but their search has come up empty-handed. It's possible to tell the difference because attempts to algorithmically reproduce natural laws are usually incompatible with the symmetries of Einstein's theories of Special and General Relativity. I'll leave you a reference in the info below the video. The bottom line is it's not easy to outdo Einstein.
It also doesn't help, by the way, if you assume that the simulation would run on a quantum computer. Quantum computers, as I have explained earlier, are special purpose machines. Nobody currently knows how to put General Relativity on a quantum computer.
A second issue with Bostrom's argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Bostrom therefore has to assume that it's somehow possible to neglect the details in parts of the world where no one is currently looking, and to fill them in only when someone looks.
Again though, he doesn't explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intentions, and then quickly fill in the required information without ever producing an observable inconsistency? That's a much more difficult problem than Bostrom seems to appreciate. You cannot, in general, just throw away physical processes at short distances and still get the long distances right.
Climate models are an excellent example. We don't currently have the computational capacity to resolve distances below something like 10 kilometers or so. But you can't just throw away all the physics below this scale. This is a non-linear system, so the information from the short scales propagates up into large scales. If you can't compute the short-distance physics, you have to suitably replace it with something. Getting this right even approximately is a big headache. And the only reason climate scientists do get it approximately right is that they have observations which they can use to check whether their approximations work. If you only have a simulation, like the programmer in the simulation hypothesis, you can't do that.
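The point that small-scale information feeds back into large scales is a generic feature of nonlinear systems, and a toy example makes it concrete. The logistic map below is a standard chaotic system, not a climate model; it only illustrates why discarding tiny details quickly changes the large-scale answer:

```python
# Toy illustration of nonlinearity (not a climate model): two states that
# differ by one part in a million end up in completely different places.
# The logistic map x -> r*x*(1-x) with r = 4 is a standard chaotic example.

r = 4.0
x, y = 0.300000, 0.300001  # a "small-scale" difference of 1e-6

max_gap = 0.0
for step in range(30):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial gap grows by several orders of magnitude within a few
# dozen steps, so the details cannot simply be thrown away and restored later.
print(max_gap)
```

In a chaotic system like this, errors roughly double each step, which is why approximating the small scales without observational checks is so hard.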
And that's my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don't explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn't a serious scientific argument. This doesn't mean it's wrong, but it means you'd have to believe it because you have faith, not because you have logic on your side.
The Simulation Hypothesis is Pseudoscience
Republished with permission of Dr. Sabine Hossenfelder. The original article is here.
A philosopher's guide to detecting nonsense and getting around it.
- A professor in Sweden has a bold idea about what BS, pseudoscience, and pseudophilosophy actually are.
- He suggests they are defined by a lack of "epistemic conscientiousness" rather than merely being false.
- He offers suggestions on how to avoid producing nonsense and how to identify it on sight.
There is a lot of BS going around these days. Fake cures for disease are being passed off by unscrupulous hacks, the idea that the world is flat has a shocking amount of sincere support online, and plenty of people like to suggest there isn't a scientific consensus on climate change. It can be hard to keep track of it all.
Even worse, it can be difficult to easily define all of it in a way that lets people know what they're encountering is nonsense right away. Luckily for us, Dr. Victor Moberger recently published an essay in Theoria on what counts as bullsh*t, how it interacts with pseudoscience and pseudophilosophy, and what to do about it.
The Unified Theory of B.S.
The essay "Bullshit, Pseudoscience and Pseudophilosophy" considers much of the nonsense we encounter and offers a definition that allows us to move forward in dealing with it.
Dr. Moberger argues that what makes something bullshit is a "lack of epistemic conscientiousness," meaning that the person arguing for it takes no care to assure the truth of their statements. This typically manifests in systemic errors in reasoning and the frequent use of logical fallacies such as ad hominem, red herring, false dilemma, and cherry picking, among others.
This makes bullsh*t different from lying, which involves caring what the truth is and purposely moving away from it, and from mere indifference to truth, since it is quite possible for people pushing nonsense to care about their nonsense being true. It also differs from making the occasional mistake in reasoning: occasional errors are not the same as a systematic reliance on them.
Importantly, nonsense is defined by the epistemic unconscientiousness of the person pushing it rather than by its content alone. This means some of it may turn out to be true (consider cases where a person's personality does match their star sign), but it is true for reasons unrelated to the bad reasoning its advocates used.
Two subcategories: pseudoscience and pseudophilosophy
Two commonly encountered kinds of bullsh*t are pseudoscience and pseudophilosophy. They can be defined simply as "bullshit with scientific pretensions" and "bullshit with philosophical pretensions." Here are a few examples that clarify exactly what these terms mean.
A form of pseudoscience would be flat-Earthism. While it takes on scientific pretensions and can be, and has been, proven false, supporters of the idea that the Earth is flat are well known for handwaving away any evidence that falsifies their stance and dismissing good arguments against their worldview.
An amusing and illustrative example is the case of the flat-Earthers who devised two experiments to determine if the Earth was flat or spherical. When their experiments produced results exactly consistent with the Earth being spherical, they refused to accept the results and concluded that something had gone wrong, despite having no reason to do so. Clearly, these fellows lack epistemic conscientiousness.
Pseudophilosophy is less frequently considered, but can be explained with examples of its two most popular forms.
The first is dubbed "obscurantist pseudophilosophy." It often takes the form of nonsense posing as philosophy using copious amounts of jargon and arcane, frequently erroneous reasoning connecting a mundane truth to an exciting, fantastic falsehood.
As an example, there are more than a few cases where people have argued that physical reality is a social construct. This idea is based on the perhaps trivial notion that our beliefs about reality are social constructs. Often in cases like this, when challenged on the former point, advocates of the more fantastic point will retreat to the latter, as it is less controversial, and claim the issue was one of linguistic confusion caused by their obscure terminology. When the coast is clear, they frequently return to the original stance.
Dr. Moberger suggests that the humanities and social sciences seem to have a weakness for these seemingly profound pseudophilosophies without being nonsensical fields themselves.
The second is "scientistic pseudophilosophy," often seen in popular science writing. It frequently arises when the questions taken up in science writing are really questions of philosophy rather than science. Because science writers are often not trained in philosophy, they may produce pseudophilosophy when engaging with these questions.
A famous example is Sam Harris' attempt at reducing the problems of moral philosophy to scientific problems. His book "The Moral Landscape" is infamously littered with strawman arguments, a failure to interact with relevant philosophical literature, and bad philosophy in general.
In all of these cases, we see supporters of some kind of nonsense who think that what they are supporting is true, yet are willing to ignore the basic rules of scientific and philosophical reasoning in order to support it.
Okay, so there is plenty of nonsense in the world. What do we do about it?
While the first step to dealing with this nonsense is to understand what it is, many people would like to go a little farther than that.
Dr. Moberger explained that sometimes, the best thing we can do is show a little humility:
"One of the main points of the essay is that there is no sharp boundary between bullshit and non-bullshit. Pseudoscience, pseudophilosophy and other kinds of bullshit are very much continuous with the kind of epistemic irresponsibility or unconscientiousness that we all display in our daily lives. We all have biases and we all dislike cognitive dissonance, and so without realizing it we cherry-pick evidence and use various kinds of fallacious reasoning. This tendency is especially strong when it comes to emotionally sensitive areas, such as politics, where we may have built part of our sense of identity and worth around a particular stance. Well-educated, smart people are no exception. In fact, they are sometimes worse, since they are more adept at using sophistry to rationalize their biases. Thus, the first thing to realize, I think, is that all of us are prone to produce bullshit and that it is much easier to spot other people's bullshit than our own. Intellectual humility is first and foremost. To me it does not come naturally and I struggle with it all the time."
He also advises that people take the time to develop their critical thinking skills:
"I think it is also very helpful to develop the kind of critical thinking skills that are taught to undergraduates in philosophy. The best book I know of in the genre is Richard Feldman's 'Reason and Argument.' It provides the basic conceptual tools necessary for thinking clearly about philosophical issues, but those tools are certainly useful outside of philosophy too."
Lastly, he reminds us that looking at the facts of the matter can clear things up:
"Finally, no degree of intellectual humility or critical thinking skills is a substitute for gathering relevant information about the issue at hand. And this is where empirical science comes in. If we want to think rationally about any broadly speaking empirical issue, we need to inform ourselves about what empirical science has to say about it. We also need to remember that individual scientists are often unreliable and that scientific consensus is what we should look for. (Indeed, it is a common theme in pseudoscience to appeal to individual scientists whose views do not reflect scientific consensus.)"
A great deal of the pseudoscience and pseudophilosophy we encounter is characterized not by being false or even unfalsifiable, but by a lack of concern, on the part of the person pushing it, for making sure it is true. Oftentimes it is presented with fairly common logical fallacies and bold rejections of the scientific consensus.
While having this definition doesn't remove bullshit from the world, it might help you avoid stepping in it. In the end, isn't that what matters?
Homeopathic manufacturers take advantage of sick and vulnerable populations in criminal ways—and the FDA, after a long absence, is starting to crack down.
Samuel Hahnemann discovered cinchona, Peruvian bark, while translating Scottish physician William Cullen’s book on malaria. The German doctor left a career in medicine because he objected to practices like bloodletting, which he considered ineffectual and barbaric. Upon leaving the establishment Hahnemann supported his family by translating medical textbooks—a linguist, he spoke nine languages. Cullen’s 1789 book, A Treatise on the Materia Medica, set off a light bulb that would forever change the trajectory of Hahnemann’s career.
Inspired by the possibilities of a less dangerous approach to healing, Hahnemann slathered cinchona over his body to induce malaria-like symptoms. It remains unproven that he really developed malaria; an inflammatory reaction is possible. Undaunted, he reasoned that it would be the same for any healthy individual. This experience became the foundation of homeopathy. The idea that “like cures like” was not new, but by being its champion Hahnemann created what would become a multi-billion dollar industry.
Samuel Hahnemann, the father of homeopathy. (Getty Images)
Hahnemann’s method, potentization, remains the basis of homeopathy today. To understand this process, let’s consider the most popular homeopathic flu remedy on the shelves, Oscillococcinum. One of France’s top-selling medicines, it rakes in $20 million a year in America. Yet few people know what it actually is.
Oscillococcinum is based on French physician Joseph Roy’s 1917 discovery of an oscillating bacterium he termed Oscillococcus in the blood of flu victims. Roy speculated that this bacterium was responsible for a host of diseases, ranging from eczema to cancer. Years later he discovered the same bacterium in the blood of a Long Island duckling. Never mind that no one else ever verified its existence; it was likely dust on the microscope slide.
Roy had a vision and stuck with it, evidence be damned. Today the process of potentization to manufacture Oscillococcinum begins with the heart and liver of the Muscovy duck. Technicians mix one part duck heart and liver with one hundred parts sugar in water. Then the process is repeated two hundred times. Lactose is continually added along the way; that one part of duck is the only active ingredient ever blended in.
Dilutions—a proving—for various remedies range in intensity. A homeopathic prescription claiming a potency of 6x means there's one part active ingredient per million parts of sugar. By the time you get to 6c there is one part in a trillion. By 13c no parts remain. A typical homeopathic medicine is 30c. Former U.S. Air Force flight surgeon and family physician Harriet Hall points out that at this level you'd need a container thirty times the size of the Earth just to find one molecule of the initial substance. Oscillococcinum is 200c—pure sugar water.
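The dilution arithmetic is easy to verify. A short sketch (the 1:10 ratio per "x" step and 1:100 per "c" step follow the convention described above; starting from a full mole of active ingredient is an illustrative assumption):

```python
# Serial-dilution arithmetic for homeopathic potencies.
# "x" potencies dilute 1:10 per step, "c" potencies 1:100 per step.

AVOGADRO = 6.022e23  # molecules in one mole

def dilution_factor(steps: int, ratio: int) -> int:
    """Total dilution after `steps` serial dilutions of 1:`ratio` each."""
    return ratio ** steps

print(dilution_factor(6, 10))     # 6x: 10**6, one part per million
print(dilution_factor(6, 100))    # 6c: 10**12, one part per trillion
print(dilution_factor(200, 100))  # 200c (Oscillococcinum): 10**400

# Even starting from a full mole of active ingredient, the expected number
# of molecules left per part at 30c is vanishingly small:
print(AVOGADRO / dilution_factor(30, 100))  # about 6e-37
```

Note that 13c already gives a factor of 10**26, which exceeds Avogadro's number, matching the claim that by 13c no molecules of the original substance remain.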
In homeopathy, the less of an active ingredient a remedy contains, the stronger it is claimed to be. Flouting the basic precepts of biology, most homeopathic remedies don't merely work because of placebo; they are placebo. Despite the fact that this knowledge hides in plain sight, the FDA has continued to turn a blind eye, to the point where the FTC stepped in last year to require stricter regulation of the claims made on homeopathic packaging—products that pull in nearly $3 billion per year in America.
Until now, that is. The FDA announced that, in a rare recent instance of consumer protection, it's cracking down on marketing claims by manufacturers of homeopathic products. Homeopathy has long enjoyed a lack of surveillance, even though these products are legally subject to the same approval and misbranding requirements as pharmaceuticals.
The problem with current regulation is twofold. First, claiming sugar water can cure colds or any other ailment is irresponsible. The second is more dangerous: some products do contain active ingredients—ironic, since in homeopathic lore more active ingredient means a weaker remedy—which have not been clinically tested.
As FDA commissioner Scott Gottlieb puts it:
In recent years, we’ve seen a large uptick in products labeled as homeopathic that are being marketed for a wide array of diseases and conditions, from the common cold to cancer. In many cases, people may be placing their trust and money in therapies that may bring little to no benefit in combating serious ailments, or worse – that may cause significant and even irreparable harm because the products are poorly manufactured, or contain active ingredients that aren’t adequately tested or disclosed to patients.
Most homeopathic products will come through this unscathed, however. The FDA is targeting products with reported safety concerns, products containing potentially dangerous ingredients, products administered in ways other than oral or topical, products marketed for the prevention of life-threatening diseases or aimed at vulnerable populations, and products not meeting the standards of quality, strength, or purity required under the law.
FDA Commissioner Scott Gottlieb. (Getty Images)
This could lead to serious problems, says Timothy Caulfield, the Canada Research Chair in Health Law and Policy at the University of Alberta and author of Is Gwyneth Paltrow Wrong About Everything? While he’s happy the FDA is finally regulating homeopathy, he says that by overlooking clinically benign products it is inadvertently endorsing their efficacy. As he told me:
Due to resource constraints, they are prioritizing the high risk products. I understand the motivation for this policy. But I fear they are implicitly endorsing the low risk stuff—which is likely most of the homeopathic products. What is needed is a clear, blanket policy that is built on the reality that homeopathy is science-free nonsense and that any claims of efficacy—at least beyond a placebo effect—are not founded on good evidence.
The rigors and constraints pharmaceuticals have to endure to get to market are for our protection, even if the pharmaceutical industry itself has been devious when extending patents and marketing ineffective products. Homeopathic medicine today in large part thrives due to consumer concern about Big Pharma—homeopathy was nearly decimated until magical thinking made a huge comeback in the 1960s. The response to growing corporate and governmental power created a mindset that anything coming from those camps must be tainted.
That we search for solutions beyond pharmaceuticals is understandable. Yet homeopathic manufacturers are taking advantage of us in similarly criminal ways. We’re literally being sold sugar water to coat the bitter reality of the vitamin aisle’s most ineffective products. As with the rest of the sugar in the market, we gladly pay for it, ignorant of the damage it causes.
Derek Beres is the author of Whole Motion: Training Your Brain and Body For Optimal Health. Based in Los Angeles, he is working on a new book about spiritual consumerism. Stay in touch on Facebook and Twitter.
Is misinformation causing outbreaks of diseases long thought curable? A recent study found that just a simple "heads up" about fake news can help save thousands of lives.
It is easier to fool a person than it is to convince a person that they’ve been fooled. This is one of the great curses of humanity.
Given the incredible amount of information we process each day, it is difficult for any of us to critically analyze all of it. This is made even more difficult by the natural tendency to be overly critical of any information that threatens our worldview and under-critical of information that supports it.
The menace of misinformation can plague a society with grave consequences. For instance, the failure of people to understand that HIV causes AIDS killed an estimated 300,000 people in South Africa at the turn of the millennium. The state of Minnesota is battling a measles outbreak caused by anti-vaccination propaganda. And discussion over the effects of misinformation on recent elections in Austria, Germany, and the United States is still ongoing.
If only we had a way to prevent our seduction by misinformation. A vaccine of some kind perhaps…
A recent set of experiments shows us that there is a way to help reduce the effects of misinformation on people: the authors amusingly call it the “inoculation.”
In two experiments, groups of test subjects were exposed to misinformation after having been exposed to an “inoculation”. This inoculation was given in the form of either a warning of future misinformation or a review of why the misinformation they were about to read was a fallacy, with an additional group being given both. The control group was merely given misinformation.
The tests showed that this “pre-bunking” was extremely effective. While members of the control group saw a significant decrease in acceptance of the scientific consensus on climate change, members of all other groups saw minor drops at worst, which even then were heavily influenced by their pre-existing worldviews.
The most effective of these methods was an explanation of how the misinformation would be presented and how it would attempt to mislead them. This method was effective not only at slowing the pace of false information taking hold, but also worked across all worldviews and even reduced the polarization of all test subjects.
So, we can help prevent rampant misinformation now? Where do I sign up?
Research into how this works is still ongoing, though it is well known that suspicious people are less likely to be taken in by fraudulent action. The researchers also pointed out that several studies support the notion that teaching about misconceptions leads to greater learning overall than just telling somebody the truth. While the topic used in the study was climate change consensus, the researchers saw no reason to suppose these methods function differently with other subjects.
We all know somebody who has been taken in by a bad argument or by information they wanted to believe is true. Sometimes we are the ones being fooled. A method to help prevent being taken in by bad arguments and false narratives could be a powerful tool for educating ourselves and others.
The full study can be read here.
There is censorship in science, admits Bill Nye – but not nearly as much as there should be.
There are certainly some science labs and military research projects that are for classified eyes only, says Bill Nye, but in his view the more pressing issue regarding science and censorship is the proliferation of exaggerated and twisted science studies, and outright pseudoscience on the internet. It's a topic particularly relevant in the wake of heightened fake news awareness. We ordinary citizens may never crack the code of the secret projects government scientists may or may not be working on, but we can get busy educating ourselves and fine-tuning our critical thinking skills so we aren't led astray by false stories.
Bill Nye's most recent book is Unstoppable: Harnessing Science to Change the World.