- The history of AI shows boom periods (AI summers) followed by busts (AI winters).
- The cyclical nature of AI funding is driven by hype and promises that fail to meet expectations.
- This time, we might enter something resembling an AI autumn rather than an AI winter, but fundamental questions remain about whether true AI is even possible.
The dream of building a machine that can think like a human stretches back to the origins of electronic computers. But ever since research into artificial intelligence (AI) began in earnest after World War II, the field has gone through a series of boom and bust cycles called "AI summers" and "AI winters."
Each cycle begins with optimistic claims that a fully, generally intelligent machine is just a decade or so away. Funding pours in and progress seems swift. Then, a decade or so later, progress stalls and funding dries up. Over the last ten years, we've clearly been in an AI summer, as vast improvements in computing power and new techniques like deep learning have led to remarkable advances. But now, as we enter the third decade of the 21st century, some who follow AI feel the cold winds at their back, leading them to ask: Is winter coming? If so, what went wrong this time?
A brief history of AI
To see if the winds of winter are really coming for AI, it is useful to look at the field's history. The first real summer can be pegged to 1956 and the famous workshop at Dartmouth College where one of the field's pioneers, John McCarthy, coined the term "artificial intelligence." The conference was attended by scientists like Marvin Minsky and Herbert A. Simon, whose names would go on to become synonymous with the field. For those researchers, the task ahead was clear: capture the processes of human reasoning through the manipulation of symbolic systems (i.e., computer programs).
Unless we are talking about very specific tasks, any 6-year-old is infinitely more flexible and general in his or her intelligence than the "smartest" Amazon robot.
Throughout the 1960s, progress seemed to come swiftly as researchers developed computer systems that could play chess, deduce mathematical theorems, and even engage in simple discussions with a person. Government funding flowed generously. Optimism was so high that, in 1970, Minsky famously proclaimed, "In three to eight years we will have a machine with the general intelligence of a human being."
By the mid 1970s, however, it was clear that Minsky's optimism was unwarranted. Progress stalled as many of the innovations of the previous decade proved too narrow in their applicability, seeming more like toys than steps toward a general version of artificial intelligence. Funding dried up so completely that researchers soon took pains not to refer to their work as AI, as the term carried a stink that killed proposals.
The cycle repeated itself in the 1980s with the rise of expert systems and the renewed interest in what we now call neural networks (i.e., programs based on connectivity architectures that mimic neurons in the brain). Once again, there was wild optimism and big increases in funding. What was novel in this cycle was the addition of significant private funding as more companies began to rely on computers as essential components of their business. But, once again, the big promises were never realized, and funding dried up again.
AI: Hype vs. reality
The AI summer we're currently experiencing began sometime in the first decade of the new millennium. Vast increases in both computing speed and storage ushered in the era of deep learning and big data. Deep learning methods use stacked layers of neural networks that pass information to each other to solve complex problems like facial recognition. Big data provides these systems with vast oceans of examples (like images of faces) to train on. The applications of this progress are all around us: Google Maps gives you near-perfect directions; you can talk with Siri anytime you want; IBM's Watson beat Jeopardy!'s greatest human champions.
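The "stacked layers" idea can be sketched in a few lines of plain Python. This is a toy forward pass only: the weights are random rather than learned, and the sizes are arbitrary, but it shows how each layer's output becomes the next layer's input.

```python
import random

def relu(xs):
    # rectified linear unit: zero out negative activations
    return [max(0.0, x) for x in xs]

def layer(xs, weights, biases):
    # one fully connected layer: each output is a weighted sum plus a bias
    return [sum(x * w for x, w in zip(xs, ws)) + b
            for ws, b in zip(weights, biases)]

def forward(xs, net):
    # "stacked" layers: each layer's output feeds the next layer's input
    for weights, biases in net:
        xs = relu(layer(xs, weights, biases))
    return xs

def random_layer(n_in, n_out, rng):
    # untrained stand-in weights; a real system would learn these from data
    return ([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
# a tiny three-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
net = [random_layer(4, 8, rng), random_layer(8, 8, rng), random_layer(8, 2, rng)]
print(forward([0.5, -1.0, 2.0, 0.1], net))
```

Real deep learning systems differ mainly in scale (millions of units, not a handful) and in the training procedure that sets the weights, which this sketch omits.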
In response, the hype rose again. True AI, we were told, must be just around the corner. In 2015, for example, The Guardian reported that self-driving cars, the killer app of modern AI, were close at hand. Readers were told, "By 2020 you will become a permanent backseat driver." And just two years ago, Elon Musk claimed that by 2020 "we'd have over a million cars with full self-driving software."
By now, it's obvious that a world of fully self-driving cars is still years away. Likewise, in spite of the remarkable progress we've made in machine learning, we're still far from creating systems that possess general intelligence. The emphasis is on the term general because that's what AI really has been promising all these years: a machine that's flexible in dealing with any situation as it comes up. Instead, what researchers have found is that, despite all their remarkable progress, the systems they've built remain brittle, which is a technical term meaning "they do very wrong things when given unexpected inputs." Try asking Siri to find "restaurants that aren't McDonald's." You won't like the results.
Even more important is the sense that, as remarkable as they are, none of the systems we've built understands anything about what it is doing. As philosopher Alva Noë said of Watson's famous Jeopardy! victory, "Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson." Considering this fact, some researchers claim that the general intelligence — i.e., the understanding — we humans exhibit may be inseparable from our experiencing. If that's true, then our physical embodiment, enmeshed in a context-rich world, may be difficult if not impossible to capture in symbolic processing systems.
Not the (AI) winter of our discontent
Thus, talk of a new AI winter is popping up again. Given the importance of deep learning and big data in technology, it's hard to imagine funding for these domains drying up any time soon. What we may be seeing, however, is a kind of AI autumn, in which researchers wisely recalibrate their expectations and perhaps rethink their perspectives.
- The Chinese Room thought experiment is designed to show how understanding something cannot be reduced to an "input-process-output" model.
- Artificial intelligence today is becoming increasingly sophisticated thanks to learning algorithms but still fails to demonstrate true understanding.
- We all rely on rote, computational habits when first learning a new skill, until this somehow becomes understanding.
It's your first day at work, and a new colleague, Kendall, catches you over coffee.
"You watch the game last night?" she says. You're desperate to make friends, but you hate football.
"Sure, I can't believe that result," you say, vaguely, and it works. She nods happily and talks at you for a while. Every day after that, you live a lie. You listen to a football podcast on the weekend and then regurgitate whatever it is you hear. You have no idea what you're saying, but it seems to impress Kendall. You somehow manage to come across as an expert, and soon she won't stop talking football with you.
The question is: do you actually know about football, or are you imitating knowledge? And what's the difference? Welcome to philosopher John Searle's "Chinese Room."
The Chinese Room
Searle's argument was designed as a critique of what's called a "functionalist" view of mind. This is the philosophy that argues that our mind can be explained fully by what role it plays, or in other words, what it does or what "function" it has.
One form of functionalism sees the human mind as following an "input-process-output" model. We have the input of our senses, the process of our brains, and a behavioral output. Searle thought this was at best an oversimplification, and his Chinese Room thought experiment aims to show that human minds are not simply biological computers. It goes like this:
Imagine a room, and inside is John, who can't speak a word of Chinese. Outside the room, a Chinese person sends a message into the room in Chinese. Luckily, John has an "if-then" book for Chinese characters. For instance, if he gets <你好吗>, the proper reply is <我还好>. All John has to do is follow his instruction book.
The Chinese speaker outside of the room thinks they're talking to someone inside who knows Chinese. But in reality, it's just John with his fancy book.
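John's rule book is, in effect, a lookup table. A toy sketch in Python makes the point concrete; the first phrase pair comes from the thought experiment above, while the second pair and the fallback reply are invented for illustration:

```python
# John's "if-then" book as a lookup table. The first pair is from the
# thought experiment; the second pair and the fallback are made up.
RULE_BOOK = {
    "你好吗": "我还好",        # "How are you?" -> "I'm fine"
    "你会说中文吗": "当然会",  # "Do you speak Chinese?" -> "Of course"
}
FALLBACK = "请再说一遍"        # "Please say that again"

def chinese_room(message: str) -> str:
    # Input -> process (a table lookup) -> output. No understanding anywhere.
    return RULE_BOOK.get(message, FALLBACK)

print(chinese_room("你好吗"))  # 我还好
```

Nothing in the function "knows" Chinese; it only matches symbols to symbols, which is exactly Searle's point.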
What is understanding?
Does John understand Chinese? The Chinese Room is, by all accounts, a computational view of the mind, yet it seems that something is missing. Truly understanding something is not an "if-then" automated response. John is missing that sinking-in feeling, the absorption, the bit of understanding that's so hard to express. Understanding a language doesn't work like this. Humans are not Google Translate.
And yet, this is how AIs are programmed. A computer system is programmed to provide a certain output based on a finite list of certain inputs. If I double click the mouse, I open a file. If you type a letter, your monitor displays tiny black squiggles. If we press the right buttons in order, we win at Mario Kart. Input — Process — Output.
But AIs don't know what they're doing, and Google Translate doesn't really understand what it's saying, does it? They're just following a programmer's orders. If I say, "Will it rain tomorrow?" Siri can look up the weather. But if I ask, "Will water fall from the clouds tomorrow?" it'll be stumped. A human would not be (although they might look at you oddly).
A fun way to test just how little an AI understands us is to ask your maps app to find "restaurants that aren't McDonald's." Unsurprisingly, you won't get what you want.
The Future of AI
To be fair, the field of artificial intelligence is just getting started. Yes, it's easy right now to trick our voice assistant apps, and search engines can be frustratingly unhelpful at times. But that doesn't mean AI will always be like that. It might be that the problem is only one of complexity and sophistication, rather than anything else. It might be that the "if-then" rule book just needs work. Things like "the McDonald's test" or AI's inability to respond to original questions reveal only a limitation in programming. Given that language and the list of possible questions are finite, it's quite possible that AI will be able to (at the very least) perfectly mimic a human response in the not-too-distant future.
What's more, AIs today have increasingly advanced learning capabilities. Algorithms are no longer simply input-process-output but rather allow systems to search for information and adapt anew to what they receive.
A notorious example of this occurred when Microsoft's chatbot Tay started spouting bigotry and racism after "learning" from what it read on Twitter. (Although this might just say more about Twitter than about AI.) Or, more sinister perhaps, two Facebook chatbots were shut down after it was discovered that they were not only talking to each other but were doing so in an invented language. Did they understand what they were doing? Who's to say that, with enough learning and enough practice, an AI "Chinese Room" might not reach understanding?
Can imitation become understanding?
We've all been a "Chinese Room" at times — be it talking about sports at work, cramming for an exam, using a word we didn't entirely know the meaning of, or calculating math problems. We can all mimic understanding, but that raises the question: can imitation become so fluid or competent that it is understanding?
The old adage "fake it till you make it" holds up surprisingly often. If you repeat an action enough times, it becomes easy and habitual. For instance, when you practice a language, a musical instrument, or a math calculation, after a while it becomes second nature. Our brain changes with repetition.
So, it might just be that we all start off as Chinese Rooms when we learn something new, but this still leaves us with a pertinent question: when, how, and at what point does John actually understand Chinese? More importantly, will Siri or Alexa ever understand you?
Companies can identify you from your music preferences, as well as influence and profit from your behavior.
- New research discovered that you can be identified from just three song choices.
- This type of information can be exploited by streaming services through targeted advertising.
- The researchers are calling for musical preference to be considered in regulations regarding online privacy.
While the focus on music piracy dominated the media for years, an equally important (and far less discussed) phenomenon occurred during the transition from broadcast radio to streaming. People were no longer beholden to the gatekeepers known as DJs. Today, listeners have the entire history of music at their fingertips. Each person is now their own DJ.
If it's free, you are the product
Though this might appear empowering, every advancement comes at a cost. Because listeners changed how they consumed music (namely, from radio broadcasts to personalized online streams), companies had to change their monetization strategy. Now, you are the product.
When you curate a playlist, you are inadvertently sending tons of data to different companies, with Spotify, YouTube, and Apple Music leading the way. As it turns out, according to a new study from Israeli researchers — Ariel University's Dr. Ron Hirschprung and Tel Aviv University's Dr. Ori Leshman — your musical tastes reveal more about your personality than you likely ever imagined.
Musical selection is a quasi-identifier
There are different ways in which you can be identified. Identifiers, such as your social security number, are highly specific and unique to you. But then there are quasi-identifiers — things like age, gender, and occupation — that can also give away your identity. The authors claim that musical selection is a quasi-identifier, and they argue that, as with other forms of sensitive data, our playlists should be considered when constructing privacy laws.
In their paper, they write, "[T]he combination of Big-Data, together with the availability of computational power — which is notoriously known for its potential of privacy violation — introduces a privacy threat from an unexpected angle: listening to music."
To prove their point, the researchers divided undergraduate students into four groups with roughly 35 volunteers in each. Every member submitted three songs from their playlist of favorite tracks. Then, the researchers picked five members at random in each group, and the remaining volunteers were asked to vote to determine if they could match the members with their playlists.
Even to the surprise of the researchers, the participants were right between 80 percent and 100 percent of the time. Incredibly, these students did not know one another well and were not aware in advance of anyone's musical preferences.
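To see how far above chance those scores are: a voter matching five playlists to five people purely at random would average only one correct pairing in five. A quick Monte Carlo sketch (illustrative only; this is not the study's method) confirms the 20 percent baseline:

```python
import random

def random_matching_accuracy(n_people=5, trials=100_000, seed=42):
    """Average fraction of correct (person, playlist) pairs under random guessing."""
    rng = random.Random(seed)
    people = list(range(n_people))
    correct = 0
    for _ in range(trials):
        guess = people[:]
        rng.shuffle(guess)   # one voter's random assignment of playlists
        correct += sum(g == p for g, p in zip(guess, people))
    return correct / (trials * n_people)

print(random_matching_accuracy())  # ~0.2, i.e. one in five by chance
```

Against that 20 percent baseline, accuracy of 80 to 100 percent is a striking signal that playlists carry identifying information.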
There are many outward signs that mark us in the eyes of others: what we wear, what we eat, how we style our hair, our mannerisms and posture, and even where we stand at parties. Other people pick up on these subtle clues, which in turn allows them to predict our personalities. In this study, the volunteers were able to identify the musical preferences of strangers simply by observing their outward appearances.
Of course, companies notice similar things and are able to exploit what they learn about us. In a press release, the authors stated:
"Music can become a form of characterization, and even an identifier. It provides commercial companies like Google and Spotify with additional and more in-depth information about us as users of these platforms. In the digital world we live in today, these findings have far-reaching implications on privacy violations, especially since information about people can be inferred from a completely unexpected source, which is therefore lacking in protection against such violations."

Musical preference isn't the only way in which you can be identified online. For instance, your browsing history can give away your identity. Listening to your favorite tunes while searching Google for a new recipe isn't as innocuous as you might think.
Stay in touch with Derek on Twitter and Facebook. His most recent book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
The simulation hypothesis is fun to talk about, but believing it requires an act of faith.
- The simulation hypothesis posits that everything we experience was coded by an intelligent being, and we are part of that computer code.
- But we cannot accurately reproduce natural laws with computer simulations.
- Faith is fine, but science requires evidence and logic.
[Note: The following is a transcript of the video embedded at the bottom of this article.]
I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level. Unfortunately, the idea is unscientific. But why do some people believe in the simulation hypothesis? And just exactly what's the problem with it? That's what we'll talk about today.
According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation in and by itself is not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it's not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.
The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Boström, for example, claims it's likely that we live in a computer simulation based on an argument that, in a nutshell, goes like this. If there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.
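The counting at the heart of Boström's argument is simple arithmetic. Here is a toy version in Python; every number below is made up purely to show the structure of the inference:

```python
# Toy version of the counting argument; all figures are hypothetical.
civilizations = 1_000
sims_per_civilization = 1_000
minds_per_simulation = 10_000_000_000   # each sim as populous as a real civilization
real_minds_per_civilization = 10_000_000_000

simulated = civilizations * sims_per_civilization * minds_per_simulation
real = civilizations * real_minds_per_civilization

# If you are a randomly chosen mind, the odds that you are simulated:
p_simulated = simulated / (simulated + real)
print(f"{p_simulated:.2%}")
```

With these invented inputs, simulated minds outnumber real ones a thousand to one, so the probability comes out above 99 percent. The whole weight of the argument rests on the premises, not the arithmetic, which is exactly where the criticism below takes aim.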
Elon Musk is among those who have bought into it. He too has said "it's most likely we're in a simulation." And even Neil deGrasse Tyson gave the simulation hypothesis "better than 50-50 odds" of being correct.
Maybe you're now rolling your eyes because, come on, let the nerds have some fun, right? And, sure, some part of this conversation is just intellectual entertainment. But I don't think popularizing the simulation hypothesis is entirely innocent fun. It's mixing science with religion, which is generally a bad idea, and, really, I think we have better things to worry about than that someone might pull the plug on us. I dare you!
But before I explain why the simulation hypothesis is not a scientific argument, I have a general comment about the difference between religion and science. Take an example from Christian faith, like Jesus healing the blind and lame. It's a religious story, but not because it's impossible to heal blind and lame people. One day we might well be able to do that. It's a religious story because it doesn't explain how the healing supposedly happens. The whole point is that the believers take it on faith. In science, in contrast, we require explanations for how something works.
Let us then have a look at Boström's argument. Here it is again. If there are many civilizations that run many simulations of conscious beings, then you are likely to be simulated.
First of all, it could be that one or both of the premises is wrong. Maybe there aren't any other civilizations, or they aren't interested in simulations. That wouldn't make the argument wrong of course; it would just mean that the conclusion can't be drawn. But I will leave aside the possibility that one of the premises is wrong because really I don't think we have good evidence for one side or the other.
The point I have seen people criticize most frequently about Boström's argument is that he just assumes it is possible to simulate human-like consciousness. We don't actually know that this is possible. However, in this case it would require explanation to assume that it is not possible. That's because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information. It doesn't really matter exactly what physical basis this information processing is based on. Could be neurons or could be transistors, or it could be transistors believing they are neurons. So, I don't think simulating consciousness is the problematic part.
The problematic part of Boström's argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but using a different, underlying algorithm, which the programmer is running. I don't think that's what Boström meant to do, but it's what he did. He implicitly claimed that it's easy to reproduce the foundations of physics with something else.
But nobody presently knows how to reproduce General Relativity and the Standard Model of particle physics from a computer algorithm running on some sort of machine. You can approximate the laws that we know with a computer simulation – we do this all the time – but if that was how nature actually worked, we could see the difference. Indeed, physicists have looked for signs that natural laws really proceed step by step, like in a computer code, but their search has come up empty handed. It's possible to tell the difference because attempts to algorithmically reproduce natural laws are usually incompatible with the symmetries of Einstein's theories of Special and General Relativity. I'll leave you a reference in the info below the video. The bottom line is it's not easy to outdo Einstein.
It also doesn't help, by the way, if you assume that the simulation would run on a quantum computer. Quantum computers, as I have explained earlier, are special purpose machines. Nobody currently knows how to put General Relativity on a quantum computer.
A second issue with Boström's argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Boström therefore has to assume that it's somehow possible to not care much about the details in some parts of the world where no one is currently looking, and to fill them in only when someone looks.
Again though, he doesn't explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intention and then quickly fill in the required information without ever producing an observable inconsistency? That's a much more difficult issue than Boström seems to appreciate. You cannot in general just throw away physical processes on short distances and still get the long distances right.
Climate models are an excellent example. We don't currently have the computational capacity to resolve distances below something like 10 kilometers or so. But you can't just throw away all the physics below this scale. This is a non-linear system, so the information from the short scales propagates up into large scales. If you can't compute the short-distance physics, you have to suitably replace it with something. Getting this right even approximately is a big headache. And the only reason climate scientists do get it approximately right is that they have observations which they can use to check whether their approximations work. If you only have a simulation, like the programmer in the simulation hypothesis, you can't do that.
And that's my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don't explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn't a serious scientific argument. This doesn't mean it's wrong, but it means you'd have to believe it because you have faith, not because you have logic on your side.
The Simulation Hypothesis is Pseudoscience
Republished with permission of Dr. Sabine Hossenfelder. The original article is here.
Creating an afterlife—or a simulation of one—would take vast amounts of energy. Some scientists think the best way to capture that energy is by building megastructures around stars.
- In 2018, researchers Alexey Turchin and Maxim Chernyakov published a paper outlining various ways humans might someday be able to achieve immortality or resurrection.
- One way involves creating a simulated afterlife, in which artificial intelligence would build simulations of past human lives.
- Getting the necessary power for the simulation might require building a Dyson sphere, which is a theoretical megastructure that orbits a star and captures its energy.
Is there an afterlife?
Despite centuries of inquiry, nobody has definitively answered this fundamental question, and perhaps nobody ever will. So, maybe a better question is: Can humans create an afterlife?
Some scientists think so.
In 2018, Alexey Turchin and Maxim Chernyakov, both members of the Russian Transhumanist Movement, wrote a paper outlining the main ways science might someday make immortality and resurrection possible. Called the "Immortality Roadmap," the project describes the ways people might be able to extend lifespan or live forever, from using cryonics to freeze themselves, to constructing nanobots for "treatment of injuries and cell cyborgization."
But the Immortality Roadmap mentions one particularly grandiose road to immortality. Outlined in "Plan C" of the project, the idea is to create a simulation of humanity's past through artificial intelligence that's able to digitally reconstruct people.
The AI would use DNA and other information about individuals to create models of those individuals within a simulation, allowing recently deceased people to experience another chance at life — or, at least an approximation of life.
"The main idea of a resurrection-simulation is that if one takes the DNA of a past person and subjects it to the same developmental condition, as well as correcting the development based on some known outcomes, it is possible to create a model of a past person which is very close to the original," the researchers wrote.
"DNA samples of most people who lived in past 1 to 2 centuries could be extracted via global archeology. After the moment of death, the simulated person is moved into some form of the afterlife, perhaps similar to his religious expectations, where he meets his relatives."
But would that digital copy really be you, or rather a fundamentally different digital being that merely resembles you? What about the other "people" who inhabit the simulation? Would they be "real"? And would people actually want to repeat their lives over again, perhaps forever?
Of course, these are questions that Immortality Roadmap can't answer. But what's clear is that, if technology ever becomes able to create a "resurrection simulation," it's going to require vast amounts of computing power — far more than what currently exists on Earth. That's where Dyson spheres come into play.
In 1960, the theoretical physicist Freeman Dyson published a paper describing a peculiar strategy scientists could use to detect signs of alien life: look for stars encompassed by gigantic megastructures.
Why? Dyson figured that if spacefaring alien civilizations do exist, then they must have figured out a way to generate vast amounts of energy. One theoretical way aliens could do that is through harnessing the power of stars: By surrounding a star with orbiting structures that capture solar energy, a civilization could theoretically generate far more energy than they could on a planet.
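Some rough, well-known numbers show why a star is such a tempting power source. A back-of-envelope sketch (constants rounded; humanity's power use is an approximate figure):

```python
import math

# Rough, rounded constants (back-of-envelope only)
SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the sun
WORLD_POWER_W = 1.9e13        # humanity's approximate average rate of energy use
EARTH_RADIUS_M = 6.371e6
AU_M = 1.496e11               # mean Earth-sun distance

# Fraction of sunlight Earth intercepts: its disc over a full sphere at 1 AU
earth_fraction = (math.pi * EARTH_RADIUS_M**2) / (4 * math.pi * AU_M**2)

print(f"Sun's output vs. humanity's use: {SOLAR_LUMINOSITY_W / WORLD_POWER_W:.1e}x")
print(f"Fraction of sunlight Earth intercepts: {earth_fraction:.1e}")
```

The sun radiates roughly ten trillion times more power than our civilization uses, and Earth catches only about half a billionth of it, which is why a structure that captures the rest is so appealing on paper.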
That's the basic idea behind Dyson spheres. Of course, modern science is far from being able to build such a complex megastructure, and it's unclear whether it'll ever be possible.
"An actual sphere around the sun is completely impractical," Stuart Armstrong, a research fellow at Oxford University's Future of Humanity Institute who has studied megastructure concepts, told Popular Mechanics in 2020.
There are many questions about and arguments against the feasibility of Dyson spheres. Obviously, our modern engineering capabilities wouldn't enable us to build a structure that big and complex, and then transport it to the sun. And even if engineers could build an enormous sun shell, we don't have materials with enough tensile strength to hold together the structure once it's surrounding the sun.
Other potential problems: space debris colliding with the sphere, inefficiencies in transporting the energy back to Earth, and having to perform maintenance on a megastructure that's dangerously close to the sun. In short, the Dyson sphere is a very theoretical concept.
But some people think building a Dyson sphere is more feasible than it seems. In 2012, the bioethicist and transhumanist George Dvorsky published a blog post titled "How to build a Dyson sphere in five (relatively) easy steps." His strategy, in short, calls for sending autonomous robots into space, where they would:
- Get energy
- Mine Mercury
- Get materials into orbit
- Make solar collectors
- Extract energy
"The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses," Dvorsky wrote.
"We're going to have to mine materials from Mercury. Actually, we'll likely have to take the whole planet apart. The Dyson sphere will require a horrendous amount of material—so much so, in fact, that, should we want to completely envelope the sun, we are going to have to disassemble not just Mercury, but Venus, some of the outer planets, and any nearby asteroids as well."
Turchin echoed a similar idea to Popular Mechanics, acknowledging that while humans currently can't build a Dyson sphere, "nanorobots could do it."
Still, even if scientists someday manage to create a Dyson sphere that's able to power a resurrection simulation, there's a good chance many people won't take part: Surveys repeatedly show that most people would not opt to live forever if given the choice.