The future of AI lies in replicating our own neural networks

It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life.

I understand the appeal of this view because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal was to emulate the remarkable processes by which human beings understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involved associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge in a formal statement such as ‘cat > is > animal’. Such statements can be rolled up into more complex ones that allow the system to manipulate and test propositions – such as whether your average cat is as big as a horse, or likely to chase a mouse.
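
To make the idea concrete, here is a minimal sketch in Python of a symbolic knowledge base of the kind described above. The fact triples, the single inference rule and the `holds` helper are invented for illustration; they stand in for the far larger hand-built rule systems that symbolic AI actually used.

```python
# A toy symbolic knowledge base: facts are subject-relation-object
# triples, and a rule derives new facts from existing ones.
facts = {
    ("cat", "is_a", "animal"),
    ("horse", "is_a", "animal"),
    ("cat", "chases", "mouse"),
}

# One hand-written rule: anything that is an animal is a living thing.
rules = {
    ("is_a", "animal"): ("is_a", "living_thing"),
}

def holds(subject, relation, obj):
    """Test a proposition against the facts, applying one rule step."""
    if (subject, relation, obj) in facts:
        return True
    for (s, r, o) in facts:
        if s == subject and rules.get((r, o)) == (relation, obj):
            return True
    return False

print(holds("cat", "is_a", "animal"))        # True: stored directly
print(holds("cat", "is_a", "living_thing"))  # True: derived by the rule
print(holds("mouse", "is_a", "animal"))      # False: never encoded
```

Even in this toy, the brittleness is visible: a mouse is obviously an animal to any human reader, but because no one encoded that fact, the system simply cannot see it.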

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer to move around simple shapes such as blocks, cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text. 
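
As a crude illustration of this bottom-up style, the sketch below infers an association between ‘cat’ and ‘mouse’ purely from how often the words co-occur in text. The four-sentence corpus is invented and absurdly small; real systems perform the same kind of counting, in far more sophisticated forms, over billions of words.

```python
# Count how often pairs of words appear in the same sentence.
from collections import Counter
from itertools import combinations

corpus = [
    "the cat chased the mouse across the floor",
    "a mouse hid from the cat",
    "the horse grazed in the field",
    "the cat purred on the sofa",
]

co_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        co_counts[pair] += 1

# 'cat' and 'mouse' co-occur more often than 'cat' and 'horse',
# so a statistical learner infers a stronger association.
print(co_counts[("cat", "mouse")])   # 2
print(co_counts[("cat", "horse")])   # 0
```

No one told the program that cats chase mice; the association simply falls out of the statistics.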

Machine learning has produced tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
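
The feature-to-category pipeline can be sketched in a few lines. Below, a tiny 8×8 ‘image’ passes through one layer that detects simple features and a second that combines them into a single category score; the weights are random placeholders standing in for values a real system would learn from millions of labelled examples.

```python
# Pixels -> simple features -> category score, with placeholder weights.
import numpy as np

rng = np.random.default_rng(0)

pixels = rng.random(64)                       # a tiny 8x8 'image', flattened
w_features = rng.standard_normal((16, 64))    # pixels -> 16 simple features
w_category = rng.standard_normal(16)          # features -> one category score

features = np.maximum(0, w_features @ pixels)            # detect simple patterns
cat_score = 1 / (1 + np.exp(-(w_category @ features)))   # squash to a 0-1 score

print(f"'cat' score for this image: {cat_score:.3f}")
```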

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.

The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems tend to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43% of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.

Now, it’s a bit of a leap to go from smart, self-organizing cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.

I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognizing cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
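
To see what inductive transfer looks like in practice, here is a hedged sketch: a feature extractor whose weights are imagined to have been learned on a large source task is frozen and reused, and only a small output layer is fitted to the new task’s handful of examples. All names, shapes and data are invented for illustration.

```python
# Inductive transfer in miniature: reuse a frozen feature extractor,
# train only a small new head on the target task.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these weights were learned earlier on a large source task
# (e.g. general image classification).
pretrained_features = rng.standard_normal((16, 64))

def extract(x):
    """Frozen feature extractor carried over from the source task."""
    return np.maximum(0, pretrained_features @ x)

# New task: far fewer examples than the source task needed.
X_new = rng.random((20, 64))             # 20 samples for the new task
y_new = rng.integers(0, 2, size=20)      # binary labels

head = np.zeros(16)
lr = 0.1
for _ in range(100):                     # simple logistic-regression updates
    for x, y in zip(X_new, y_new):
        f = extract(x)
        p = 1 / (1 + np.exp(-head @ f))
        head += lr * (y - p) * f         # gradient step on the head only
```

The frozen extractor is what lets the new task get by on 20 examples instead of millions – though it only transfers whatever the source task happened to capture.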

On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence – and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.

—Ben Medlock

This article was originally published at Aeon and has been republished under Creative Commons.
