The future of AI lies in replicating our own neural networks
It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life.
I understand the appeal of this view because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.
Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions – such as whether your average cat is as big as a horse, or likely to chase a mouse.
This method found some early success in simple, contrived environments: in 'SHRDLU', a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.
In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
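The bottom-up flavor of this approach can be sketched in a few lines of Python. The two 'features' and the training numbers below are invented purely for illustration; real systems learn from millions of examples with far richer models:

```python
# Toy bottom-up learner: rather than hand-coded rules, it averages
# labeled examples (two invented features per animal: weight in kg,
# ear length in cm) and classifies new samples by nearest centroid.
def train(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

def classify(features, centroids):
    """Pick the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

data = [([4.1, 6.0], "cat"), ([4.5, 7.0], "cat"),
        ([0.03, 1.5], "mouse"), ([0.02, 1.2], "mouse")]
model = train(data)
print(classify([4.0, 6.5], model))  # decided from the data, not from rules
```

Nothing here was told what a cat *is*; the pattern emerges statistically from examples, which is both the strength of machine learning and, as the article goes on to argue, the source of its hunger for data.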
Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.
The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43% of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.
Now, it’s a bit of a leap to go from smart, self-organizing cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognizing cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence – and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.
This article was originally published at Aeon and has been republished under Creative Commons.
The Oedipal complex, repressed memories, penis envy? Sigmund Freud's ideas are far-reaching, but few have withstood the onslaught of empirical evidence.
- Sigmund Freud stands alongside Charles Darwin and Albert Einstein as one of history's best-known scientists.
- Despite his claim of creating a new science, Freud's psychoanalysis is unfalsifiable and based on scant empirical evidence.
- Studies continue to show that Freud's ideas are unfounded, and Freud has come under scrutiny for fabricating his most famous case studies.
Few thinkers are as celebrated as Sigmund Freud, a figure as well-known as Charles Darwin and Albert Einstein. A neurologist and the founder of psychoanalysis, Freud didn't simply shift the paradigms of academia and psychotherapy; his ideas seeped indelibly into our cultural consciousness. Ideas like transference, repression, the unconscious iceberg, and the superego are ubiquitous in today's popular discourse.
Despite this renown, Freud's ideas have proven to be ill-substantiated. Worse, it is now believed that Freud himself may have fabricated many of his results, opportunistically disregarding evidence with the conscious aim of promoting preferred beliefs.
"[Freud] really didn't test his ideas," Harold Takooshian, professor of psychology at Fordham University, told ATI. "He was just very persuasive. He said things no one said before, and said them in such a way that people actually moved from their homes to Vienna to study with him."
Unlike Darwin's and Einstein's work, Freud's brand of psychology presents the impression of a scientific endeavor but ultimately lacks two vital scientific components: falsifiability and empirical evidence.
Freud's therapeutic approach may be unfounded, but at least it was more humane than other therapies of the day. In this 1903 photo, a patient is treated in an "auto-conduction cage" as part of his electrotherapy. (Photo: Wikimedia Commons)
The discipline of psychotherapy is arguably Freud's greatest contribution to psychology. In the post-World War II era, psychoanalysis spread through Western academia, influencing not only psychotherapy but even fields such as literary criticism in profound ways.
The aim of psychoanalysis is to treat mental disorders housed in the patient's psyche. Proponents believe that such conflicts arise between conscious thoughts and unconscious drives and manifest as dreams, blunders, anxiety, depression, or neurosis. To help, therapists attempt to unearth unconscious desires that have been blocked by the mind's defense mechanisms. By raising repressed emotions and memories to the conscious fore, the therapist can liberate and help the patient heal.
That's the idea at least, but the psychoanalytic technique stands on shaky empirical ground. Its data lean heavily on a therapist's arbitrary interpretations, offering no safeguards against presuppositions and implicit biases. And the free-association method offers no buttress to the idea of unconscious motivation.
Don't get us wrong. Patients have improved and even claimed to be cured thanks to psychoanalytic therapy. However, the lack of methodological rigor means the division between effective treatment and placebo effect is ill-defined.
Sigmund Freud, circa 1921. (Photo: Wikimedia Commons)
Nor has Freud's concept of repressed memories held up. Many papers and articles have been written to dispel the confusion surrounding repressed (aka dissociated) memories. Their arguments center on two facts about the mind that neurologists have become better acquainted with since Freud's day.
First, our memories are malleable, not perfect recordings of events stored on a biological hard drive. People forget things. Childhood memories fade or are revised to suit a preferred narrative. We recall blurry gists rather than clean, sharp images. Physical changes to the brain can result in loss of memory. These realities of our mental slipperiness can easily be misinterpreted under Freud's model as repression of trauma.
Second, people who face trauma and abuse often remember it. The release of stress hormones imprints the experience, strengthening neural connections and rendering it difficult to forget. It's one of the reasons victims continue to suffer long after. As the American Psychological Association points out, there is "little or no empirical support" for dissociated memory theory, and potential occurrences are a rarity, not the norm.
More worryingly, there is evidence that people are vulnerable to constructing false memories (aka pseudomemories). In a 1996 study, suggestion led one-fifth of participants to believe in a fictitious childhood memory of being lost in a mall. And a 2007 study found that a therapy-based recollection of childhood abuse "was less likely to be corroborated by other evidence than when the memories came without help."
This has led many to wonder if the expectations of psychoanalytic therapy may inadvertently become a self-fulfilling prophecy with some patients.
"The use of various dubious techniques by therapists and counselors aimed at recovering allegedly repressed memories of [trauma] can often produce detailed and horrific false memories," writes Chris French, a professor of psychology at Goldsmiths, University of London. "In fact, there is a consensus among scientists studying memory that traumatic events are more likely to be remembered than forgotten, often leading to post-traumatic stress disorder."
The Oedipal complex
The Blind Oedipus Commending His Children to the Gods by Benigne Gagneraux. (Photo: Wikimedia Commons)
In Freud's model of psychosexual development, children in the phallic stage develop fierce erotic feelings for their opposite-sex parent. This desire, in turn, leads them to hate their same-sex parent. Boys wish to replace their father and possess their mother; girls become jealous of their mothers and desire their fathers. Since they can do neither, they repress those feelings for fear of reprisal. If unresolved, the complex can result in neurosis later in life.
That's the Oedipal complex in a nutshell. You'd think such a counterintuitive theory would require strong evidence to back it up, but that isn't the case.
Studies claiming to prove the Oedipal complex look to positive sexual imprinting — that is, the phenomenon in which people choose partners with physical characteristics matching their opposite-sex parent. For example, a man's wife and mother have the same eye color, or a woman's husband and father sport a similar nose.
But such studies don't often show strong correlation. One study reporting "a correlation of 92.8 percent between the relative jaw width of a man's mother and that of [his] mates" had to be retracted for factual errors and incorrect analysis. Studies showing causation seem absent from the literature, and as we'll see, the veracity of Freud's own case studies supporting the complex is openly questioned today.
Better supported, yet still hypothetical, is the Westermarck effect. Also called reverse sexual imprinting, the effect predicts that people develop a sexual aversion to those they grow up in close proximity with, as a means of avoiding inbreeding. The effect isn't limited to parents and siblings; even step-siblings will grow sexually averse to each other if they are raised together from early childhood.
An analysis published in Behavioral Ecology and Sociobiology evaluated the literature on human mate choice. The analysis found little evidence for positive imprinting, citing study design flaws and an unwillingness of researchers to seek alternative explanations. In contrast, it found better support for negative sexual imprinting, though it did note the need for further research.
The Freudian slip
Mark notices Deborah enter the office whistling an upbeat tune. He turns to his coworker to say, "Deborah's pretty cheery this morning," but accidentally blunders, "Deborah's pretty cherry this morning." Simple slip up? Not according to Freud, who would label this a parapraxis. Today, it's colloquially known as a "Freudian slip."
"Almost invariably I discover a disturbing influence from something outside of the intended speech," Freud wrote in The Psychopathology of Everyday Life. "The disturbing element is a single unconscious thought, which comes to light through the special blunder."
In the Freudian view, Mark's mistaken word choice resulted from his unconscious desire for Deborah, as evidenced by the sexually charged meaning of the word "cherry." But Rob Hartsuiker, a psycholinguist at Ghent University, says that such inferences miss the mark by ignoring how our brains process language.
According to Hartsuiker, our brains organize words by similarity and meaning. First, we must select the word in that network and then process the word's sounds. In this interplay, all sorts of conditions can prevent us from grasping the proper phonemes: inattention, sleepiness, recent activation, and even age. In a study co-authored by Hartsuiker, brain scans showed our minds can recognize and correct for taboo utterances internally.
"This is very typical, and it's also something Freud rather ignored," Hartsuiker told the BBC. He added that evidence for true Freudian slips is scant.
Freud's case studies
Sergej Pankejeff, known as the "Wolf Man" in Freud's case study, claimed that Freud's analysis of his condition was "propaganda."
It's worth noting that there is much debate as to the extent that Freud falsified his own case studies. One famous example is the case of the "Wolf Man," real name Sergej Pankejeff. During their sessions, Pankejeff told Freud about a dream in which he was lying in bed and saw white wolves through an open window. Freud interpreted the dream as the manifestation of a repressed trauma. Specifically, he claimed that Pankejeff must have witnessed his parents in coitus.
For Freud, this was case closed. He claimed Pankejeff was successfully cured and cited his case as evidence of psychoanalysis's merit. Pankejeff disagreed. He found Freud's interpretation implausible and said that Freud's handling of his story was "propaganda." He remained in therapy on and off for over 60 years.
Many of Freud's other case studies, such as the "Dora" and "Rat Man" cases, have come under similar scrutiny.
Sigmund Freud and his legacy
Freud's ideas may not live up to scientific inquiry, but their long shelf-life in film, literature, and criticism has created some fun readings of popular stories. Sometimes a face is just a face, but that face is a murderous phallic symbol. (Photo: Flickr)
Of course, there are many ideas we've left out. Homosexuality originating from arrested sexual development in the anal phase? No way. Freudian psychosexual development theory? Unfalsifiable. Women's penis envy? Unfounded and insulting. Men's castration anxiety? Not in the way Freud meant it.
If Freud's legacy is so ill-informed, so unfounded, how did he and his cigars cast such a long shadow over the 20th century? Because there was nothing better to offer at the time.
When Freud came onto the scene, neurology was engaged in a giddy free-for-all. As New Yorker writer Louis Menand points out, the era's treatments included hypnosis, cocaine, hydrotherapy, female castration, and institutionalization. By contemporary standards, it was a horror show (as evidenced by how prominently these "treatments" feature in our horror movies).
Psychoanalysis offered a comparably clement and humane alternative. "Freud's theories were like a flashlight in a candle factory," anthropologist Tanya Luhrmann told Menand.
But Freud and his advocates trumpeted his techniques as a science, and this is wrong. The empirical evidence for his ideas is limited and arbitrary, and his conclusions are unfalsifiable. A theory that explains every possible outcome explains none of them.
With that said, one might consider Freud's ideas a proto-science. As astrology heralded astronomy, and alchemy preceded chemistry, so too did Freud's psychoanalysis popularize psychology, paving the way for its more rapid development as a scientific discipline. But like astrology and alchemy, we should recognize Freud's ideas as the historical artifacts they are.