Why A.I. is a big fat lie
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
A.I. is a big fat lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The much better, precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.
On the other hand, AI does provide some great material for nerdy jokes. So put on your skepticism hat, it's time for an AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree – yeehaw!
3 main points
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.
Neural networks for the win
In the movie "Terminator 2: Judgment Day," the titular robot says, "My CPU is a neural net processor, a learning computer." The neural network of which that famous robot speaks is actually a real kind of machine learning method. A neural network is a way to depict a complex mathematical formula, organized into layers. This formula can be trained to do things like recognize images for self-driving cars. For example, watch several seconds of a neural network performing object recognition.
What you see it doing there is truly amazing. The network's identifying all those objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible. With deep learning, the network is quite literally deeper – more of those layers. However, even way way back in 1997, the first time I taught the machine learning course, neural networks were already steering self-driving cars, in limited contexts, and we even had our students apply them for face recognition as a homework assignment.
The architecture of a simple neural network with four layers
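To make the "complex formula organized into layers" idea concrete, here's a minimal sketch in Python of what a four-layer feedforward network like the one pictured computes; the layer sizes and weights are made up for illustration, not the architecture of any real self-driving-car system.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity applied after each layer.
    return np.maximum(0, x)

# Illustrative layer sizes: an input layer, two hidden layers, and an output layer.
# Real image-recognition networks are vastly larger.
layer_sizes = [4, 8, 8, 3]

# Randomly initialized weights and biases, one set per pair of adjacent layers.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Push an input vector through the layers: each layer is just a matrix
    multiply, a bias add, and the nonlinearity above."""
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return x

print(forward(np.array([0.5, -1.2, 3.0, 0.1])))
```

"Training" the network means nudging those weight numbers, over and over, until its outputs match labeled examples, which is exactly where the next section picks up.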
But the more recent improvements are uncanny, boosting its power for many industrial applications. So, we've even launched a new conference, Deep Learning World, which covers the commercial deployment of deep learning. It runs alongside our long-standing machine learning conference series, Predictive Analytics World.
Supervised machine learning requires labeled data
So, with machines just getting better and better at humanlike tasks, doesn't that mean they're getting smarter and smarter, moving towards human intelligence?
No. It can get really, really good at certain tasks, but only when there's the right data from which to learn. For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled. It needed those examples to learn to recognize those kinds of objects. This is called supervised machine learning: when there is pre-labeled training data. The learning process is guided or "supervised" by the labeled examples. It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time. That's the learning process. And the only way it knows the neural network is improving or "learning" is by testing it on those labeled examples. Without labeled data, it couldn't recognize its own improvements so it wouldn't know to stick with each improvement along the way. Supervised machine learning is the most common form of machine learning.
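As a caricature of that loop (a toy illustration of my own, not any real training algorithm), here's a sketch in Python: a tiny "model" gets tweaked at random, and a tweak is kept only when the labeled examples say the score improved.

```python
import random

# Toy labeled data: (number, label), where the label says whether the number is "big".
labeled_examples = [(x, x > 37) for x in range(100)]

def accuracy(threshold):
    """Fraction of labeled examples a threshold classifier gets right --
    the only yardstick supervised learning has for judging 'improvement'."""
    correct = sum((x > threshold) == label for x, label in labeled_examples)
    return correct / len(labeled_examples)

def train(steps=2000):
    """Hill-climbing caricature of supervised learning: propose a small tweak,
    keep it only if it scores better on the labeled examples."""
    threshold, best = 0.0, accuracy(0.0)
    for _ in range(steps):
        candidate = threshold + random.uniform(-1, 1)  # one incremental tweak
        score = accuracy(candidate)
        if score > best:  # without labels, we couldn't even detect the improvement
            threshold, best = candidate, score
    return threshold, best

print(train())  # converges toward a threshold near 37
```

Swap the toy threshold for millions of neural network weights and the random tweaks for gradient-based updates and you get much closer to the real thing, but the dependence on labeled examples is the same.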
Here's another example. In 2011, IBM's Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy. I'm a big fan. This was by far the most amazing thing I've seen a computer do – more impressive than anything I'd seen during six years of graduate school in natural language understanding research. Here's a 30-second clip of Watson answering three questions.
To be clear, the computer didn't actually hear the spoken questions but rather was fed each question as typed text. But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best "intelligent-like" thing I've ever seen from a computer.
But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with their own correct answer.
At the core, the trick was to turn every question into a yes/no prediction: "Will such-n-such turn out to be the answer to this question?" Yes or no. If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident "yes." For example, "Is 'Abraham Lincoln' the answer to 'Who was the first president?'" No. "Is 'George Washington'?" Yes! Now the machine has its answer and spits it out.
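That "try candidates until you get a confident yes" trick can be sketched in a few lines of Python; this is a toy illustration with a hypothetical stand-in for the trained yes/no model, not Watson's actual system.

```python
def answer(question, candidates, confidence_of_yes):
    """Reduce question answering to many yes/no predictions: score every
    candidate on 'will this turn out to be the answer?' and return the
    candidate with the highest confidence."""
    return max(candidates, key=lambda c: confidence_of_yes(question, c))

# Hypothetical stand-in for a trained yes/no prediction model.
def toy_confidence(question, candidate):
    return 1.0 if candidate == "George Washington" else 0.1

print(answer("Who was the first president?",
             ["Abraham Lincoln", "George Washington", "John Adams"],
             toy_confidence))
```

The hard part, of course, is the confidence function, and that's precisely what those 25,000 labeled Jeopardy questions were for.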
Computers that can talk like humans
And there's another area of language use that also has plentiful labeled data: machine translation. Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
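In machine learning terms, a parallel corpus is just labeled data: the English sentence is the input, and its translation is the "correct answer." A tiny, hypothetical example in Python:

```python
# Hypothetical parallel corpus: each English sentence paired with its Japanese
# translation. For supervised learning, these pairs are the labeled examples.
parallel_corpus = [
    ("Good morning.", "おはようございます。"),
    ("Where is the station?", "駅はどこですか。"),
    ("Thank you very much.", "どうもありがとうございます。"),
]

# A translation model is trained like any other supervised learner:
# inputs on one side, known-correct outputs on the other.
training_inputs = [english for english, japanese in parallel_corpus]
training_labels = [japanese for english, japanese in parallel_corpus]
```

Real systems learn from vastly larger collections of such pairs.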
In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning. Go try it out – translate a letter to your friend or relative who has a different first language than you. I use it a lot myself.
On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity. There's no known roadmap to fluency for our silicon sisters and brothers. When we humans understand one another, underneath all the words and somewhat logical grammatical rules is "general common sense and reasoning." You can't work with language without that very particular human skill – a broad, unwieldy, amorphous thing we humans amazingly have.
So our hopes and dreams of talking computers are dashed because, unfortunately, there's no labeled data for "talking like a person." You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer. But the general notion of "talking like a human" is not a well-defined problem. Computers can only solve problems that are precisely defined.
So we can't leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001's evil HAL computer, or the friendly, helpful ship computer in Star Trek. You can converse with those machines in English very much like you would with a human. It's easy. Ya just have to be a character in a science fiction movie.
Intelligence is subjective, so A.I. has no real definition
Now, if you think you don't already know enough about AI, you're wrong. There is nothing to know, because it isn't actually a thing. There's literally no meaningful definition whatsoever. AI poses as a field, but it's actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to "smart computer." I must warn you, do not look up "self-referential" in the dictionary. You'll get stuck in an infinite loop.
Many definitions are even more circular than "smart computer," if that's possible. They just flat out use the word "intelligence" itself within the definition of AI, like "intelligence demonstrated by a machine."
If you've assumed there are more subtle shades of meaning at hand, surprise – there aren't. There's no way to resolve how utterly subjective the word "intelligence" is. For computers and engineering, "intelligence" is an arbitrary concept, irrelevant to any precise goal. All attempts to define AI fail to solve its vagueness.
Now, in practice the word is often just – confusingly – used as a synonym for machine learning. But as for AI as its own concept, most proposed definitions are variations of the following three:
1) AI is getting a computer to think like a human. Mimic human cognition. Now, we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.
2) AI is getting a computer to act like a human. Mimic human behavior. Cause if it walks like a duck and talks like a duck... But it doesn't and it can't and we're way too sophisticated and complex to fully understand ourselves, let alone translate that understanding into computer code. Besides, fooling people into thinking a computer in a chatroom is actually a human – that's the famous Turing Test for machine intelligence – is an arbitrary accomplishment and it's a moving target as we humans continually become wiser to the trickery used to fool us.
3) AI is getting computers to solve hard problems. Get really good at tasks that seem to require "intelligence" or "human-level" capability, such as driving a car, recognizing human faces, or mastering chess. But now that computers can do them, these tasks don't seem so intelligent after all. Everything a computer does is just mechanical and well understood and in that way mundane. Once the computer can do it, it's no longer so impressive and it loses its charm. A computer scientist named Larry Tesler suggested we define intelligence as "whatever machines haven't done yet." Humorous! A moving-target definition that defines itself out of existence.
By the way, the points in this article also apply to the term "cognitive computing," which is another poorly-defined term coined to allege a relationship between technology and human cognition.
The logical fallacy of believing in A.I.'s inevitability
The thing is, "artificial intelligence" itself is a lie. Just evoking that buzzword automatically insinuates that technological advancement is making its way toward the ability to reason like people. To gain humanlike "common sense." That's a powerful brand. But it's an empty promise. Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.
Now, some may respond to me, "Isn't inspired, visionary ambition a good thing? Imagination propels us and unknown horizons beckon us!" Arthur C. Clarke, the author of 2001, made a great point: "Any sufficiently advanced technology is indistinguishable from magic." I agree. However, that does not mean any and all "magic" we can imagine – or include in science fiction – could eventually be achieved by technology. Just 'cause it's in a movie doesn't mean it's gonna happen. AI evangelists often invoke Arthur's point – but they've got the logic reversed. My iPhone seems very "Star Trek" to me, but that's not an argument everything on Star Trek is gonna come true. The fact that creative fiction writers can make shows like Westworld is not at all evidence that stuff like that could happen.
Now, maybe I'm being a buzzkill, but actually I'm not. Let me put it this way. The uniqueness of humans and the real advancements of machine learning are each already more than amazing and exciting enough to keep us entertained. We don't need fairy tales – especially ones that mislead.
Sophia: A.I.'s most notoriously fraudulent publicity stunt
The star of this fairy tale, the leading role of "The Princess" is played by Sophia, a product of Hanson Robotics and AI's most notorious fraudulent publicity stunt. This robot has applied her artificial grace and charm to hoodwink the media. Jimmy Fallon and other interviewers have hosted her – it, I mean have hosted it. But when it "converses," it's all scripts and canned dialogue – misrepresented as spontaneous conversation – and in some contexts, rudimentary chatbot-level responsiveness.
Believe it or not, three fashion magazines have featured Sophia on their cover, and, even goofier and sillier, the country Saudi Arabia officially granted it citizenship. For real. The first robot citizen. I'm actually a little upset about this, 'cause my microwave and pet rock have also applied for citizenship but still no word.
Sophia is a modern-day Mechanical Turk – which was an 18th century hoax that fooled the likes of Napoleon Bonaparte and Benjamin Franklin into believing they'd just lost a game of chess to a machine. A mannequin would move the chess pieces and the victims wouldn't notice there was actually a small human chess expert hidden inside a cabinet below the chess board.
In a modern day parallel, Amazon has an online service you use to hire workers to perform many small tasks that require human judgement, like choosing the nicest looking of several photographs. It's named Amazon Mechanical Turk, and its slogan is "Artificial Artificial Intelligence." Which reminds me of this great vegetarian restaurant with "mock mock duck" on the menu – I swear, it tastes exactly like mock duck. Hey, if it talks like a duck, and it tastes like a duck...
Yes indeed, the very best fake AI is humans. In 1965, when NASA was defending the idea of sending humans to space, they put it this way: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor." I dunno. I think there's some skill in it. ;-)
The myth of dangerous superintelligence
Anyway, as for Sophia, mass hysteria, right? Well, it gets worse: Claims that AI presents an existential threat to the human race. From the most seemingly credible sources, the most elite of tech celebrities, comes a doomsday vision of homicidal robots and killer computers. None other than Bill Gates, Elon Musk, and even the late, great Stephen Hawking have jumped on the "superintelligence singularity" bandwagon. They believe machines will achieve a degree of general competence that empowers the machines to improve their own general competence – so much so that this will then quickly escalate past human intelligence, and do so at the lightning speed of computers, a speed the computers themselves will continue to improve by virtue of their superintelligence, and before you know it you have a system or entity so powerful that the slightest misalignment of objectives could wipe out the human race. Like if we naively commanded it to manufacture as many rubber chickens as possible, it might invent an entire new high-speed industry that can make 40 trillion rubber chickens but that happens to result in the extinction of Homo sapiens as a side effect. Well, at least it would be easier to get tickets for Hamilton.
There are two problems with this theory. First, it's so compellingly dramatic that it's gonna ruin movies. If the best bad guy is always a robot instead of a human, what about Nurse Ratched and Norman Bates? I need my Hannibal Lecter! "The best bad guy," by the way, is an oxymoron. And so is "artificial intelligence." Just sayin'.
But it is true: Robopocalypse is definitely coming. Soon. I'm totally serious, I swear. Based on a novel by the same name, Michael Bay – of the "Transformers" movies – is currently directing it as we speak. Fasten your gosh darn seatbelts people, 'cause, if "Robopocalypse" isn't in 3D, you were born in the wrong parallel universe.
Oh yeah, and the second problem with the AI doomsday theory is that it's ludicrous. AI is so smart it's gonna kill everyone by accident? Really really stupid superintelligence? That sounds like a contradiction.
To be more precise, the real problem is that the theory presumes that technological advancements move us along a path toward humanlike "thinking" capabilities. But they don't. It's not headed in that direction. I'll come back to that point again in a minute – first, a bit more on how widely this apocalyptic theory has radiated.
A widespread belief in superintelligence
The Kool-Aid these high-tech royalty drink, the go-to book that sets the foundation, is the New York Times bestseller "Superintelligence," by Nick Bostrom, who's a professor of applied ethics at Oxford University. The book mongers the fear and fans the flames, if not igniting the fire in the first place for many people. It explores how we might "make an intelligence explosion survivable." The Guardian newspaper ran a headline, "Artificial intelligence: 'We're like children playing with a bomb'," and Newsweek: "Artificial Intelligence Is Coming, and It Could Wipe Us Out," both headlines obediently quoting Bostrom himself.
Bill Gates "highly recommends" the book, Elon Musk said AI is "vastly more risky than North Korea" – as Fortune Magazine repeated in a headline – and, quoting Stephen Hawking, the BBC ran a headline, "'AI could spell end of the human race'."
In a TED Talk that's been viewed 5 million times (across platforms), the bestselling author and podcast intellectual Sam Harris states with supreme confidence, "At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves."
Both he and Bostrom show the audience an intelligence spectrum during their TED Talks – here's the one by Bostrom:
What happens when our computers get smarter than we are? | Nick Bostrom
You can see as we move along the path from left to right we pass a mouse, a chimp, a village idiot, and then the very smart theoretical physicist Ed Witten. He's relatively close to the idiot, because even an idiot human is much smarter than a chimp, relatively speaking. You can see the arrow just above the spectrum showing that "AI" progresses in that same direction, along to the right. At the very rightmost position is Bostrom himself, which is either just an accident of photography, or proof that he himself is an AI robot.
Oops, that was the wrong clip – uh, that was Dr. Frankenstein, but, ya know, same scenario.
A falsely conceived "spectrum of intelligence"
Anyway, that falsely-conceived intelligence spectrum is the problem. I've read the book and many of the interviews and watched the talks and pretty much all the believers intrinsically build on an erroneous presumption that "smartness" or "intelligence" falls more or less along a single, one-dimensional spectrum. They presume that the more adept machines become at more and more challenging tasks, the higher they will rank on this scale, eventually surpassing humans.
But machine learning has us marching along a different path. We're moving fast, and we'll likely go very far, but we're going in a different direction, only tangentially related to human capabilities.
The trick is to take a moment to think about this difference. Our own personal experience of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from ourselves beneath a veil of a conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us towards in any meaningful way.
Thinking abstractly often feels uncomplicated. We draw visuals in our mind, like a not-to-scale map of a city we're navigating, or a "space" of products that two large companies are competing to sell, with each company dominating in some areas but not in others... or, when thinking about AI, the mistaken vision that increasingly adept capabilities – both intellectual and computational – all fall along the same, somewhat narrow path.
Now, Bostrom rightly emphasizes that we should not anthropomorphize what intelligent machines may be like in the future. It's not human, so it's hard to speculate on the specifics and perhaps it will seem more like a space alien's intelligence. But what Bostrom and his followers aren't seeing is that, since they believe technology advances along a spectrum that includes and then transcends human cognition, the spectrum itself as they've conceived it is anthropomorphic. It has humanlike qualities built in. Now, your common sense reasoning may seem to you like a "natural stage" of any sort of intellectual development, but that's a very human-centric perspective. Your common sense is intricate and very, very particular. It's far beyond our grasp – for anyone – to formally define a "spectrum of intelligence" that includes human cognition on it. Our brains are spectacularly multi-faceted and adept, in a very arcane way.
Machines progress along a different spectrum
Machine learning actually does work by defining a kind of spectrum, but only for an extremely limited sort of trajectory – only for tasks that have labeled data, such as identifying objects in images. With labeled data, you can compare and rank various attempts to solve the problem. The computer uses the data to measure how well it does. Like, one neural network might correctly identify 90% of the trucks in the images and then a variation after some improvements might get 95%.
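A labeled test set is exactly what makes that ranking possible. Here's a toy sketch in Python, with invented image names and stand-in "detectors" in place of real neural networks, of scoring two attempts against labeled images.

```python
# Invented labeled test set: every image name already carries the correct answer.
labeled_images = [(f"img_{i:03d}.jpg", "truck" if i % 2 == 0 else "car")
                  for i in range(200)]

def accuracy(detector):
    """Score an attempt by the fraction of labeled examples it gets right."""
    hits = sum(detector(name) == label for name, label in labeled_images)
    return hits / len(labeled_images)

# Two stand-in detectors playing the role of two versions of a neural network.
detector_a = lambda name: "truck"                                        # always guesses truck
detector_b = lambda name: "truck" if int(name[4:7]) % 2 == 0 else "car"  # a (cheating) rule

for version, detector in [("attempt A", detector_a), ("attempt B", detector_b)]:
    print(version, accuracy(detector))  # the labels are what let us keep the better attempt
```

That narrow, task-specific score is the only "spectrum" the machine is actually climbing.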
Getting better and better at a specific task like that obviously doesn't lead to general common sense reasoning capabilities. We're not on that trajectory, so the fears should be allayed. The machine isn't going to get to a human-like level where it then figures out how to propel itself into superintelligence. No, it's just gonna keep getting better at identifying objects, that's all.
Intelligence isn't a Platonic ideal that exists separately from humans, waiting to be discovered. It's not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That's a ghost story.
It might feel tempting to believe that increased complexity leads to intelligence. After all, computers are incredibly general-purpose – they can basically do any task, if only we can figure out how to program them to do that task. And we're getting them to do more and more complex things. But just because they could do anything doesn't mean they will spontaneously do everything we imagine they might.
No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain "general common sense reasoning." Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word "intelligence" might apply to computers.
Don't sell, buy, or regulate on A.I.
Machines will remain fundamentally under our control. Computer errors will kill – people will die from autonomous vehicles and medical automation – but not on a catastrophic level, unless by the intentional design of human cyber attackers. When a misstep does occur, we take the system offline and fix it.
Now, the aforementioned techno-celebrity believers are true intellectuals and are truly accomplished as entrepreneurs, engineers, and thought leaders in their respective fields. But they aren't machine learning experts. None of them are. When it comes to their AI pontificating, it would truly be better for everyone if they published their thoughts as blockbuster movie scripts rather than earnest futurism.
It's time for the term "AI" to be "terminated." Mean what you say and say what you mean. If you're talking about machine learning, call it machine learning. The buzzword "AI" is doing more harm than good. It may sometimes help with publicity, but to at least the same degree, it misleads. AI isn't a thing. It's vaporware. Don't sell it and don't buy it.
And most importantly, do not regulate on "AI"! Technology greatly needs regulation in certain arenas, for example, to address bias in algorithmic decision-making and the development of autonomous weapons – which often use machine learning – so clarity is absolutely critical in these discussions. Using the imprecise, misleading term "artificial intelligence" is gravely detrimental to the effectiveness and credibility of any initiative that regulates technology. Regulation is already hard enough without muddying the waters.
A man's skeleton, found facedown with his hands bound, was unearthed near an ancient ceremonial circle during a high speed rail excavation project.
- A skeleton representing a man who was tossed face down into a ditch nearly 2,500 years ago with his hands bound in front of his hips was dug up during an excavation outside of London.
- The discovery was made during a high speed rail project that has been a bonanza for archaeology, as the area is home to more than 60 ancient sites along the planned route.
- An ornate grave of a high status individual from the Roman period and an ancient ceremonial circle were also discovered during the excavations.
An ancient skeleton of a man dating back to the Iron Age was uncovered outside of London last month, and though archaeologists aren't certain what the cause of death was, clues point to a murder most foul.
A skeleton representing a man who was tossed face down into a ditch nearly 2,500 years ago with his hands bound in front of his hips was dug up during a high speed rail excavation.
The positioning of the remains has led archaeologists to suspect that the man may have been a victim of an ancient murder or execution. Though any bindings have since decomposed, his hands were positioned together and pinned under his pelvis. There was also no sign of a grave or coffin.
"He seems to have had his hands tied, and he was face-down in the bottom of the ditch," said archaeologist Rachel Wood, who led the excavation. "There are not many ways that you end up that way."
Currently, archaeologists are examining the skeleton to uncover more information about the circumstances of the man's death. Fragments of pottery found in the ditch may offer some clues as to exactly when the man died.
"If he was struck across the head with a heavy object, you could find a mark of that on the back of the skull," Wood said to Live Science. "If he was stabbed, you could find blade marks on the ribs. So we're hoping to find something like that, to tell us how he died."
Other discoveries at Wellwick Farm
The grim discovery was made at Wellwick Farm near Wendover, about 15 miles north-west of the outskirts of London, where a tunnel will be built as part of the HS2 high-speed rail project due to open between London and several northern cities sometime after 2028. The infrastructure project has been something of a bonanza for archaeology, as the area is home to more than 60 ancient sites along the planned route that are now being excavated before construction begins.
The farm sits less than a mile from the ancient highway Icknield Way, which runs along the tops of the Chiltern Hills. The route (now mostly trails) has been used since prehistoric times. Evidence at Wellwick Farm indicates that humans occupied the region from the Neolithic to the Medieval era – more than 4,000 years – making it a rich area for archaeological finds.
Wood and her colleagues found some evidence of an ancient village occupied from the late Bronze Age (more than 3,000 years ago) until the Roman Empire's invasion of southern England about 2,000 years ago. At the site were the remains of animal pens, pits for disposing of food, and a roundhouse — a standard British dwelling during the Bronze Age constructed with a circular plan made of stone or wood topped with a conical thatched roof.
Ceremonial burial site
A high status burial in a lead-lined coffin dating back to Roman times.
Photo Credit: HS2
While these ancient people moved away from Wellwick Farm before the Romans invaded, a large portion of the area was still used for ritual burials for high-status members of society, Wood told Live Science. The ceremonial burial site included a circular ditch (about 60 feet across) at the center, and was a bit of a distance away from the ditch where the (suspected) murder victim was uncovered. Additionally, archaeologists found an ornately detailed grave near the sacred burial site that dates back to the Roman period, hundreds of years later when the original Bronze Age burial site would have been overgrown.
The newer grave from the Roman period encapsulated an adult skeleton contained in a lead-lined coffin. It's likely that the outer coffin had been made of wood that rotted away. Since it was clearly an ornate burial, the occupant of the grave was probably a person of high status who could afford such a lavish burial. However, according to Wood, no treasures or tokens had been discovered.
Sacred timber circle
An aerial view of the sacred circular monument.
Photo Credit: HS2
One of the most compelling archaeological discoveries at Wellwick Farm is the indication of a huge ceremonial circle, once circumscribed by timber posts, lying south of the Bronze Age burial site. Though the wooden posts have rotted away, signs of the post holes remain. It's thought to date to the Neolithic period, about 5,000 years ago, according to Wood.
This circle would have had a diameter of about 210 feet and consisted of two rings of hundreds of posts, with an entry gap to the south-west. Five posts in the very center of the circle aligned with that gap, which, according to Wood, appeared to point toward the rising sun on the day of the midwinter solstice.
Similar Neolithic timber circles have been discovered around Great Britain, such as one near Stonehenge that is considered to date back to around the same time.
This spring, a U.S. and Chinese team announced that it had successfully grown, for the first time, embryos that included both human and monkey cells.
In Aldous Huxley's novel "Brave New World," technicians in charge of the hatcheries manipulate the nutrients they give the fetuses to make the newborns fit the desires of society. Two recent scientific developments suggest that Huxley's imagined world of functionally manufactured people is no longer far-fetched.
On March 17, 2021, an Israeli team announced that it had grown mouse embryos for 11 days – about half of the gestation period – in artificial wombs that were essentially bottles. Until this experiment, no one had grown a mammal embryo outside a womb this far into pregnancy. Then, on April 15, 2021, a U.S. and Chinese team announced that it had successfully grown, for the first time, embryos that included both human and monkey cells in plates to a stage where organs began to form.
As both a philosopher and a biologist I cannot help but ask how far researchers should take this work. While creating chimeras – the name for creatures that are a mix of organisms – might seem like the more ethically fraught of these two advances, ethicists think the medical benefits far outweigh the ethical risks. However, ectogenesis could have far-reaching impacts on individuals and society, and the prospect of babies grown in a lab has not been put under nearly the same scrutiny as chimeras.
Mouse embryos were grown in an artificial womb for 11 days, and organs had begun to develop.
Growing in an artificial womb
When in vitro fertilization first emerged in the late 1970s, the press called IVF embryos "test-tube babies," though they are nothing of the sort. These embryos are implanted into the uterus within a day or two after doctors fertilize an egg in a petri dish.
Before the Israeli experiment, researchers had not been able to grow mouse embryos outside the womb for more than four days – providing the embryos with enough oxygen had been too hard. The team spent seven years creating a system of slowly spinning glass bottles and controlled atmospheric pressure that simulates the placenta and provides oxygen.
This development is a major step toward ectogenesis, and scientists expect that it will be possible to extend mouse development further, possibly to full term outside the womb. This will likely require new techniques, but at this point it is a problem of scale – being able to accommodate a larger fetus. This appears to be a simpler challenge to overcome than figuring out something totally new like supporting organ formation.
The Israeli team plans to deploy its techniques on human embryos. Since mice and humans have similar developmental processes, it is likely that the team will succeed in growing human embryos in artificial wombs.
To do so, though, members of the team need permission from their ethics board.
CRISPR – a technology that can cut and paste genes – already allows scientists to manipulate an embryo's genes after fertilization. Once fetuses can be grown outside the womb, as in Huxley's world, researchers will also be able to modify their growing environments to further influence what physical and behavioral qualities these parentless babies exhibit. Science still has a way to go before fetus development and births outside of a uterus become a reality, but researchers are getting closer. The question now is how far humanity should go down this path.
Chimeras evoke images of mythological creatures of multiple species – like this 15th-century drawing of a griffin – but the medical reality is much more sober. (Martin Schongauer/Wikimedia Commons)
Human–monkey hybrids might seem to be a much scarier prospect than babies born from artificial wombs. But in fact, the recent research is more a step toward an important medical development than an ethical minefield.
If scientists can grow human cells in monkeys or other animals, it should be possible to grow human organs too. This would solve the problem of organ shortages around the world for people needing transplants.
But keeping human cells alive in the embryos of other animals for any length of time has proved to be extremely difficult. In the human-monkey chimera experiment, a team of researchers implanted 25 human stem cells into embryos of crab-eating macaques – a type of monkey. The researchers then grew these embryos for 20 days in petri dishes.
After 15 days, the human stem cells had disappeared from most of the embryos. But at the end of the 20-day experiment, three embryos still contained human cells that had grown as part of the region of the embryo where they were embedded. For scientists, the challenge now is to figure out how to maintain human cells in chimeric embryos for longer.
Regulating these technologies
Some ethicists have begun to worry that researchers are rushing into a future of chimeras without adequate preparation. Their main concern is the ethical status of chimeras that contain human and nonhuman cells – especially if the human cells integrate into sensitive regions such as a monkey's brain. What rights would such creatures have?
However, there seems to be an emerging consensus that the potential medical benefits justify a step-by-step extension of this research. Many ethicists are urging public discussion of appropriate regulation to determine how close to viability these embryos should be grown. One proposed solution is to limit growth of these embryos to the first trimester of pregnancy. Given that researchers don't plan to grow these embryos beyond the stage when they can harvest rudimentary organs, I don't believe chimeras are ethically problematic compared with the true test-tube babies of Huxley's world.
Few ethicists have broached the problems posed by the ability to use ectogenesis to engineer human beings to fit societal desires. Researchers have yet to conduct experiments on human ectogenesis, and for now, scientists lack the techniques to bring the embryos to full term. However, without regulation, I believe researchers are likely to try these techniques on human embryos – just as the now-infamous He Jiankui used CRISPR to edit human babies without properly assessing safety and desirability. Technologically, it is a matter of time before mammal embryos can be brought to term outside the body.
While people may be uncomfortable with ectogenesis today, this discomfort could pass into familiarity as happened with IVF. But scientists and regulators would do well to reflect on the wisdom of permitting a process that could allow someone to engineer human beings without parents. As critics have warned in the context of CRISPR-based genetic enhancement, pressure to change future generations to meet societal desires will be unavoidable and dangerous, regardless of whether that pressure comes from an authoritative state or cultural expectations. In Huxley's imagination, hatcheries run by the state grew large numbers of identical individuals as needed. That would be a very different world from today.
Sahotra Sarkar, Professor of Philosophy and Integrative Biology, The University of Texas at Austin College of Liberal Arts
Scientists should be cautious when expressing an opinion based on little more than speculation.
- In October 2017, a strange celestial object was detected, soon to be declared our first recognized interstellar visitor.
- The press exploded when a leading Harvard astronomer suggested the object had been engineered by an alien civilization.
- This is an extraordinary conclusion that was based on a faulty line of scientific reasoning. Ruling out competing hypotheses doesn't make your hypothesis right.
Sometimes, when you are looking for something ordinary, you find the unexpected. This is definitely the case with the strange 'Oumuamua, which made international headlines as a potential interstellar visitor. Its true identity remained obscure for a while, as scientists proposed different explanations for its puzzling behavior. This is the usual scientific approach of testing hypotheses to make sense of a new discovery.
What captured the popular imagination was the claim that the object was no piece of rock or comet, but an alien artifact, designed by a superior intelligence.
Do you remember the black monolith tumbling through space in the classic Stanley Kubrick movie 2001: A Space Odyssey? The one that "inspired" our ape-like ancestors to develop technology and followed humanity and its development since then? What made this claim amazing is that it wasn't coming from the usual UFO enthusiasts but from a respected astrophysicist from Harvard University, Avi Loeb, and his collaborator Shmuel Bialy. Does their claim really hold water? Were we really visited by an alien artifact? How would we know?
A mystery at 200,000 miles per hour
Before we dive into the controversy, let's examine some history. 'Oumuamua was discovered accidentally by Canadian astronomer Robert Weryk while he was routinely reviewing images captured by the telescope Pan-STARRS1 (Panoramic Survey and Rapid Response System 1), situated atop the ten-thousand-foot Haleakala volcanic peak on the Hawaiian island of Maui. The telescope scans the skies in search of near-Earth objects, mostly asteroids and possibly comets that come close to Earth. The idea is to monitor the solar system to learn more about such objects and their orbits and, of course, to sound the alarm in case of a potential collision course with Earth. Unlike the objects Weryk was used to seeing, which mostly move at about 40,000 miles per hour, this one was moving almost five times as fast — nearly 200,000 miles per hour, definitely an anomaly.
Intrigued, astronomers tracked the visitor while it was visible, concluding that it indeed must have come from outside our solar system, the first recognized interstellar visitor. Unlike most known asteroids, which move in elliptical orbits around the sun, 'Oumuamua had a bizarre, mostly straight path. Also, its brightness varied by a factor of ten as it tumbled across space, a very unusual property that could be caused either by an elongated cigar shape or by a flat shape, like a CD, with one side more reflective than the other. The object, 1I/2017 U1, became popularly known as 'Oumuamua, from the Hawaiian for "scout."
In their paper, Loeb and Bialy argue that the only way the object could be accelerated to the speeds observed was if it were extremely thin and very large, like a sail. They estimated that its thickness had to be between 0.3 and 0.9 millimeters, which is extremely thin. After confirming that such an object is robust enough to withstand the hardships of interstellar travel (e.g., collision with gas particles and dust grains, tensile stresses, rotation, and tidal forces), Loeb and Bialy conclude that it couldn't possibly be a solar system object like an asteroid or comet. Being thus of interstellar origin, the question is whether it is a natural or artificial object. This is where the paper ventures into interesting but far-fetched speculation.
I'm not saying it was aliens, but it was aliens
First, the authors consider that it might be garbage "floating in interstellar space as debris from advanced technological equipment," ejected from its own stellar system due to its non-functionality; essentially, alien space junk. Then, they suggest that a "more exotic scenario is that 'Oumuamua may be a fully operational probe sent intentionally to Earth vicinity by an alien civilization," [italicized as in the original] concluding that a "survey for lightsails as technosignatures in the solar system is warranted, irrespective of whether 'Oumuamua is one of them."
You can shoot down as many hypotheses as you want to vindicate yours, but this doesn't prove yours is the right one.
I have known Avi Loeb for decades and consider him a serious and extremely talented astrophysicist. His 2018 paper includes a suggestive interpretation of strange data that obviously sparks the popular imagination. Theoretical physicists routinely suggest the existence of traversable wormholes, multiverses, and parallel quantum universes. Not surprisingly, Loeb was highly in demand by the press to fill in the details of his idea. A book followed, Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, and its description tells all: "There was only one conceivable explanation: the object was a piece of advanced technology created by a distant alien civilization."
This is where most of the scientific establishment began to cringe. It's one thing to discuss the properties of a strange natural phenomenon, rule out more prosaic hypotheses, and suggest a daring one. It's another to declare to the public that the only conceivable explanation is one that is also speculative. An outsider will conclude that a reliable scientist has confirmed not only the existence of extraterrestrial life but of intelligent and technologically sophisticated extraterrestrial life with an interest in our solar system. I wonder if Loeb considered the impact of his words and how they reflect on the scientific community as a whole.
This is why aliens won't talk to us
Earlier this year, in a live public lecture hosted by the Catholic University of Chile, Avi Loeb locked horns with Jill Tarter, the scientist that is perhaps most identifiable as someone who spent her career looking for signs of extraterrestrial intelligence. (Coincidentally, I was the speaker that followed Loeb the next week in the same seminar series and was cautioned — along with the other panelists — to behave myself to avoid another showdown. I smiled, knowing that my topic was pretty tame in comparison. I mean, how can the limits of human knowledge compare with alien surveillance?)
The Loeb-Tarter exchange was awful and, it being a public debate, was picked up by the press. Academics can be rough like anyone else. But the issue goes deeper.
What scientists say matters. When should a scientist make public declarations about a cutting-edge topic with absolute certainty? I'd say never. There is no clear-cut certainty in cutting-edge science. There are hypotheses that should be tested more until there is community consensus. Even then, consensus is not guaranteed proof. The history of science is full of examples where leading scientists were convinced of something, only to be proven wrong later.
The epistemological mistake Loeb committed was to make an assertion that publicly amounted to certainty by using a process of elimination of other competing hypotheses. You can shoot down as many hypotheses as you want to vindicate yours, but this doesn't prove yours is the right one. It only means that the other hypotheses are wrong. I do, however, agree with Loeb when he says that 'Oumuamua should be the trigger for an increase in funding for the search for technosignatures, a way of detecting intelligent extraterrestrial life.