Why A.I. is a big fat lie
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
A.I. is a big fat lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The better, more precise term would usually be machine learning – which is genuinely powerful, and everyone oughta be excited about it.
On the other hand, AI does provide some great material for nerdy jokes. So put on your skepticism hat, it's time for an AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree – yeehaw!
3 main points
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will rise up of their own volition and eradicate humanity holds no merit.
Neural networks for the win
In the movie "Terminator 2: Judgment Day," the titular robot says, "My CPU is a neural net processor, a learning computer." The neural network of which that famous robot speaks is actually a real kind of machine learning method. A neural network is a way to depict a complex mathematical formula, organized into layers. This formula can be trained to do things like recognize images for self-driving cars. For example, watch several seconds of a neural network performing object recognition.
What you see it doing there is truly amazing. The network's identifying all those objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible. With deep learning, the network is quite literally deeper – more of those layers. However, even way way back in 1997, the first time I taught the machine learning course, neural networks were already steering self-driving cars, in limited contexts, and we even had our students apply them for face recognition as a homework assignment.
The architecture for a simple neural network with four layers
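To make the "organized into layers" idea a bit more concrete, here's a minimal sketch of what such a four-layer network computes when handed an input: each layer is just a weighted sum of the previous layer's outputs, passed through a simple nonlinearity. The layer sizes and random weights below are purely illustrative stand-ins – a real object-recognition network is trained rather than random, and is vastly larger.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity applied after each layer's weighted sum
    return np.maximum(0, x)

def forward(x, weights, biases):
    """Run an input through the layers, one weighted sum + nonlinearity at a time."""
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(W @ activation + b)
    return activation

# Hypothetical layer sizes for a four-layer network: 4 inputs, two hidden layers, 3 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

print(forward(rng.normal(size=4), weights, biases))
```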
But the more recent improvements are uncanny, boosting neural networks' power for many industrial applications. So, we've even launched a new conference, Deep Learning World, which covers the commercial deployment of deep learning. It runs alongside our long-standing machine learning conference series, Predictive Analytics World.
Supervised machine learning requires labeled data
So, with machines just getting better and better at humanlike tasks, doesn't that mean they're getting smarter and smarter, moving towards human intelligence?
No. It can get really, really good at certain tasks, but only when there's the right data from which to learn. For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled. It needed those examples to learn to recognize those kinds of objects. This is called supervised machine learning: when there is pre-labeled training data. The learning process is guided or "supervised" by the labeled examples. It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time. That's the learning process. And the only way it knows the neural network is improving or "learning" is by testing it on those labeled examples. Without labeled data, it couldn't recognize its own improvements so it wouldn't know to stick with each improvement along the way. Supervised machine learning is the most common form of machine learning.
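To give a feel for that supervised loop, here's a tiny, hypothetical sketch in Python. It isn't a neural network and it isn't how real training is implemented (real systems use gradient-based updates) – it's just a stand-in model whose tweaks are kept only when the labeled examples confirm an improvement, which is exactly the point above.

```python
import random

random.seed(0)

# Toy labeled dataset (made up for illustration): label is 1 when x1 + 2*x2 > 0.
examples = []
for _ in range(200):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    examples.append(((x1, x2), int(x1 + 2 * x2 > 0)))

def predict(weights, features):
    # A tiny linear "model": weighted sum of the features, thresholded at zero.
    return int(sum(w * f for w, f in zip(weights, features)) > 0)

def accuracy(weights):
    # The labels are the only yardstick the learner has for judging "better."
    return sum(predict(weights, x) == y for x, y in examples) / len(examples)

weights, best = [0.0, 0.0], 0.0
for _ in range(2000):
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    score = accuracy(candidate)
    if score > best:  # keep a tweak only if the labeled examples say it helped
        weights, best = candidate, score

print(f"accuracy on the labeled examples: {best:.2%}")
```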
Here's another example. In 2011, IBM's Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy. I'm a big fan. This was by far the most amazing thing I've seen a computer do – more impressive than anything I'd seen during six years of graduate school in natural language understanding research. Here's a 30-second clip of Watson answering three questions.
To be clear, the computer didn't actually hear the spoken questions but rather was fed each question as typed text. But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best "intelligent-like" thing I've ever seen from a computer.
But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with their own correct answer.
At the core, the trick was to turn every question into a yes/no prediction: "Will such-n-such turn out to be the answer to this question?" Yes or no. If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident "yes." For example, "Is 'Abraham Lincoln' the answer to 'Who was the first president?'" No. "Is 'George Washington'?" Yes! Now the machine has its answer and spits it out.
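Here's a toy sketch of that candidate-ranking trick in Python. The confidence scores below are hard-coded stand-ins – in Watson's case a model trained on those 25,000 labeled questions produced them – but the shape of the idea is the same: ask "is this the answer?" for every candidate and keep the most confident yes.

```python
def yes_no_confidence(question: str, candidate: str) -> float:
    """Hypothetical stand-in for a trained model estimating 'will this candidate be the answer?'"""
    # In a real system this score would come from a model trained on labeled question/answer pairs.
    toy_scores = {
        "Abraham Lincoln": 0.11,
        "George Washington": 0.92,
        "Benjamin Franklin": 0.07,
    }
    return toy_scores.get(candidate, 0.0)

def answer(question: str, candidates: list[str]) -> str:
    # Try every option and keep whichever one gets the most confident "yes."
    return max(candidates, key=lambda c: yes_no_confidence(question, c))

print(answer("Who was the first president?",
             ["Abraham Lincoln", "George Washington", "Benjamin Franklin"]))
```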
Computers that can talk like humans
And there's another area of language use that also has plentiful labeled data: machine translation. Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
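In machine learning terms, each sentence pair is a labeled example – the English sentence is the input, and its Japanese translation plays the role of the label. A translation training set is, at heart, a very long list of such pairs; the handful below are made up just to show the shape of the data.

```python
# A toy parallel corpus (illustrative only): (English sentence, its Japanese translation).
# Real translation systems learn from many millions of pairs like these.
parallel_corpus = [
    ("Good morning.", "おはようございます。"),
    ("Thank you very much.", "どうもありがとうございます。"),
    ("Where is the station?", "駅はどこですか。"),
]

for english, japanese in parallel_corpus:
    print(f"{english}  ->  {japanese}")
```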
In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning. Go try it out – translate a letter to your friend or relative who has a different first language than you. I use it a lot myself.
On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity. There's no known roadmap to fluency for our silicon sisters and brothers. When we humans understand one another, underneath all the words and somewhat logical grammatical rules is "general common sense and reasoning." You can't work with language without that very particular human skill. Which is a broad, unwieldy, amorphous thing we humans amazingly have.
So our hopes and dreams of talking computers are dashed because, unfortunately, there's no labeled data for "talking like a person." You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer. But the general notion of "talking like a human" is not a well-defined problem. Computers can only solve problems that are precisely defined.
So we can't leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001's evil HAL computer, or the friendly, helpful ship computer in Star Trek. You can converse with those machines in English very much like you would with a human. It's easy. Ya just have to be a character in a science fiction movie.
Intelligence is subjective, so A.I. has no real definition
Now, if you think you don't already know enough about AI, you're wrong. There is nothing to know, because it isn't actually a thing. There's literally no meaningful definition whatsoever. AI poses as a field, but it's actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to "smart computer." I must warn you, do not look up "self-referential" in the dictionary. You'll get stuck in an infinite loop.
Many definitions are even more circular than "smart computer," if that's possible. They just flat out use the word "intelligence" itself within the definition of AI, like "intelligence demonstrated by a machine."
If you've assumed there are more subtle shades of meaning at hand, surprise – there aren't. There's no way to resolve how utterly subjective the word "intelligence" is. For computers and engineering, "intelligence" is an arbitrary concept, irrelevant to any precise goal. All attempts to define AI fail to solve its vagueness.
Now, in practice the word is often just – confusingly – used as a synonym for machine learning. But as for AI as its own concept, most proposed definitions are variations of the following three:
1) AI is getting a computer to think like a human. Mimic human cognition. Now, we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.
2) AI is getting a computer to act like a human. Mimic human behavior. Cause if it walks like a duck and talks like a duck... But it doesn't and it can't and we're way too sophisticated and complex to fully understand ourselves, let alone translate that understanding into computer code. Besides, fooling people into thinking a computer in a chatroom is actually a human – that's the famous Turing Test for machine intelligence – is an arbitrary accomplishment and it's a moving target as we humans continually become wiser to the trickery used to fool us.
3) AI is getting computers to solve hard problems. Get really good at tasks that seem to require "intelligence" or "human-level" capability, such as driving a car, recognizing human faces, or mastering chess. But now that computers can do them, these tasks don't seem so intelligent after all. Everything a computer does is just mechanical and well understood and in that way mundane. Once the computer can do it, it's no longer so impressive and it loses its charm. A computer scientist named Larry Tesler suggested we define intelligence as "whatever machines haven't done yet." Humorous! A moving-target definition that defines itself out of existence.
By the way, the points in this article also apply to the term "cognitive computing," which is another poorly-defined term coined to allege a relationship between technology and human cognition.
The logical fallacy of believing in A.I.'s inevitability
The thing is, "artificial intelligence" itself is a lie. Just evoking that buzzword automatically insinuates that technological advancement is making its way toward the ability to reason like people. To gain humanlike "common sense." That's a powerful brand. But it's an empty promise. Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.
Now, some may respond to me, "Isn't inspired, visionary ambition a good thing? Imagination propels us and unknown horizons beckon us!" Arthur C. Clarke, the author of 2001, made a great point: "Any sufficiently advanced technology is indistinguishable from magic." I agree. However, that does not mean any and all "magic" we can imagine – or include in science fiction – could eventually be achieved by technology. Just 'cause it's in a movie doesn't mean it's gonna happen. AI evangelists often invoke Arthur's point – but they've got the logic reversed. My iPhone seems very "Star Trek" to me, but that's not an argument everything on Star Trek is gonna come true. The fact that creative fiction writers can make shows like Westworld is not at all evidence that stuff like that could happen.
Now, maybe I'm being a buzzkill, but actually I'm not. Let me put it this way. The uniqueness of humans and the real advancements of machine learning are each already more than amazing and exciting enough to keep us entertained. We don't need fairy tales – especially ones that mislead.
Sophia: A.I.'s most notoriously fraudulent publicity stunt
The star of this fairy tale, the leading role of "The Princess" is played by Sophia, a product of Hanson Robotics and AI's most notorious fraudulent publicity stunt. This robot has applied her artificial grace and charm to hoodwink the media. Jimmy Fallon and other interviewers have hosted her – it, I mean have hosted it. But when it "converses," it's all scripts and canned dialogue – misrepresented as spontaneous conversation – and in some contexts, rudimentary chatbot-level responsiveness.
Believe it or not, three fashion magazines have featured Sophia on their cover, and, ever goofier and sillier, the country Saudi Arabia officially granted it citizenship. For real. The first robot citizen. I'm actually a little upset about this, 'cause my microwave and pet rock have also applied for citizenship but still no word.
Sophia is a modern-day Mechanical Turk – which was an 18th century hoax that fooled the likes of Napoleon Bonaparte and Benjamin Franklin into believing they'd just lost a game of chess to a machine. A mannequin would move the chess pieces and the victims wouldn't notice there was actually a small human chess expert hidden inside a cabinet below the chess board.
In a modern-day parallel, Amazon has an online service you use to hire workers to perform many small tasks that require human judgment, like choosing the nicest-looking of several photographs. It's named Amazon Mechanical Turk, and its slogan is "Artificial Artificial Intelligence." Which reminds me of this great vegetarian restaurant with "mock mock duck" on the menu – I swear, it tastes exactly like mock duck. Hey, if it talks like a duck, and it tastes like a duck...
Yes indeed, the very best fake AI is humans. In 1965, when NASA was defending the idea of sending humans to space, they put it this way: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor." I dunno. I think there's some skill in it. ;-)
The myth of dangerous superintelligence
Anyway, as for Sophia, mass hysteria, right? Well, it gets worse: Claims that AI presents an existential threat to the human race. From the most seemingly credible sources, the most elite of tech celebrities, comes a doomsday vision of homicidal robots and killer computers. None other than Bill Gates, Elon Musk, and even the late, great Stephen Hawking have jumped on the "superintelligence singularity" bandwagon. They believe machines will achieve a degree of general competence that empowers the machines to improve their own general competence – so much so that this will then quickly escalate past human intelligence, and do so at the lightning speed of computers, a speed the computers themselves will continue to improve by virtue of their superintelligence, and before you know it you have a system or entity so powerful that the slightest misalignment of objectives could wipe out the human race. Like if we naively commanded it to manufacture as many rubber chickens as possible, it might invent an entire new high-speed industry that can make 40 trillion rubber chickens but that happens to result in the extinction of Homo sapiens as a side effect. Well, at least it would be easier to get tickets for Hamilton.
There are two problems with this theory. First, it's so compellingly dramatic that it's gonna ruin movies. If the best bad guy is always a robot instead of a human, what about Nurse Ratched and Norman Bates? I need my Hannibal Lecter! "The best bad guy," by the way, is an oxymoron. And so is "artificial intelligence." Just sayin'.
But it is true: Robopocalypse is definitely coming. Soon. I'm totally serious, I swear. Michael Bay – of the "Transformers" movies – is currently directing the movie, based on the novel of the same name, as we speak. Fasten your gosh darn seatbelts, people, 'cause, if "Robopocalypse" isn't in 3D, you were born in the wrong parallel universe.
Oh yeah, and the second problem with the AI doomsday theory is that it's ludicrous. AI is so smart it's gonna kill everyone by accident? Really really stupid superintelligence? That sounds like a contradiction.
To be more precise, the real problem is that the theory presumes that technological advancements move us along a path toward humanlike "thinking" capabilities. But they don't. It's not headed in that direction. I'll come back to that point again in a minute – first, a bit more on how widely this apocalyptic theory has radiated.
A widespread belief in superintelligence
The Kool-Aid these high-tech royalty drink, the go-to book that sets the foundation, is the New York Times bestseller "Superintelligence," by Nick Bostrom, who's a professor of applied ethics at Oxford University. The book mongers the fear and fans the flames, if not igniting the fire in the first place for many people. It explores how we might "make an intelligence explosion survivable." The Guardian newspaper ran a headline, "Artificial intelligence: 'We're like children playing with a bomb'," and Newsweek: "Artificial Intelligence Is Coming, and It Could Wipe Us Out," both headlines obediently quoting Bostrom himself.
Bill Gates "highly recommends" the book, Elon Musk said AI is "vastly more risky than North Korea" – as Fortune Magazine repeated in a headline – and, quoting Stephen Hawking, the BBC ran a headline, "'AI could spell end of the human race'."
In a TED talk that's been viewed 5 million times (across platforms), the bestselling author and podcast intellectual Sam Harris states with supreme confidence, "At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves."
Both he and Bostrom show the audience an intelligence spectrum during their TED talks – here's the one by Bostrom:
What happens when our computers get smarter than we are? | Nick Bostrom
You can see as we move along the path from left to right we pass a mouse, a chimp, a village idiot, and then the very smart theoretical physicist Ed Witten. He's relatively close to the idiot, because even an idiot human is much smarter than a chimp, relatively speaking. You can see the arrow just above the spectrum showing that "AI" progresses in that same direction, along to the right. At the very rightmost position is Bostrom himself, which is either just an accident of photography, or proof that he himself is an AI robot.
In fact, here's a 13-second clip of the moment that Bill Gates first brought Bostrom to life.
Oops, that was the wrong clip – uh, that was Dr. Frankenstein, but, ya know, same scenario.
A falsely conceived "spectrum of intelligence"
Anyway, that falsely-conceived intelligence spectrum is the problem. I've read the book and many of the interviews and watched the talks and pretty much all the believers intrinsically build on an erroneous presumption that "smartness" or "intelligence" falls more or less along a single, one-dimensional spectrum. They presume that the more adept machines become at more and more challenging tasks, the higher they will rank on this scale, eventually surpassing humans.
But machine learning has us marching along a different path. We're moving fast, and we'll likely go very far, but we're going in a different direction, only tangentially related to human capabilities.
The trick is to take a moment to think about this difference. Our own personal experience of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from ourselves beneath a veil of a conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us towards in any meaningful way.
Thinking abstractly often feels uncomplicated. We draw visuals in our mind, like a not-to-scale map of a city we're navigating, or a "space" of products that two large companies are competing to sell, with each company dominating in some areas but not in others... or, when thinking about AI, the mistaken vision that increasingly adept capabilities – both intellectual and computational – all fall along the same, somewhat narrow path.
Now, Bostrom rightly emphasizes that we should not anthropomorphize what intelligent machines may be like in the future. It's not human, so it's hard to speculate on the specifics, and perhaps it will seem more like a space alien's intelligence. But what Bostrom and his followers aren't seeing is that, since they believe technology advances along a spectrum that includes and then transcends human cognition, the spectrum itself as they've conceived it is anthropomorphic. It has humanlike qualities built in. Now, your common sense reasoning may seem to you like a "natural stage" of any sort of intellectual development, but that's a very human-centric perspective. Your common sense is intricate and very, very particular. It's far beyond our grasp – for anyone – to formally define a "spectrum of intelligence" that includes human cognition on it. Our brains are spectacularly multi-faceted and adept, in a very arcane way.
Machines progress along a different spectrum
Machine learning actually does work by defining a kind of spectrum, but only for an extremely limited sort of trajectory – only for tasks that have labeled data, such as identifying objects in images. With labeled data, you can compare and rank various attempts to solve the problem. The computer uses the data to measure how well it does. Like, one neural network might correctly identify 90% of the trucks in the images and then a variation after some improvements might get 95%.
Getting better and better at a specific task like that obviously doesn't lead to general common sense reasoning capabilities. We're not on that trajectory, so the fears should be allayed. The machine isn't going to get to a human-like level where it then figures out how to propel itself into superintelligence. No, it's just gonna keep getting better at identifying objects, that's all.
Intelligence isn't a Platonic ideal that exists separately from humans, waiting to be discovered. It's not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That's a ghost story.
It might feel tempting to believe that increased complexity leads to intelligence. After all, computers are incredibly general-purpose – they can basically do any task, if only we can figure out how to program them to do that task. And we're getting them to do more and more complex things. But just because they could do anything doesn't mean they will spontaneously do everything we imagine they might.
No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain "general common sense reasoning." Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word "intelligence" might apply to computers.
Don't sell, buy, or regulate on A.I.
Machines will remain fundamentally under our control. Computer errors will kill – people will die from autonomous vehicles and medical automation – but not on a catastrophic level, unless by the intentional design of human cyber attackers. When a misstep does occur, we take the system offline and fix it.
Now, the aforementioned techno-celebrity believers are true intellectuals and are truly accomplished as entrepreneurs, engineers, and thought leaders in their respective fields. But they aren't machine learning experts. None of them are. When it comes to their AI pontificating, it would truly be better for everyone if they published their thoughts as blockbuster movie scripts rather than earnest futurism.
It's time for the term "AI" to be "terminated." Mean what you say and say what you mean. If you're talking about machine learning, call it machine learning. The buzzword "AI" is doing more harm than good. It may sometimes help with publicity, but to at least the same degree, it misleads. AI isn't a thing. It's vaporware. Don't sell it and don't buy it.
And most importantly, do not regulate on "AI"! Technology greatly needs regulation in certain arenas, for example, to address bias in algorithmic decision-making and the development of autonomous weapons – which often use machine learning – so clarity is absolutely critical in these discussions. Using the imprecise, misleading term "artificial intelligence" is gravely detrimental to the effectiveness and credibility of any initiative that regulates technology. Regulation is already hard enough without muddying the waters.
Want more of Dr. Data?
Click here to view more episodes and to sign up for future episodes of The Dr. Data Show.