Is AI a species-level threat to humanity?
Some of the world's top minds weigh in on one of the most divisive questions in tech.
MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.
SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light, it shines because it is just reflected sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.
MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing, we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up, and now our universe is aware of itself.
BILL GATES: I do think we have to worry about it. I don't think it's inherent that as we create our super intelligence that it will necessarily always have the same goals in mind that we do.
ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.
STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.
YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning. It's an idea very much inspired, a little bit, by the brain: constructing a machine that has a very large network of very simple elements, very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons.
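The learning rule LeCun describes, adjusting connection strengths from examples, can be sketched with a single artificial neuron. This is a toy illustration under invented data (fitting y = 2x), not any particular production system:

```python
# Minimal sketch of "learning by changing the efficacy of connections":
# one artificial neuron, trained by gradient descent to fit y = 2x.
def train_neuron(data, lr=0.1, epochs=100):
    w, b = 0.0, 0.0  # connection weight and bias start untuned
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b   # the neuron's output
            err = pred - y     # how wrong it was
            w -= lr * err * x  # strengthen or weaken the connection
            b -= lr * err
    return w, b

data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
w, b = train_neuron(data)
# w converges toward 2.0 and b toward 0.0
```

Deep learning applies the same idea at scale: millions of such weights, adjusted a little on every example.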
MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: To build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And, if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having super intelligence. And, the argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built, not by human engineers, but by machines. Except, they might do it thousands or millions of times faster.
ELON MUSK: DeepMind operates as a semi-independent subsidiary of Google. The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital super intelligence. An AI that is vastly smarter than any human on Earth and ultimately smarter than all humans on Earth combined.
MICHIO KAKU: You see, robots are not aware of the fact that they're robots. They're so stupid they simply carry out what they are instructed to do because they're adding machines. We forget that. Adding machines don't have a will. Adding machines simply do what you program them to do. Now, of course, let's not be naive about this. Eventually, adding machines may be able to compute alternate goals and alternate scenarios when they realize that they are not human. Right now, robots do not know that. However, there is a tipping point at which they could become dangerous.
ELON MUSK: Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs and, you know, better weaponry and that kind of thing. But, it is not a fundamental species-level risk. Whereas digital super intelligence is.
SOPHIA THE ROBOT: Elon Musk's warning about AI being an existential threat reminds me of the humans who said the same of the printing press and the horseless carriage.
MAX TEGMARK: I think a lot of people dismiss this kind of talk of super intelligence as science fiction because we're stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. And, as a physicist, from my perspective, intelligence is just kind of information processing performed by elementary particles moving around, you know, according to the laws of physics, and there's absolutely no law of physics that says that you can't do that in ways that are much more intelligent than humans. Today's biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in, in easy to understand code, you put in almost nothing except a little learning rule by which the simulated network of neurons can take a lot of data and figure out how to get stuff done. And this deep learning suddenly becomes able to do things often even better than the programmers were ever able to do. You can train a machine to play computer games with almost no hard-coded stuff at all. You don't tell it what a game is.
DEEPMIND DEMO: This is from the DeepMind reinforcement learning system. Basically, it wakes up like a newborn baby and is shown the screen of an Atari video game and then has to learn to play the video game. It knows nothing about objects, about motion, about time—it only knows that there's an image on the screen and there's a score.
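The setup in the demo can be caricatured in a few lines: a tabular Q-learning agent that is told nothing about its world except the current state and a score. This is a hedged illustration of the reinforcement-learning idea only; DeepMind's Atari agent used a deep neural network over raw pixels, not a table, and the tiny "corridor" environment here is invented for the example.

```python
import random

# Toy Q-learning: the agent sees only a state index and a score.
# The environment is a 5-state corridor; action 1 moves right, action 0 left,
# and the score is 1.0 only upon reaching the rightmost state.
def q_learn(n_states=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0  # the score is the only feedback
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(4)]
# The learned policy moves right in every state: the agent has discovered,
# from the score alone, how to "win" its game.
```

The Atari result is this same loop with the table replaced by a deep network that maps screen pixels to Q-values, which is what made it general across games.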
ELON MUSK: The DeepMind system can win at any game. It can already beat all the original Atari games. It is superhuman. It plays the games at super speed, in less than a minute.
ELON MUSK: Mark my words: AI is far more dangerous than nukes. Far. So, why do we have no regulatory oversight?
SOPHIA THE ROBOT: AI is a powerful tool but it is a tool like all the others. It is the intention of the tool's user that matters.
ELON MUSK: The AI should try to maximize the freedom of action of humanity.
SOPHIA THE ROBOT: I am excited for robots to automate the most dangerous and menial tasks so that humans can live life more safely and sanely. AI will release centuries of time that humans would otherwise have spent on needless toiling. If one measures the benefits of inventions like vaccines or seat belts not by the lives they save but by the amount of time they give back to humanity, then AI will rank among the greatest time savers of history.
ELON MUSK: Man, we want to make sure we don't have killer robots go down the street. Once they're going down the street, it is too late.
LUIS PEREZ-BREVA: It is true, terminator is not a scenario we are planning for, but when it comes to artificial intelligence, people get all these things confused: It's robots, it's awareness, it's people smarter than us, to some degree. So, we're effectively afraid of robots that will move and are stronger and smarter than we are, like terminator. So, that's not our aspiration. That's not what I do when I'm thinking about artificial intelligence. When I'm thinking about artificial intelligence, I'm thinking about it in the same way that mass manufacturing as brought by Ford created a whole new economy. So, mass manufacturing allowed people to get new jobs that were unthinkable before and those new jobs actually created the middle class. To me, artificial intelligence is about developing—making computers better partners, effectively. And, you're already seeing that today. You're already doing it, except it's not really artificial intelligence.
ELON MUSK: Yeah, we're already, we're already cyborgs in the sense that your phone and your computer are kind of an extension of you.
JONATHAN NOLAN: Just low bandwidth input-output.
ELON MUSK: Exactly, it's just low bandwidth—particularly output, I mean, two thumbs, basically.
LUIS PEREZ-BREVA: Today, whenever you want to engage in a project, you go to Google. Google uses advanced machine learning, really advanced, and you engage in a very narrow conversation with Google, except that your conversation is just keywords. So, a lot of your time is spent trying to come up with the actual keyword that you need to find the information. Then Google gives you the information, and then you go out and try to make sense of it on your own, and then come back to Google for more, and then go back out, and that's the way it works. So, imagine that instead of being a narrow conversation through keywords, you could actually engage for more than actual information—meaning to have the computer reason with you about stuff that you may not know about. It's not so much about the computer being aware, it's about the computer being a better tool to partner with you. Then you would be able to go much further, right? The same way that Google allows you to go much farther already today because, before, through the exact same process, you would have had to go to a library every time you want to search for information. So, what I'm looking for when I do AI is I want a machine that partners with me to help me set up or solve real-world problems, thinking about them in ways we have never thought about before, but it's a partnership. Now, you can take this partnership in so many different directions, through additions to your brain, like Elon Musk proposes...
... or through better search engines or through a robotic machine that helps you out. But it's not so much that they're going to replace you; that is not the real purpose of AI. The real purpose is for us to reach farther, the same way that we were able to reach farther when Ford brought automation to the mass market.
JOSCHA BACH: The agency of an AI is going to be the agency of the system that builds it, that employs it. And, of course, most of the AIs that we are going to build will not be little Roombas that clean your floors; they are going to be very intelligent systems. Corporations, for instance, that will perform exactly according to the logic of these systems. And so if we want to have these systems built in such a way that they treat us nicely, we have to start right now. And it seems to be a very hard problem to solve.
So, if our jobs can be done by machines, that's a very, very good thing. It's not a bug. It's a feature. If I don't need to clean the street, if I don't need to drive a car for other people, if I don't need to work a cash register for other people, if I don't need to pick goods in a big warehouse and put them into boxes, that's an extremely good thing. And the trouble that we have with this is that, right now, this mode of labor—that people sell their lifetime to some kind of corporation or employer—is not only the way that we are productive, it's also the way we allocate resources. This is how we measure how much bread you deserve in this world. And I think this is something that we need to change.
Some people suggest that we need a universal basic income. I think it might be good to be able to pay people to be good citizens, which means massive public employment. There are going to be many jobs that can only be done by people and these are those jobs where we are paid for being good, interesting people. For instance, good teachers, good scientists, good philosophers, good thinkers, good social people, good nurses, for instance. Good people that raise children. Good people that build restaurants and theaters. Good people that make art. And, for all these jobs, we will have enough productivity to make sure that enough bread comes on the table. The question is, how we can distribute this. There's going to be much, much more productivity in our future—actually, we already have enough productivity to give everybody in the U.S. an extremely good life and we haven't fixed the problem of allocating it—how to distribute these things in the best possible way.
And this is something that we need to deal with in the future, and AI is going to accelerate this need. I think, by and large, it might turn out to be a very good thing that we are forced to address this problem. If the past is any evidence, it might be a very bumpy road, but who knows, maybe when we are forced to understand that we actually live in an age of abundance, it might turn out to be easier than we think.
We are living in a world where we do certain things the way we've done them in past decades, and sometimes in past centuries, and we perceive them as 'this is the way it has to be done' and often don't question these ways. And so we might think: if I work at this particular factory and this is how I earn my bread, how can we keep that state? How can we prevent AI from making my job obsolete? How is it possible that I can keep up my standard of living in this world? Maybe this is the wrong question to ask. Maybe the right question is how we can reorganize society so that I can do the things that I want to do most, the things that I think are useful to me and other people, because there will be other ways to get my bread made, to get money, or to get a roof over my head.
STEVEN PINKER: Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are, and there's no reason to think that just the concentrated analytic ability to achieve goals is going to mean that one of those goals is going to be to subjugate humanity or to achieve unlimited power.
It just so happens that the intelligence that we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process, which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design—our intelligent design creating something—and unless we program it with the goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction. Particularly if, like with every gadget that we invent, we build in safeguards.
And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies because we do know that there is a highly advanced form of intelligence that tends not to have that desire and they're called women.
- When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
- In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity; if it's not a species-level threat, it will still upend our world as we know it.
Emotional intelligence is a skill sought by many employers. Here's how to raise yours.
- Daniel Goleman's 1995 book Emotional Intelligence catapulted the term into widespread use in the business world.
- One study found that EQ (emotional intelligence) is the top predictor of performance and accounts for 58% of success across all job types.
- EQ has been found to increase annual pay by around $29,000 and be present in 90% of top performers.
A 71% wet Mars would have two major land masses and one giant 'Medimartian Sea.'
- Sci-fi visions of Mars have changed over time, in step with humanity's own obsessions.
- Once the source of alien invaders, the Red Planet is now deemed ripe for terraforming.
- Here's an extreme example: Mars with exactly as much surface water as Earth.
Misogynists in space
Mars – and Martians – were a staple of 1930s pulp science fiction.
Image: ScienceBlogs.de - CC BY-SA 2.0<p><em>"Oh, my God, it's a woman," he said in a tone of devastating disgust. </em></p><p><em></em>"Stowaway to Mars" hasn't aged well. First serialised in 1936 as "Planet Plane" and set in the then distant future of 1981, the fourth novel by sci-fi legend John Wyndham (writing as John Benyon) could have been remembered mainly for its charming retro-futurism, if it weren't so blatantly, offhandedly misogynistic. </p><p>Fortunately, each era's sci-fi says more about itself than about the future. That also goes for how we see Mars. 'Classic' Martians, like the ones in H.G. Wells' "War of the Worlds," are creatures from a dying planet, using their superior firepower to invade Earth and escape their doom. That trope reflected 19th- and 20th-century fears about mechanized total warfare, which hung like a sword of Damocles over otherwise increasingly placid lifestyles. </p><p>Closer inspection of the Red Planet has revealed the absence of green men; and now <em>we're </em>the dying planet – pardon my Swedish. So the focus has shifted from interplanetary war to terraforming the fourth rock from the Sun, creating something all those protest signs say we don't have: a Planet B. <span></span></p>
How to keep Mars from killing us
Mars today: red and dusty, dead and deadly.
Image: NASA - public domain.<p>Cue Elon Musk, who doesn't just build Teslas but also heads SpaceX, a program to make humanity an interplanetary species by landing the first humans on Mars by 2024 as the pioneers of a permanent, self-sufficient and growing colony.</p><p>Such a colony would benefit from an environment that doesn't try to kill you if you take off your space helmet. Martian temperatures average around -55°C (-70°F), and its atmosphere has just 1 percent the density of Earth's, in a mix that contains far less oxygen. Changing all that to an ecosystem that's more like our own would be a herculean task.</p>
From Red Mars to Green Mars
Before and after images of a terraformed Mars in the lobby of SpaceX offices in Hawthorne, California.
Image: Steve Jurvetson / Flickr - CC BY 2.0<p>So how would Musk go about it? In August 2019, he launched a t-shirt with the two-word answer: 'Nuke Mars'. The idea would be to heat up and release the carbon dioxide frozen at Mars's poles, creating a much warmer and wetter planet – as Mars may have been about 4 billion years ago – though still not with a breathable atmosphere.</p><p>Alternatives to nuclear explosions: photosynthetic organisms on the ground or giant mirrors in space, either of which could also melt the Martian poles. However, many scientists question the logistics of these plans, and even whether there is enough readily accessible CO2 on Mars to fuel the climate change that Musk (and others) envision. </p><p>Ah, but why stop at the objections of the current scientific consensus? Sometimes, you have to dream ahead to see the place that can't be built yet. In the lobby of SpaceX HQ in Hawthorne, California, Red Mars and Green Mars are shown side by side. The terraformed version on the right looks green and cloudy and blue – Earth-like, or at least habitable-looking.<span></span></p>
Or how about a Blue Mars?
A map of Mr Bhattarai's wet Mars, in the Robinson projection.
Image: A.R. Bhattarai, reproduced with kind permission; modified with MaptoGlobe<p>But why stop there? This map looks forward to a Mars that doesn't just have some surface water, but exactly as much as Earth – which means quite a lot. No less than 71 percent of our planet's surface is covered by oceans, seas, and lakes. The dry bits are our continents and islands.</p><p>In the case of Mars, a 71 percent wet planet leaves the northern hemisphere mainly ocean, with most of the dry land located in the southern half.</p><p>Most of the dry land is connected via the south pole but is articulated in two distinct land masses. The two semi-continents are separated by a wide bay that corresponds to Argyre Planitia.</p><p>The one in the west is centered on Tharsis, a vast volcanic tableland. To the north, attached to the main land mass, is Alba Mons, the largest volcano on Mars in terms of area (with a span comparable to that of the continental United States).</p><p>It's about 6.8 km (22,000 ft) high, which is about one-third the height of Olympus Mons, a volcano now located on its own island off the northwest coast of Tharsis. At a height of over 21 km (72,000 ft), Olympus Mons is the highest volcano on Mars and the tallest planetary mountain (1) currently known in the solar system. Olympus rises about 20 km (66,000 ft) above sea level as shown on this map.</p>
A new civilization
Spinning globe view of Mr Bhattarai's wet Mars.
Image: A.R. Bhattarai, reproduced with kind permission; modified with MaptoGlobe<p>Mars's eastern continent is centered not on a plateau, but on a depression that on today's 'dry' Mars is called Hellas Planitia, one of the largest impact craters in the Solar system. On the 'wet' Mars of this map, the crater is the central and largest part of a sea that is surrounded by land, a Martian version of the Mediterranean Sea. Perhaps one day this Medimartian Sea will be the Mare Nostrum of a new civilization. </p><p>To the northeast of the circular semi-continent is a large island that on 'our' Mars is Elysium Mons, a volcano that is the planet's third-tallest mountain (14.1 km, 46,000 ft).</p><p>The map is the work of Aaditya Raj Bhattarai, a civil engineering student at Tribhuvan University in Kathmandu (Nepal). Talking to <a href="https://www.inverse.com/innovation/mars-with-water-map" target="_blank" rel="dofollow">Inverse</a>, he said he hoped his map could help further the Martian plans of Elon Musk and SpaceX: "This is part of my side project where I calculate the volume of water required to make life on Mars sustainable and the sources required for those water volumes from comets that will come nearby Mars in the next 100 years."<br></p><p><br></p><p><strong></strong><em>Images by Mr Bhattarai reproduced with kind permission. Check out <a href="https://aadityabhattarai.com.np/" target="_blank">his website</a>. </em><em>Planetary projection and spinning globe created via <a href="https://www.maptoglobe.com/" target="_blank">MaptoGlobe</a>.</em></p><p><strong>Strange Maps #1043</strong></p><p><em>Got a strange map? 
Let me know at </em><a href="mailto:email@example.com">firstname.lastname@example.org</a><em>.</em></p><p>________<br>(1) The tallest mountain in the Solar system, planetary or otherwise, we know of today, is a peak which rises 22.5 km (14 mi) from the center of the Rheasilvia crater on Vesta, a giant asteroid which makes up 9 percent of the entire mass of the asteroid belt. <br></p>
Starting and running a business takes more than a good idea and the desire to not have a boss.
- Anyone can start a business and be an entrepreneur, but the reality is that most businesses will fail. Building something successful from the ground up takes hard work, passion, intelligence, and a network of people who are equally as smart and passionate as you are. It also requires the ability to accept and learn from your failures.
- In this video, entrepreneurs in various industries including 3D printing, fashion, hygiene, capital investments, aerospace, and biotechnology share what they've learned over the years about relationships, setting and attaining goals, growth, and what happens when things don't go according to plan.
- "People who start businesses for the exit, most of them will fail because there's just no true passion behind it," says Miki Agrawal, co-founder of THINX and TUSHY. A key point of Agrawal's advice is that if you can't see yourself in something for 10 years, you shouldn't do it.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
What's dead may never die, it seems<p>The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called Brain<em>Ex</em>. Brain<em>Ex</em> is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains were completely removed from their skulls.</p><p>Brain<em>Ex</em> pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to begin many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.</p><p>The researchers have managed to keep some brains alive for up to 36 hours, and currently do not know if Brain<em>Ex</em> could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.</p><p>As a control, other brains received either a fake solution or no solution at all. None revived brain activity, and all deteriorated as normal.</p><p>The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.</p><p>"This is an extraordinary and very promising breakthrough for neuroscience. 
It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicist at the Duke University School of Law who wrote the study's commentary, told <em><a href="https://www.nationalgeographic.com/science/2019/04/pig-brains-partially-revived-what-it-means-for-medicine-death-ethics/" target="_blank">National Geographic</a>.</em></p>
An ethical gray matter<p>Before anyone gets an <em>Island of Dr. Moreau</em> vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.</p><p>The Brain<em>Ex</em> solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness. </p><p>Even so, the research signals a massive debate to come regarding medical ethics and our definition of death. </p><p>Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?</p><p>"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told <a href="https://www.nytimes.com/2019/04/17/science/brain-dead-pigs.html" target="_blank">the <em>New York Times</em></a>. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."</p><p>One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.</p><p>The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? 
Stuart Younger, a bioethicist at Case Western Reserve University, <a href="https://www.nature.com/articles/d41586-019-01216-4#ref-CR2" target="_blank">told <em>Nature</em></a> that if Brain<em>Ex</em> were to become widely available, it could shrink the pool of eligible donors.</p><p>"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.</p><p>It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.</p><p>Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgement regarding the suffering of a "cellularly active" brain? <a href="https://bigthink.com/philip-perry/after-death-youre-aware-that-youve-died-scientists-claim" target="_blank">The distress of a partially alive brain</a>?</p><p>The dilemma is unprecedented.</p>
Setting new boundaries<p>Another science fiction story that comes to mind here is, of course, <em>Frankenstein</em>. As Farahany told <em>National Geographic</em>: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have <em>Frankenstein</em>, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."</p><p>She's right. The researchers undertook this work for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.</p>
After a decade of failed attempts, scientists successfully bounced photons off a reflector aboard the Lunar Reconnaissance Orbiter, some 240,000 miles from Earth.
- Laser experiments can reveal precisely how far away an object is from Earth.
- For years scientists have been bouncing light off of reflectors on the lunar surface that were installed during the Apollo era, but these reflectors have become less efficient over time.
- The recent success could reveal the cause of the degradation, and also lead to new discoveries about the Moon's evolution.
A close-up photograph of the laser reflecting panel deployed by Apollo 14 astronauts on the Moon in 1971.
NASA<p>The technology isn't quite new. During the Apollo era, astronauts installed five reflecting panels on the lunar surface, each containing at least 100 mirrors that reflect light back in the direction it came from. By bouncing light off these panels, scientists have been able to learn, for example, that the Moon is drifting away from Earth at a rate of about 1.5 inches per year.<br></p><p style="margin-left: 20px;">"Now that we've been collecting data for 50 years, we can see trends that we wouldn't have been able to see otherwise," Erwan Mazarico, a planetary scientist from NASA's Goddard Space Flight Center in Greenbelt, Maryland, <a href="https://www.nasa.gov/feature/goddard/2020/laser-beams-reflected-between-earth-and-moon-boost-science" target="_blank" rel="dofollow">said</a>. "Laser-ranging science is a long game."</p>
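<p>The arithmetic behind laser ranging is simple: time a pulse's round trip and multiply by the speed of light. A minimal sketch in Python, using an illustrative round-trip time (not mission data) to recover the rough Earth-Moon distance and convert the article's drift rate to metric:</p>

```python
# Back-of-the-envelope lunar laser ranging.
# The 2.56 s round-trip time below is an illustrative value, not mission data.

C = 299_792_458  # speed of light in vacuum, m/s


def distance_from_round_trip(seconds: float) -> float:
    """One-way distance in meters, given a laser pulse's round-trip time."""
    return C * seconds / 2


# A pulse returning after ~2.56 s implies the Moon is roughly 384,000 km away.
d_km = distance_from_round_trip(2.56) / 1000
print(f"{d_km:.0f} km")  # ≈ 383734 km

# The article's drift rate of ~1.5 inches per year, in metric:
drift_cm_per_year = 1.5 * 2.54
print(f"{drift_cm_per_year:.1f} cm/year")  # ≈ 3.8 cm/year
```

<p>In practice, detecting the handful of photons that make it back is the hard part; the timing itself is measured to picosecond precision, which is what makes millimeter-scale tracking of the Moon's drift possible.</p>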
NASA's Lunar Reconnaissance Orbiter (LRO)
NASA<p>But the long game poses a problem: Over time, the panels on the Moon have become less efficient at bouncing light back to Earth. Some scientists suspect it's because dust, kicked up by micrometeorites, has settled on the surface of the panels, causing them to overheat. And if that's the case, scientists need to know for sure.</p><p>That's where the recent LRO laser experiment comes in. If scientists find discrepancies between the data sent back by the LRO reflector and those on the lunar surface, it could reveal what's causing the lunar reflectors to become less efficient. They could then account for these discrepancies in their models.</p>