
Michael Vassar: Unchecked AI Will Bring On Human Extinction

Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.

Michael Vassar: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted integrated effort, in the long term artificial intelligence will replace humanity.

It’s the natural, all but inevitable consequence of greater-than-human artificial intelligence that it will develop what Steve Omohundro has called basic AI drives, and basic AI drives boil down to properties of any goal-directed system. Obedience to Von Neumann-Morgenstern decision theory suggests that one ought to do the things one expects to have the best outcomes according to some value function. And that value function uniquely specifies some configuration of matter in the universe. So unless the value function that is built into an AI implicitly specifies a configuration of matter in the universe that conforms to our values, which would require a great deal of deliberate planning, then given sufficient power we should expect an AI to reconfigure the universe in a manner that does not preserve our values. As far as I can tell, this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about.
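
Stated compactly, and in my own notation rather than Vassar's, the decision-theoretic step he is gesturing at runs as follows: a Von Neumann-Morgenstern-rational agent with value function $U$ over outcomes $o$ chooses the action

$$a^{*} \;=\; \arg\max_{a}\ \mathbb{E}\big[\,U(o)\mid a\,\big] \;=\; \arg\max_{a}\ \sum_{o} P(o \mid a)\, U(o).$$

Unless $U$ happens to rank human-compatible configurations of matter above every alternative, an agent powerful enough to optimize this expression has no reason to leave those configurations in place.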

Therefore, I conclude that the major global catastrophic threat to humanity is not AI, but rather the absence of social and intellectual frameworks for people to quickly and easily converge on analytically compelling conclusions. Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago and wrote about them intelligently, in a manner that ought to be sufficiently compelling to convince any thoughtful and open-minded person. By 10 years ago, practically everything that is said in Superintelligence had been developed intellectually into a form that even a person who was more skeptical and not willing to think for themselves, but who was willing to listen to other people’s thoughts and merely scrutinize them critically, ought to have been convinced by. But instead Bostrom had to spend 10 more years becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people (still a minority of the world, though essentially most of the analytically capable people) onto the right page on a topic that is, from a philosophy perspective, about as difficult as the question Plato raises in The Republic of how it is possible for an object to be bigger than one thing and smaller than another even though bigness and smallness are opposites. We are talking about completely trivial conclusions, and we are talking about the world’s greatest minds failing to adopt these conclusions when they are laid out analytically until an enormous body of prestige is placed behind them.

And as far as I can tell, most of the problems that humanity faces now and in the future are not going to be analytically tractable and analytically compelling the way risk from AI is analytically tractable and analytically compelling.  Risks associated with biotechnologies, risks associated with economic issues — these sorts of risks are a lot less likely to cause human extinction within a few years than AI. But they are more immediate and they are much, much, much more complicated. The technical difficulty of creating institutions that are capable of thinking about AI risk is so enormously high compared to the analytical abilities of existing institutions, demonstrated by existing institutions' failure to reach the trivially correct, easy conclusions about AI risk, that existing institutions are compellingly not qualified to think about these issues and ought not to do so. But it is a very high priority for humanity. I think in the long run, it is the highest priority for humanity that we create institutions that are capable of digesting and integrating both logical argument and empirical evidence and figuring out what things are important and true not just in trivial cases like AI, but in harder cases.

Directed/Produced by Jonathan Fowler, Elizabeth Rodd, and Dillon Fitton

Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity. The only thing that could save us is if due caution were observed and a framework put in place to prevent such an outcome. Yet Vassar notes that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.

Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

Videos
  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.

A new hydrogel might be strong enough for knee replacements

Duke University researchers might have solved a half-century-old problem.

Photo by Alexander Hassenstein/Getty Images
Technology & Innovation
  • Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
  • The blend of three polymers provides enough flexibility and durability to mimic the knee.
  • The next step is to test this hydrogel in sheep; human use is at least three years away.

Hints of the 4th dimension have been detected by physicists

What would it be like to experience the 4th dimension?

Two different experiments show hints of a 4th spatial dimension. Credit: Zilberberg Group / ETH Zürich
Technology & Innovation

Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. By dimensions, of course, we mean length, width, and height, and when people speak of a fourth dimension they usually mean space-time. But here physicists mean a fourth spatial dimension beyond the normal three, not a parallel universe, which is what such dimensions are often mistaken for in popular sci-fi shows.


Predicting PTSD symptoms becomes possible with a new test

An algorithm may allow doctors to identify candidates for early PTSD intervention after traumatic ER visits.

Image source: camillo jimenez/Unsplash
Technology & Innovation
  • 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
  • Early treatment is available but there's been no way to tell who needs it.
  • Using clinical data already being collected, machine learning can identify who's at risk.

The psychological scars a traumatic experience can leave behind may have a more profound effect on a person than the original traumatic experience. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.

In the U.S., some 30 million patients are treated annually in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent admissions to the ED with the onset of COVID-19 symptoms. Health experts predict that some 10 percent to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there has been no reliable way to identify those most likely to need them.

That may now have changed. A multi-disciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.

70 data points and machine learning

Image: nurse wrapping a patient's arm (source: Creators Collective/Unsplash)

Study lead author Katharina Schultebraucks of Columbia University Vagelos College of Physicians and Surgeons says:

"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."

The new PTSD test uses machine learning on 70 clinical data points, plus a clinical stress-level assessment, to produce a PTSD score that identifies an individual's risk of developing the condition.

Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
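
To make the mechanics concrete, here is a minimal sketch of how such a risk score could be computed from routinely collected ED data. The feature names, the choice of gradient-boosted trees, and the function names below are illustrative assumptions on my part, not the study's actual variables or model, which are described in the Nature Medicine paper.

```python
# Illustrative sketch only: the features, model choice, and names are assumptions,
# not the published study's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical subset of the ~70 routinely collected ED variables.
FEATURES = [
    "cortisol_level",          # stress hormone
    "crp_level",               # inflammatory signal
    "systolic_bp",             # blood pressure
    "anxiety_score",           # brief psychological stress questionnaire
    "heart_rate",
    "injury_severity_score",
]

def train_risk_model(records: pd.DataFrame):
    """Fit a classifier on ED records labeled with a one-year PTSD outcome."""
    X = records[FEATURES].to_numpy()
    y = records["developed_ptsd"].to_numpy()   # 1 = developed PTSD within a year
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train, y_train)
    return model, X_test, y_test

def ptsd_risk_score(model, patient_row: pd.Series) -> float:
    """Return the predicted probability (0 to 1) that this patient develops PTSD."""
    x = patient_row[FEATURES].to_numpy().reshape(1, -1)
    return float(model.predict_proba(x)[0, 1])
```

A score above some clinically validated cutoff would then flag the patient for follow-up planning at discharge.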

Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.

Of this cohort, 90 percent of those predicted to be at high risk developed long-lasting PTSD symptoms within a year of the initial traumatic event — just 5 percent of people who never developed PTSD symptoms had been erroneously identified as being at risk.

On the other side of the coin, 29 percent of individuals were "false negatives": tagged by the algorithm as not being at risk of PTSD but going on to develop symptoms.
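
To see how those three figures relate to a screening model's raw predictions, the short sketch below computes positive predictive value, false-positive rate, and false-negative rate from confusion-matrix counts. The counts are invented for illustration, chosen only so the output roughly mirrors the percentages reported above; they are not the study's actual numbers.

```python
# Illustrative only: the counts below are invented, not the study's data.
def screening_metrics(tp: int, fp: int, fn: int, tn: int):
    """Summarize a binary risk screen from its confusion-matrix counts."""
    ppv = tp / (tp + fp)                  # share of flagged patients who develop PTSD
    false_positive_rate = fp / (fp + tn)  # share of never-PTSD patients wrongly flagged
    false_negative_rate = fn / (fn + tp)  # share of eventual PTSD patients the screen missed
    return ppv, false_positive_rate, false_negative_rate

ppv, fpr, fnr = screening_metrics(tp=90, fp=10, fn=37, tn=190)
print(f"PPV: {ppv:.0%}, false-positive rate: {fpr:.0%}, false-negative rate: {fnr:.0%}")
# -> PPV: 90%, false-positive rate: 5%, false-negative rate: 29%
```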

Going forward

Image: person leaning their head on another's shoulder (source: Külli Kittus/Unsplash)

Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that, "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."

"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.

The researchers also plan to test how well their algorithm predicts PTSD in people whose trauma comes in the form of health events such as heart attacks and strokes, as opposed to the traumatic injuries typically seen in the emergency department.
