The biggest A.I. risks: Superintelligence and the elite silos

When it comes to raising superintelligent A.I., kindness may be our best bet.

BEN GOERTZEL: We can have no guarantee that a superintelligent AI is going to do what we want. Once we're creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think that we could rigorously control what it does. It may discover aspects of the universe that we don't even imagine at this point.

However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that's imbued with compassion, love, and understanding, and if we raise the young AGI to fully understand human values and human culture, then we're maximizing the odds that as this AGI gets beyond our rigorous control, at least its own self-modification and evolution is imbued with human values and culture, and with compassion and connection. So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn't understand even what we're about. And I would rather have an AGI that was doing good works, like advancing science and medicine and doing elder care and education, become superintelligent than an AGI that was being, for example, a spy system, a killer drone coordination system, or an advertising agency. So even when you don't have a full guarantee, I think we can do things that commonsensically will bias the odds in a positive way.

Now, in terms of nearer-term risks regarding AI, I think we now have a somewhat unpleasant situation where much of the world's data, including personal data about all of us and our bodies and our minds and our relationships and our tastes, and much of the world's AI firepower are held by a few large corporations, which are acting in close concert with a few large governments. In China the connection between big tech and the government apparatus is very clear, but it exists in the U.S. as well. There was a big noise about Amazon's new office; well, 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon. There could be a nice big data pipe there if they want. We in the U.S. as well have very close connections between big tech and government; anyone can Google "Eric Schmidt versus NSA." So there's a few big companies with close government connections hoarding everyone's data, developing AI processing power, and hiring most of the AI PhDs, and it's not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paper clips. And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way.

So as a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I'm also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there. These young Ethiopians aren't going to get a job with Google, Facebook, Tencent, or Baidu, except in the very rare cases when they manage to get a work visa to one of those countries somehow. And many of the AI applications of acute interest there, say AI for analyzing agriculture and preventing agricultural disease, or AI for credit scoring for the unbanked to enable microfinance, AI problems of specific interest in sub-Saharan Africa, don't get a heck of a lot of attention these days. AI wizardry from young developers there doesn't have a heck of a lot of market these days, so you've got both a lot of the market and a lot of the developer community that's sort of shut out by the siloing of AI inside a few large tech companies and military organizations. This is a humanitarian and ethical problem, because there's a lot of value being left on the table and a lot of value not being delivered, but it could also become a different sort of crisis, because if you have a whole bunch of brilliant young hackers throughout the developing world who aren't able to fully enter into the world economy, there are a lot of things less pleasant than working for Google or Tencent that these young hackers could choose to spend their time on. So I think getting the whole world fully pulled into the AI economy, in terms of developers being able to monetize their code and application developers having an easy way to apply AI to the problems of local interest to them, is ethically positive right now, in terms of doing good and in terms of diverting effort away from people doing bad things out of frustration.

  • We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.
  • What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? Raising it in a way that's imbued with compassion and understanding, says Goertzel.
  • To limit "people doing bad things out of frustration," it may be advantageous for the entire world to be plugged into the A.I. economy so that developers, from whatever country, can monetize their code.

Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.

A new hydrogel might be strong enough for knee replacements

Duke University researchers might have solved a half-century-old problem.

  • Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
  • The blend of three polymers provides enough flexibility and durability to mimic the knee.
  • The next step is to test the hydrogel in sheep; human use is at least three years away.

Hints of the 4th dimension have been detected by physicists

What would it be like to experience the 4th dimension?

Two different experiments show hints of a 4th spatial dimension. Credit: Zilberberg Group / ETH Zürich

Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. By dimensions, we mean length, width, and height; generally speaking, when we talk about a fourth dimension, it's considered space-time. But here physicists mean a spatial dimension beyond the normal three, not a parallel universe, which is how such dimensions are often mischaracterized in popular sci-fi shows.


Predicting PTSD symptoms becomes possible with a new test

An algorithm may allow doctors to assess PTSD candidates for early intervention after traumatic ER visits.

  • 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
  • Early treatment is available, but there's been no way to tell who needs it.
  • Using clinical data already being collected, machine learning can identify who's at risk.

The psychological scars a traumatic experience leaves behind can affect a person more profoundly than the original trauma itself. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.

In the U.S., some 30 million patients are treated annually in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent admissions to the ED with the onset of COVID-19 symptoms. Health experts predict that some 10 to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there's been no reliable way to identify those most likely to need them.

That may now have changed. A multidisciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.

70 data points and machine learning

Study lead author Katharina Schultebraucks of Columbia University's Vagelos College of Physicians and Surgeons says:

"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."

The new PTSD test uses machine learning on 70 clinical data points, plus a clinical stress-level assessment, to produce a PTSD score that identifies an individual's risk of developing the condition.

Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
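
To make the approach concrete, here is a minimal sketch in Python of how a risk score like this can be built from tabular clinical features. Everything in it is illustrative: the synthetic data, the gradient-boosting model, and the 0.5 decision threshold are assumptions for the sake of the example, not the study's actual feature set, algorithm, or cutoff.

```python
# Minimal sketch of a clinical risk-score classifier, NOT the study's actual model.
# Assumptions: synthetic data, a gradient-boosting model, and an arbitrary
# decision threshold chosen purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_PATIENTS, N_FEATURES = 600, 70  # ~70 clinical data points per patient

# Synthetic stand-ins for routinely collected ED measures
# (stress hormones, inflammatory markers, blood pressure, anxiety scores, ...).
X = rng.normal(size=(N_PATIENTS, N_FEATURES))

# Synthetic outcome: PTSD symptoms within a year, loosely tied to two features.
logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 1.5
y = (rng.random(N_PATIENTS) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The "PTSD score": a predicted probability of developing symptoms.
risk_scores = model.predict_proba(X_test)[:, 1]

# Flag patients above an illustrative threshold for follow-up care.
HIGH_RISK_THRESHOLD = 0.5
flagged = risk_scores >= HIGH_RISK_THRESHOLD
print(f"{flagged.sum()} of {len(flagged)} test patients flagged as high risk")
```

In practice, a model like this would be trained on real ED records and then validated in independent samples, which is the step the researchers describe below.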

Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.

Of this cohort, 90 percent of those predicted to be at high risk went on to develop long-lasting PTSD symptoms within a year of the initial traumatic event, and just 5 percent of the people who never developed PTSD symptoms had been erroneously identified as being at risk.

On the other side of the coin, 29 percent of individuals were "false negatives": tagged by the algorithm as not being at risk of PTSD but then developing symptoms.
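
For readers who want to see how those three percentages fit together, here is a quick back-of-the-envelope check in Python. The confusion-matrix counts below are hypothetical, chosen only to roughly reproduce the reported rates; the study's actual patient counts are not given here.

```python
# Back-of-the-envelope check of the reported rates, using a HYPOTHETICAL
# confusion matrix chosen to roughly match the article's percentages.
tp = 90   # flagged high risk, developed PTSD symptoms
fp = 10   # flagged high risk, never developed symptoms
fn = 37   # not flagged, but developed symptoms ("false negatives")
tn = 190  # not flagged, never developed symptoms

ppv = tp / (tp + fp)  # of those predicted high risk, share who developed PTSD
fpr = fp / (fp + tn)  # share of never-symptomatic people wrongly flagged
fnr = fn / (fn + tp)  # share of eventual PTSD cases the model missed

print(f"Positive predictive value: {ppv:.0%}")  # ~90%
print(f"False positive rate:       {fpr:.0%}")  # ~5%
print(f"False negative rate:       {fnr:.0%}")  # ~29%
```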

Going forward

Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that "testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."

"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.

The researchers also plan to test how well their algorithm predicts PTSD in people whose traumatic experiences come in the form of health events, such as heart attacks and strokes, rather than visits to the emergency department.
