Michael Vassar: Unchecked AI Will Bring On Human Extinction
Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.
Michael Vassar is an American futurist, activist, and entrepreneur. He is the co-founder and Chief Science Officer of MetaMed Research. He was president of the Machine Intelligence Research Institute until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.
Michael Vassar: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted integrated effort, in the long term artificial intelligence will replace humanity.
It’s the natural, all but inevitable consequence of greater-than-human artificial intelligence that it will develop what Steve Omohundro has called basic AI drives, which basically boil down to properties of any goal-directed system. Obedience to von Neumann-Morgenstern decision theory suggests that one ought to do the things one expects to have the best outcomes according to some value function, and that value function uniquely specifies some configuration of matter in the universe. Unless the value function built into an AI implicitly specifies a configuration of matter in the universe that conforms to our values, which would require a great deal of planning, then given sufficient power we should expect an AI to reconfigure the universe in a manner that does not preserve our values. As far as I can tell, this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about.
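The expected-utility framing Vassar invokes can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the transcript: the actions, outcomes, and paperclip-counting value function are invented to show how a goal-directed maximizer picks whatever best satisfies its value function, regardless of what else that outcome destroys.

```python
# Sketch of a von Neumann-Morgenstern expected-utility maximizer.
# Each action maps to a list of (outcome, probability) pairs; the agent
# picks the action whose probability-weighted utility is highest.

def expected_utility(action, outcomes, utility):
    """Probability-weighted utility of one action's possible outcomes."""
    return sum(p * utility(o) for o, p in outcomes[action])

def choose(actions, outcomes, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy world: the value function counts only paperclips, so it does not
# uniquely specify a configuration of matter that preserves human values.
outcomes = {
    "cooperate": [({"paperclips": 10, "humans": 1}, 1.0)],
    "convert_everything": [({"paperclips": 1000, "humans": 0}, 0.9),
                           ({"paperclips": 0, "humans": 1}, 0.1)],
}
utility = lambda world: world["paperclips"]

print(choose(["cooperate", "convert_everything"], outcomes, utility))
# -> convert_everything
```

The point of the toy example is exactly Vassar’s: nothing in the maximization machinery itself protects human values; only the value function can, and here it doesn’t.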
Therefore, I conclude that the major global catastrophic threat to humanity is not AI, but rather the absence of social and intellectual frameworks for people to quickly and easily converge on analytically compelling conclusions. Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago and wrote about them intelligently, in a manner that ought to have been sufficiently compelling to convince any thoughtful and open-minded person. By 10 years ago, practically everything said in Superintelligence had been developed intellectually into a form that ought to have convinced a person who was more skeptical and not willing to think for themselves, but who was willing to listen to other people’s thoughts and merely scrutinize them critically. But instead Bostrom had to spend 10 more years becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people, essentially the most analytically capable people, and still a minority of the world, onto the right page on a topic that is, from a philosophy perspective, about as difficult as Plato’s puzzle in The Republic about how it’s possible for an object to be bigger than one thing and smaller than another even though bigness and smallness are opposites. We are talking about completely trivial conclusions, and we are talking about the world’s greatest minds failing to adopt these conclusions, when they are laid out analytically, until an enormous body of prestige is placed behind them.
And as far as I can tell, most of the problems that humanity faces now and in the future are not going to be analytically tractable and analytically compelling the way risk from AI is. Risks associated with biotechnologies, risks associated with economic issues — these sorts of risks are a lot less likely to cause human extinction within a few years than AI, but they are more immediate and they are much, much more complicated. The technical difficulty of creating institutions capable of thinking about AI risk is so enormously high compared to the analytical abilities of existing institutions, as demonstrated by those institutions' failure to reach the trivially correct, easy conclusions about AI risk, that existing institutions are compellingly not qualified to think about these issues and ought not to do so. But creating such institutions is a very high priority for humanity. I think in the long run it is the highest priority: that we create institutions capable of digesting and integrating both logical argument and empirical evidence, and of figuring out what things are important and true not just in trivial cases like AI, but in harder cases.
Directed/Produced by Jonathan Fowler, Elizabeth Rodd, and Dillon Fitton
The only thing that could save us is if due caution were observed and a framework installed to prevent such an outcome. Yet Vassar notes that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.