The biggest A.I. risks: Superintelligence and the elite silos
When it comes to raising superintelligent A.I., kindness may be our best bet.
BEN GOERTZEL: We can have no guarantee that a superintelligent AI is going to do what we want. Once we're creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think that we could rigorously control what it does. It may discover aspects of the universe that we don't even imagine at this point.
However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that's imbued with compassion, love and understanding, and if we raise the young AGI to fully understand human values and human culture, then we're maximizing the odds that as this AGI gets beyond our rigorous control, at least its own self-modification and evolution is imbued with human values and culture and with compassion and connection. So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn't understand even what we're about. And I would rather have an AGI that was doing good works, like advancing science and medicine and doing elder care and education, become superintelligent than an AGI that was being, for example, a spy system, a killer drone coordination system or an advertising agency. So even when you don't have a full guarantee, I think we can do things that commonsensically will bias the odds in a positive way.
Now, in terms of nearer-term risks regarding AI, I think we now have a somewhat unpleasant situation where much of the world's data, including personal data about all of us and our bodies and our minds and our relationships and our tastes, much of the world's data and much of the world's AI firepower are held by a few large corporations, which are acting in close concert with a few large governments. In China the connection between big tech and the government apparatus is very clear, but it exists in the U.S. as well. I mean, there was a big noise about Amazon's new office: 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon; there could be a nice big data pipe there if they want. We in the U.S. as well have very close connections between big tech and government. Anyone can Google "Eric Schmidt versus NSA" as well. So there's a few big companies with close government connections hoarding everyone's data, developing AI processing power, hiring most of the AI PhDs, and it's not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paper clips. And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way.
So as a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I'm also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there. I mean, these young Ethiopians aren't going to get a job with Google, Facebook, Tencent or Baidu, except in very rare cases when they manage to get a work visa to go to one of those countries somehow. And many of the AI applications of acute interest in those countries, say AI for analyzing agriculture and preventing agricultural disease, or AI for credit scoring for the unbanked to enable microfinance, AI problems of specific interest in sub-Saharan Africa, don't get a heck of a lot of attention these days. AI wizardry from young developers there doesn't have a heck of a lot of market these days, so you've got both a lot of the market and a lot of the developer community that's sort of shut out by the siloing of AI inside a few large tech companies and military organizations. And this is both a humanitarian, ethical problem, because there's a lot of value being left on the table and a lot of value not being delivered, but it also could become a different sort of crisis, because if you have a whole bunch of brilliant young hackers throughout the developing world who aren't able to fully enter into the world economy, there's a lot of other, less pleasant things than work for Google or Tencent that these young hackers could choose to spend their time on. So I think getting the whole world fully pulled into the AI economy, in terms of developers being able to monetize their code and application developers having an easy way to apply AI to the problems of local interest to them, I mean, this is ethically positive right now in terms of doing good and in terms of diverting effort away from people doing bad things out of frustration.
- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it would be "insane" to think we can control what it does.
- What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? Raise it in a way that's imbued with compassion and understanding, says Goertzel.
- One way to limit "people doing bad things out of frustration" is to plug the entire world into the A.I. economy, so that developers, from whatever country, can monetize their code.
Going back to the moon will give us fresh insights about the creation of our solar system.
- July 2019 marks the 50th anniversary of the moon landing — Apollo 11.
- Today, we have a strong scientific case for returning to the moon: the original rock samples we brought back from the moon revolutionized our view of how Earth and the solar system formed. We could now glean even more insights from fresh, chemically unaltered samples.
- NASA plans to send humans to a crater at the moon's south pole because it's safer there and would allow for better communications with people back on Earth.
Pugs and bulldogs are incredibly trendy, but experts have massive animal welfare concerns about these genetically manipulated breeds.
- Pugs, Frenchies, boxers, shih-tzus and other flat-faced dog breeds have been trending for at least the last decade.
- Higher visibility (usually in a celebrity's handbag), an increase in city living (smaller dogs for smaller homes), and possibly even the fine acting of Frank the Pug in 1997's Men in Black may all be behind the trend.
- These small, specialty pure breeds are seen as the pinnacle of cuteness – they have friendly personalities, endearing odd looks, and are perfect for Stranger Things video montages.
Jokesters and serious Area 51 raiders would be met with military force.
- A Facebook joke event to "raid Area 51" has already gained 1,000,000 "going" attendees.
- The U.S. Air Force has issued an official warning to potential "raiders."
- If anyone actually tries to storm an American military base, the use of deadly force is authorized.