The Bulletin of the Atomic Scientists decided to keep the hands of the Doomsday Clock at three minutes to midnight. The decision was meant to express disappointment in the world's failure to take dramatic action to curb climate change and the risk of nuclear disaster. Lower down on the list of potential catastrophes is a worry that disruptive technological advancements are going unchecked.
“It is clear that advances in biotechnology; in artificial intelligence, particularly for use in robotic weapons; and in the cyber realm all have the potential to create global-scale risk,” the group wrote.
The Bulletin recognizes that advances in artificial intelligence have the capacity to do great good for humanity, but also great harm. The field has been growing at a rapid pace, thanks in part to deep learning, and that pace has given the board, along with some of the world's brightest minds, cause to worry.
The fear of a corporate-driven Skynet-like catastrophe has been on the minds of many brilliant thinkers. It has been such a concern for Elon Musk that he helped found OpenAI, a nonprofit artificial intelligence research company. It was established on the belief that it's “important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.” Such an institution would offer a place to develop the technology without the potential for misuse by an unregulated government, scientific, or corporate body.
"Where the technology is pushing conflict is moving so much faster than our systems ability to adapt and regulate it that it’s going to be a real challenge for us the next 10 to 15 years."
Even research done by well-intentioned people can go awry. Scientists may push the boundaries of science without stopping to ask whether it's something they should be doing.
Scientists have pointed out again and again that progress is a double-edged sword. More technological advancement means better standards of living, but it also means we're creating a number of "new ways things can go wrong,” according to Stephen Hawking.
However, Lawrence Krauss isn't convinced there's cause for immediate concern.
“Elon Musk and others who have expressed concern and Stephen Hawking are friends of mine and I understand their potential concerns, but I’m frankly not as concerned about AI in the near term at the very least as many of my friends and colleagues are,” says Krauss, who is the chair of the Bulletin's Board of Sponsors.
“We, of course, have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them,” he said.
For this reason, Hawking has said, “It's important to ensure that these changes are heading in the right directions. In a democratic society, this means that everyone needs to have a basic understanding of science to make informed decisions about the future.”
The Science and Security Board believes the international community should establish an institution to inspect and regulate these emerging technologies and assess their risks. So far, these technologies have developed with little oversight, which is why it falls to society to demand such regulators.
Natalie has been writing professionally for about 6 years. After graduating from Ithaca College with a degree in Feature Writing, she snagged a job at PCMag.com where she had the opportunity to review all the latest consumer gadgets. Since then she has become a writer for hire, freelancing for various websites. In her spare time, you may find her riding her motorcycle, reading YA novels, hiking, or playing video games. Follow her on Twitter: @nat_schumaker