"We are not luddites," said Nobel Peace Prize laureate Jody Williams at the UN on Monday. "We are not trying to stop the advance of robotics." However, Williams and delegates from other countries spoke of the need to regulate "killer robots" — fully autonomous weapons systems.

Williams won the 1997 Nobel Peace Prize for her advocacy work that led to the banning and clearing of anti-personnel mines. Now she is proposing a ban on weapons systems that can fire "without a human in the loop," the goal of the Campaign to Stop Killer Robots, launched by Human Rights Watch earlier this year.

More than 270 robotics researchers have since signed a statement advocating this ban.

One of the key hurdles that proponents of the ban face is the lack of transparency from many governments that possess semi-autonomous weapons systems.

So how far off are fully autonomous weapons systems?

"Right now our machines are as smart as insects," points out Big Think expert Dr. Michio Kaku.  "Eventually they’ll be smart as mice.  After that they’ll be smart as dogs and cats.  Probably by the end of the century, who knows, they’ll be as smart as monkeys."

When robots reach that level of intelligence, Dr. Kaku warns,

they could become potentially dangerous because monkeys can formulate their own plans.  They don’t have to listen to you.  They can formulate their own strategies, their own goals and I would say therefore at that point let’s put a chip in their brain to shut them off if they get murderous thoughts.  Isaac Asimov advocated something like that with his "Three Laws."  I say hey, put a chip in their brain to shut them off if they start to get murderous.
