Perhaps the Singularity Is a Little Too Near

The latest video from robot developer Boston Dynamics, whose releases always seem to coincide with a spike in Terminator references, has elicited a wide range of strong emotional reactions. The new robot, Handle (so named because it will be handling objects), inspires awe with its speed, agility, strength, and ability to jump. At the same time, the machine's impressiveness stirs up deep-seated fears of robots gone wild.

We don't want the student to become the master.

"This is the debut presentation of what I think will be a nightmare-inducing robot." -Boston Dynamics founder Marc Raibert, introducing Handle at a private event in late January 2017.

It is easy to picture Handle as either:

1. A benevolent robot working alongside human employees in a warehouse. (Bonus: no sore back from lifting all of those heavy boxes.)

2. A weaponized robot deployed by militaries. (I wouldn't want to go up against Handle in a human-vs.-machine version of BattleBots.)

Raibert was right in predicting that Handle would be viewed as nightmare-inducing: a flurry of online comments expressed a certain level of anxiety.

The fear is less about the current state of robots and more about the uncertainty of how they will be developed. It hasn't helped that luminaries such as Stephen Hawking have voiced doubts of their own:

"In short, the rise of powerful AI will either be the best, or the worst thing, ever to happen to humanity. We do not know which yet." -Stephen Hawking, speaking at the Leverhulme Centre for the Future of Intelligence at Cambridge University.

Well, that's reassuring.

Putting aside the concept of machines gaining sentience and turning on humans, there is the more short-term concern about how the robots will be developed. As a company, Boston Dynamics has helped build robots for organizations ranging from Sony to the US Army.

Why Is Handle So Frightening?

When I watch Handle in the video, I imagine an advanced human. That may be the problem. I am anthropomorphizing an object that provides a great deal of utility, envisioning something that has a personality. Instead of viewing it as a "thing" that picks up objects (like a crane), I am envisioning a "person" that not only picks up objects but throws them. It is a thin line between help and hurt.

That may be too much to, ahem, handle.

[Video: Ben Goertzel on whether superhuman AGI will kill us]

"It's almost like a Rorschach type of thing really. I mean we fundamentally don't know what a superhuman AI is going to do and that's the truth of it, right. And then if you tend to be an optimist you will focus on the good possibilities. If you tend to be a worried person who's pessimistic you'll focus on the bad possibilities. If you tend to be a Hollywood movie maker you focus on scary possibilities maybe with a happy ending because that's what sells movies. We don't know what's going to happen." -Ben Goertzel, AI researcher


===

Want to connect with me? Reach out @TechEthicist and on Facebook