
Maybe Robots Aren’t Really That Ready for Prime Time

Are robots and AI really ready for us to begin depending on them?
The Knightscope K5 Autonomous Data Machine (Knightscope)

Meet the Knightscope-K5. It’s most definitely not the droid 16-month-old Harwin Cheng was looking for, since it ran him over. The robot is one of two Autonomous Data Machine models marketed by its manufacturer, Knightscope, as “force multipliers, data gatherers and smart eyes and ears on the ground helping protect your customers.” It was on duty as a robot security guard, practicing “anomaly detection,” at the Stanford Shopping Center in Palo Alto, CA, when it rammed Cheng from behind. The child fell forward face-first, and Machine Identification Number 13 just kept moving, rolling right over its victim. The toddler wasn’t seriously hurt, thankfully.


It’s not that 13 went out of its way to hit the child; the classic nonsensical kid movement toward and away from it is what likely confused the robot. Nonetheless, the incident raises an issue we need to consider.

Are robots really ready for prime time? This is not the first cautionary story we’ve heard lately. 

Joshua Brown was a Tesla Model S owner and an enthusiast for its Autopilot feature, posting YouTube videos showing off his car’s smarts.

In June, though, “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied,” according to Tesla’s statement on the crash that cost Brown his life. (The truck driver heard the Harry Potter movie Brown had been watching on his dashboard still playing from the wreckage afterward.)

Tesla’s statement notes that Autopilot is still being tested, explains why the crash shouldn’t have happened, and points out that this is the first Autopilot fatality in 130 million miles of driving. Since human drivers average a fatality every 90 million miles, that’s not a bad record.

But why are we simultaneously terrified of robots taking over and irresistibly attracted to them? What’s with our mad rush to put our security in their, um, hands? (How many Roomba-related deaths must there be? None, presumably.)

There are also significant ethical questions to be resolved. For example, should the AI that drives your car be programmed to save your life at all costs, or instructed to save the maximum number of people? If it’s the latter, couldn’t there be a situation in which the AI concludes that you’ve gotta be sacrificed to save others?
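To make that dilemma concrete, here’s a minimal sketch in Python, entirely hypothetical and not drawn from any real vehicle’s software, of how the two policies could diverge. The Outcome class, the numbers, and the two policy functions are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an emergency maneuver (a hypothetical, toy model)."""
    description: str
    occupant_fatalities: int
    other_fatalities: int

    @property
    def total_fatalities(self) -> int:
        return self.occupant_fatalities + self.other_fatalities

def occupant_first(options):
    """Policy A: protect the car's occupant at all costs."""
    return min(options, key=lambda o: o.occupant_fatalities)

def minimize_casualties(options):
    """Policy B: minimize total fatalities, whoever they are."""
    return min(options, key=lambda o: o.total_fatalities)

# A contrived emergency with only two maneuvers available.
options = [
    Outcome("swerve into the barrier", occupant_fatalities=1, other_fatalities=0),
    Outcome("stay in lane",            occupant_fatalities=0, other_fatalities=3),
]

print(occupant_first(options).description)       # -> stay in lane
print(minimize_casualties(options).description)  # -> swerve into the barrier

Same inputs, opposite choices. Someone has to pick the objective before the car ever leaves the lot, and for now that someone is a human.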

As always, tech races forward, intoxicated by what it can do rather than by what it should do (see: Jurassic Park). And given that we’re just at the dawn of robotic capabilities, maybe we need to hit PAUSE and think a little bit more about the power we’re already in the process of ceding to our future overlords. (Just kidding. I think.)

