What if A.I. is coming for jobs faster than we thought?
The pool of things that "AI Can't Do" appears to be steadily shrinking.
In May 2018, Tipsy the Robot bartender nearly brought the bars and restaurants of Las Vegas to a grinding halt. Tipsy is an automated mixologist who slings bespoke cocktails, obviating human hands for the same task. Servers still bring the beverages to patrons at tables, although the tipplers have placed their orders via tablets. Fears of a future awash with thousands of Tipsies were a top concern of the 38,000-member Culinary Union, which nearly went on strike in response. The echoes of the Luddite rebellions of the early Industrial Revolution were, of course, unmistakable.
The general consensus on whether robots will take jobs wholesale remains mixed but is trending towards resignation. The optimists believe that, as with the Industrial Revolution and the Agricultural Revolution, the technological improvements that will come from the dawning era of artificial intelligence and its offshoot in modern robotics will create more new jobs than they destroy.
But until now, those conversations have assumed that robots and AI will replace human jobs at some point in the future. The fears of the Sin City servers may, however, be a glimpse into the wisdom of the crowds, and, looking at some of the more recent developments in artificial intelligence and robotics, those fears may be more than justified. What if robots and AI have already started coming for jobs, not through simple automation but because these systems are rapidly attaining capabilities and skills once presumed to be defensibly human?
In fact, the pool of things that "AI Can't Do" appears to be steadily shrinking. These are still fairly narrow tasks. But what if we have an outsized view of human capabilities, and even modest improvements to current AI neural networks will start to chip away at supposedly "AI-Proof" skills?
Robots are getting really good at making your food
Consider the robot burger chef, an automated chicken-parts packing system, and a team of five neural networks that use artificial intelligence to play as a team against human competitors in the game of Dota. What do these things have in common? All are examples of systems that can do, right now, things which only a few years ago engineers believed were out of the reach of AI and would remain so for some time.
First, the robot burger chef and french-fry master: Flippy, from Miso Robotics. Flippy swivels to pick up a burger and gently lay cheese on top. It uses infrared sensing to determine the temperature of the chicken and hamburgers on the grill and removes them at the optimal time for flavor and texture. Miso's robots are now running grills at 60 burger restaurants around the world. Running a grill is a dirty, dull, and dangerous job, but also one with considerable complexities.
Flippy must interact with humans, deal with unpredictable situations, handle a variety of textures and semi-irregular shapes, and navigate three-dimensional spaces filled with easily damaged objects. This is not magical AI. It is not superhuman intelligence. But it is exactly the type of repetitive, dirty, dangerous work in slightly unpredictable situations that was presumed to be the sole province of humans. While a Flippy costs upwards of $100,000, it takes no vacations, requires no benefits, and can work a 24-hour shift without complaining. These jobs are also high-turnover and, right now, very hard to fill at U.S. fast-food joints.
The chicken-parts packing system comes from Osaro, a company focused on combining deep learning with industrial robotics. Recognizing and handling irregular shapes and placing them into boxes or packages has long been a bugaboo of industrial robots. With cooked chicken parts, the robot is also dealing with slippery objects that could easily be crushed. And here's the amazing part: Osaro's system did not require significant training by humans. It taught itself how to gently grasp irregularly shaped chicken pieces and put them into a pack. This type of reinforcement learning from scratch moves us closer to general learning in artificial intelligence: systems that pick up tasks such as simple assembly-line work or food preparation and packing on their own.
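To see what "learning from scratch" means in practice, here is a deliberately toy sketch of the underlying idea, tabular Q-learning by trial and error. This is not Osaro's actual system; the "safe grip pressure" range and reward function are invented for illustration. The agent is never told the right answer; it discovers which grip pressure picks up a fragile object without dropping or crushing it purely from reward feedback.

```python
import random

# Hypothetical setup: grip pressures 0-9; too low drops the object,
# too high crushes it. Pressures 4-6 are the (unknown to the agent) safe range.
BEST_LOW, BEST_HIGH = 4, 6

def reward(pressure):
    """+1 for a successful grasp, -1 for a drop or a crush."""
    return 1.0 if BEST_LOW <= pressure <= BEST_HIGH else -1.0

q = [0.0] * 10             # one learned value per candidate pressure
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

random.seed(0)
for episode in range(2000):
    if random.random() < epsilon:
        a = random.randrange(10)                # explore: try a random pressure
    else:
        a = max(range(10), key=lambda i: q[i])  # exploit: use best known pressure
    q[a] += alpha * (reward(a) - q[a])          # nudge estimate toward outcome

best = max(range(10), key=lambda i: q[i])
print(best)  # the learned pressure lands in the safe 4-6 range
```

The same explore-update-exploit loop, scaled up with deep neural networks in place of the table and camera images in place of a single number, is the essence of how such systems teach themselves to handle irregular objects without human-labeled examples.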
Dota and Go: How robots conquered complex games
Last, there is the champion Dota team, courtesy of the AI scientists at OpenAI, the foundation backed by Elon Musk and tasked with ensuring that mankind is not destroyed by a superintelligent being and that the benefits of AI are evenly distributed across nations and humanity. The team consisted of five different AI systems that learned to work together and handily defeated a team of five humans at this complex multiplayer battle game. The humans were not top-level expert players, but they were considered advanced.
Teamwork in semi-unstructured tasks was not something we dreamed AI systems were capable of. In fact, teamwork is considered a creative management skill that AI systems should struggle mightily to match. And yes, a winning Dota team is achieving teamwork only in a limited realm. Or is it really so limited? I suspect that if one breaks down the actual tasks required in a typical white-collar job, the environment and teamwork required may not be much more complex than playing Dota as a squad.
Then again, scientists also thought an AI would not beat a human Go master until well into the 2020s, but DeepMind's AlphaGo accomplished this in 2016. It did so with signs of "emergence": intelligence that arises from innate creativity rather than from copying and collating the moves recorded in millions of hours of human Go games.
In the now-famous "Move 37," AlphaGo unveiled a move that perplexed its human trainers, who had never seen anything like it. So disturbing was Move 37 that AlphaGo's opponent, Korean Go master Lee Sedol, felt compelled to leave the room to collect himself. Since that time, DeepMind has built AlphaGo systems that required no training dataset at all, instead teaching themselves to play Go through endless games against themselves. This newer version of AlphaGo easily defeated the original system.
None of this is to say that AI and robots are remotely close to general intelligence or to replacing human capabilities wholesale. Robot systems and AI remain brittle and unable to handle exceptions outside a certain range. Yet what if the subset of human skills and capabilities we believe to be defensible and hard to replicate is far smaller than we originally thought? What if human-centric skills such as "empathy" and "management" prove more amenable to artificial intelligence than we had originally envisioned? We may see this sooner than we realize.
Plus, humans trust AI more than other humans
Startups like Woebot are building mental health counseling chatbots that help depressed and sad patients better deal with their problems. While we may think that the human touch is a critical part of counseling, there is some evidence that humans may respond better to nonjudgmental, totally impartial input and conversation.
And modern machine vision and facial recognition can combine to cue bots to mimic empathetic behaviors. Take the case of Ellie, the avatar designed to help vets talk about their PTSD. According to the Wired article on the topic, "Ellie uses machine vision to interpret test subjects' verbal and facial cues and respond supportively. For example, Ellie not only knows how to perform sympathetic gestures, like nodding, smiling, or quietly uttering "mhm" when listening to a sensitive story—she knows when to perform them."
In fact, we already know that to a certain degree, humans prefer to ask their most sensitive questions to machines rather than other humans. This is precisely what former Google data scientist Seth Stephens-Davidowitz documented in his provocative book “Everybody Lies.” So there is a not insignificant possibility that robots and AI will actually be preferred by humans for the most intimate tasks and transactions, if their skills are good enough.
Robots don't even need to be better than humans to replace their labor
And the emphasis should be on "good enough": total superiority to human capabilities is not required for success, just as VHS beat the Betamax format despite inferior video quality. We have already seen this happen in key realms. Automated customer support systems have taken over a large share of the work from humans, even though they remain far less accurate and interactive than people. The robot barista at Cafe X in San Francisco is not going to be able to handle a request for an AeroPress, but that doesn't matter to most people, who only want a cappuccino or an Americano. By the same token, an AI that can manage a team of humans in a marketing department doesn't need to be the best manager ever. It only needs to be a good enough manager to hit the company's sales goals.
There remains a wide gap between beating humans at Dota and managing a marketing team. The OpenAI Dota team required the equivalent of 128,000 CPU cores and 256 GPUs to compete. And superintelligence, or high-powered general intelligence in AI, remains far, far away. But a marketing team operates in a similarly structured environment; the elements at play in building and executing marketing campaigns are probably no more complex than the game of Dota. Dota allows for roughly 1,000 possible actions every eighth of a second and, in terms of decision making, is an environment much more similar to the real world than chess or Go.
Given the rapid improvements in AI-driven systems, perhaps that gap to "good enough" is closing faster than we realize and could be a mere decade away. The economics will likely improve for AI and robots as well. The cost of computing, a major input to operating AI, continues to fall rapidly. The cost of each robot equivalent of a human service worker will also decline over time as unit economics improve with scale. The robots have not taken over Vegas yet, but perhaps the Culinary Union was right to be concerned that its members' jobs may come under attack sooner than anyone is willing to admit.