
As a member of the White House Global Development Council, Dr. James Manyika makes it his business to keep a keen eye on economic trends with big international implications. Here he tackles automation and the rise of robot workers. Manyika and his team of researchers at the McKinsey Global Institute have found that, so far, we've proven far better at teaching robots to reason than at getting them to perceive. It's in the industries that rely on sensory perception that we're likely to see a slower rise of automated workers, and thus more opportunities for qualified members of the human labor force.

James Manyika: The hardest things to do with technology (not that they're impossible, but the hardest) have to do with what you might call motor, sensory, and perception challenges. We've made progress on those, by the way: humanoid robots have made huge progress, and machines that sense their environments have made huge progress. But we've not made anywhere near as much progress there as we've made on the more reasoning, thinking tasks, the knowledge work. And the reason the difference between those two is interesting is this: if I stay with what's called the physical-sensory-perception end of that spectrum, you still end up needing to build actual machines, which cost money, with arms and legs and whatever physical parts will move things around. It's also the place where there will be an abundance of human labor available. And so the combination of the costs, the slower progress, and the availability of human labor will probably mean that we'll see less automation happen there, because there's always going to be an alternative.

Whereas if you go to the other end of the spectrum, where it's mostly thinking work... the algorithm that does medical diagnosis or pattern recognition or image recognition is essentially just an algorithm. There are no moving parts, so to speak. So the cost of deploying that is very, very low. And, by the way, we've made more progress there in the last five years than we'd made in the previous 50, with machine learning and deep learning. And, by the way, that's where the labor and the skills are in short supply.

So you put together the technology and the labor economics that go with it, which is a shortage, and you're likely to see more of it actually being applied there. Take a sector like manufacturing, for example. I'll pick a period, 2000 to 2008; 2008 just because that was the start of the recession. In that period, much of the conversation we had about the 5.8 million jobs we lost in manufacturing was always a conversation about offshoring. Now, when we look back, various economists have different estimates of this, and we have our own, but for the most part roughly 20 percent of the jobs lost in that period were, in fact, due to offshoring. The rest was a combination of technology-driven automation and shortfalls in demand. At some level, when you've got an economy like the United States, where something like 60 percent of GDP growth comes from household and consumer consumption and spending, it's going to be important for people to be able to consume and spend to drive GDP. So if people aren't earning anything because they're not working, or whatever the case may be, what happens to that?

So I think there's a very complicated set of questions here, questions about transitions as we move toward a world in which there's more automation. It's a much longer conversation that we'll have to have over a much longer period of time. So I think this question of automation is actually a bigger deal, and I think we got distracted by the offshoring question. Of course that's real, but the bigger question is: what happens to work?

