A.I. economics: How cheaper predictions will change the world

Predicting the future is about to become a whole lot cheaper. Here's how economists look at artificial intelligence.

Ajay Agrawal: I think economics has something to contribute to our understanding of artificial intelligence because it gives us a different view. So, for example, if you ask a technologist to tell you about the rise of semiconductors, they will talk to you about the increasing number of transistors on a chip and all the science underlying the ability to keep doubling the number of transistors every 18 months or so. But if you ask an economist to describe the rise of semiconductors, they won’t talk about transistors on a chip; instead they’ll talk about a drop in the cost of arithmetic. They’ll say what’s so powerful about semiconductors is that they substantially reduced the cost of arithmetic.

It’s the same with A.I. Everybody is fascinated with all the magical things A.I. can do, and what economists bring to the conversation is that they are able to look at a fascinating technology like artificial intelligence, strip all the fun and wizardry out of it, and reduce A.I. down to a single question, which is: “What does this technology reduce the cost of?” And in the case of A.I., the reason economists think it’s such a foundational technology, and why it stands in a different category from virtually every other domain of technology we see today, is that the thing for which it drops the cost is such a foundational input, something we use for so many things. In the case of A.I., that’s prediction.

And so why that’s useful is that as soon as we think of A.I. as a drop in the cost of prediction, first of all, it takes away all the confusion of: what is this current renaissance in A.I. actually doing? Is it Westworld? Is it C-3PO? Is it HAL? What is it? And really what it is, it’s simply a drop in the cost of prediction. And we define prediction as taking information you have to generate information you don’t have. So it’s not just the traditional form of forecasting, like taking last month’s sales and predicting next month’s sales. It’s also, for example: if we have a medical image and we’re looking at a tumor, the data we have is the image and what we don’t have is the classification of the tumor as benign or malignant; when the A.I. makes that classification, that’s a form of prediction. And when something becomes cheaper (most people remember from Economics 101 that there’s a downward-sloping demand curve), that means we use more of it. And so in the case of prediction, as it becomes cheaper we’ll use more and more of it. That will take two forms: one is that we’ll use more of it for things we traditionally use prediction for, like demand forecasting and supply chain management. But where I think it’s really interesting is that when it becomes cheap, we’ll start using it for things that weren’t traditionally prediction problems; we’ll start converting problems into prediction problems to take advantage of the new, cheap prediction.
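Agrawal's definition of prediction, taking information you have to generate information you don't have, can be made concrete with a toy classifier. The sketch below uses hypothetical feature names and numbers (not a real diagnostic model): it labels a new tumor image by finding the most similar previously labeled example.

```python
import math

# Hypothetical training data: (size_mm, irregularity_score) -> known label.
# This is the information we HAVE: past measurements and their labels.
labeled_tumors = [
    ((4.0, 0.1), "benign"),
    ((5.5, 0.2), "benign"),
    ((22.0, 0.8), "malignant"),
    ((30.0, 0.9), "malignant"),
]

def predict_label(features):
    """Generate the information we DON'T have: a label for a new image,
    by copying the label of the nearest labeled example."""
    nearest = min(labeled_tumors, key=lambda t: math.dist(t[0], features))
    return nearest[1]

print(predict_label((25.0, 0.7)))  # closest to the malignant examples
```

The point is not the specific algorithm (nearest-neighbor here, purely for brevity); it is that classification is prediction: known data in, unknown label out.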

So one example is driving. We’ve had autonomous cars for a long time, or autonomous vehicles, but we’ve always used them inside a controlled environment like a factory or a warehouse. And we did that because we had to control the number of—think of it as the if/then statement. So we have a robot, the engineer would program the robot to move around the factory or the warehouse and then they would give it a bit of intelligence. They would put a camera on the front of the robot and they would give it some logic, saying okay if something walks in front then stop. If the shelf is empty then move to the next shelf. If/then. If/then.
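The if/then logic Agrawal describes can be sketched as a short rule table (hypothetical sensor inputs and rules; a real controller would have many more):

```python
def warehouse_robot_step(obstacle_ahead: bool, shelf_empty: bool) -> str:
    """Hand-written if/then rules. This works only in a controlled
    environment, because the programmer must anticipate every situation."""
    if obstacle_ahead:
        return "stop"
    if shelf_empty:
        return "move_to_next_shelf"
    return "continue"

print(warehouse_robot_step(obstacle_ahead=True, shelf_empty=False))  # stop
```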

But you could never put that vehicle on a city street because there is an infinite number of ifs. There are so many things that could happen in an uncontrolled environment. That’s why as recently as six years ago experts in the field were saying we’ll never have a driverless car on a city street in our lifetime—until it was converted into a prediction problem. And the people who are familiar with this new, cheap form of prediction said why don’t we solve this problem in a different way and instead we’ll treat it as a single prediction problem? And the prediction is: What would a good human driver do?

And so effectively the way you can think about it is that we put humans in a car and we told them to drive. Humans have data coming in through the cameras on our faces and the microphones on the sides of our heads; the data comes in, we process it with our monkey brains, and then we take action. And our actions are very limited: we can turn left; we can turn right; we can brake; we can accelerate. Now think about an A.I. sitting in the car along with the driver. It doesn’t have its own input sensors, eyes and ears, so we have to give it some: we put radar, cameras and LiDAR around the car. And then the A.I. has this incoming data, and every second, as the data comes in, it tries to predict: in the next second, what will the human driver do? In the beginning it’s a terrible predictor; it makes lots of mistakes. And from a statistical point of view, we can say it has big confidence intervals; it’s not very confident. But it learns as it goes, and every time it makes a mistake it updates its model. It thinks the driver is about to turn left, but the driver doesn’t turn left, and it updates its model. It thinks the driver is going to brake, the driver doesn’t brake, it updates its model. And as it goes, the predictions get better and better and better, and the confidence intervals get smaller and smaller and smaller.
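This learn-from-the-driver loop can be sketched as an online linear classifier that, on each tick, predicts whether the human will brake and adjusts its weights whenever it guesses wrong. The data here is a hypothetical, deterministic toy stream (one made-up distance sensor); real systems use far richer sensors and models, but the shape of the loop is the same: predict, compare with what the human actually did, update.

```python
# Toy stream: obstacle distance (metres) -> did the human brake? (1 = yes)
stream = [(2, 1), (5, 1), (8, 1), (25, 0), (30, 0), (35, 0)]

w = [0.0, 0.0]          # weights for (bias, normalized distance)
lr = 0.1                # learning rate
mistakes_per_epoch = []

for epoch in range(50):  # replay the stream many times, like miles of driving
    mistakes = 0
    for distance, braked in stream:
        x = (1.0, distance / 40.0)
        score = w[0] * x[0] + w[1] * x[1]
        prediction = 1 if score >= 0 else 0
        if prediction != braked:          # wrong guess: update the model
            mistakes += 1
            w[0] += lr * (braked - prediction) * x[0]
            w[1] += lr * (braked - prediction) * x[1]
    mistakes_per_epoch.append(mistakes)

print(mistakes_per_epoch[0], mistakes_per_epoch[-1])  # errors shrink to zero
```

Early epochs contain mistakes; by the final epochs the model predicts the driver perfectly on this toy data, mirroring the shrinking confidence intervals Agrawal describes.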

So we turned driving into a prediction problem. We’ve turned translation into a prediction problem. That used to be a rules-based problem where we had linguists with many rules and many exceptions and that’s how we did translation. Now we’ve turned it into a prediction problem.
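A caricature of that shift: instead of hand-written grammar rules, a statistical translator simply predicts the most frequent translation observed in example data. The word counts below are hypothetical, and real systems model whole sentences rather than single words, but the rules-to-prediction move is the same.

```python
from collections import Counter

# Observed (source word -> target word) counts from example translations.
observations = {
    "gato": Counter({"cat": 98, "feline": 2}),
    "perro": Counter({"dog": 95, "hound": 5}),
}

def translate_word(word: str) -> str:
    """Predict the most likely translation from past data; no grammar rules."""
    return observations[word].most_common(1)[0][0]

print(translate_word("gato"))  # cat
```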

I think probably the most common surprise people have is this: we have a lot of HR people who come into our lab and say, 'Hey, we’re here to learn about A.I. because we need to know what kinds of people to hire for our company, you know, for our manufacturing or our sales or this or that division. Of course, it won’t affect my division, because I’m in HR and we’re a very people-oriented part of the business, so A.I. is not going to affect us.' But of course, people are breaking HR down into a series of prediction problems.

So for example, the first thing HR people do is recruit, and recruiting is essentially a prediction problem: they take in a set of input data, like résumés and interview transcripts, and then they try to predict, from a set of applicants, who will be the best for this job. And once they hire people, the next part is promotion. Promotion has also been converted into a prediction problem: you have a set of people working in the company and you have to predict who will be the best at the next-level-up job. And then the next role is retention. They have 10,000 people working in the company and they have to predict which of those people are most likely to leave, particularly their stars, and also predict: what can we do that would most likely increase the chance of them staying? And so what I would call a black art right now in A.I. is converting existing problems into prediction problems so that A.I.s can handle them.
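The retention problem can be sketched as estimating, from historical records, the probability that an employee with a given profile leaves. The fields and numbers below are hypothetical; real attrition models use many more features, but the conversion into a prediction problem looks like this:

```python
from collections import defaultdict

# Hypothetical historical records: (tenure bucket, left within a year?)
history = [
    ("under_2_years", True), ("under_2_years", True), ("under_2_years", False),
    ("over_2_years", False), ("over_2_years", False), ("over_2_years", True),
]

counts = defaultdict(lambda: [0, 0])   # bucket -> [number left, total]
for bucket, left in history:
    counts[bucket][0] += int(left)
    counts[bucket][1] += 1

def predict_leave_probability(bucket: str) -> float:
    """Predict attrition risk for a profile from past frequencies."""
    left, total = counts[bucket]
    return left / total

print(predict_leave_probability("under_2_years"))  # about 0.67
```

Information HR has (who left in the past, and their profiles) is used to generate information HR doesn't have (who is likely to leave next), which is exactly Agrawal's definition of prediction.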

When most of us look at A.I. we see magical capabilities. When economists look at A.I. they see something very different. Economist Ajay Agrawal explains: "What economists bring to the conversation is that they are able to look at a fascinating technology like artificial intelligence and strip all the fun and wizardry out of it and reduce A.I. down to a single question, which is, 'What does this technology reduce the cost of?'" Never has one person taken such delight in stripping the fun from something awesome. But what does A.I. lower the cost of? Predictions, says Agrawal. Intelligent machines can take information we have and use it to generate information we need. Uncertainty is the single biggest hurdle in good decision making, and A.I. can drastically increase certainty in many areas, like automated vehicles, language translation, human resources and medical diagnostics. As A.I. becomes a cheaper technology, its use will become even more widespread. "Where I think it’s really interesting is that when it becomes cheap, we’ll start using it for things that weren’t traditionally prediction problems but we’ll start converting problems into prediction problems to take advantage of the new, cheap prediction." Ajay Agrawal is the co-author of Prediction Machines: The Simple Economics of Artificial Intelligence.
