AI model uses human irrationality to predict our next moves
- In a recent paper, researchers at MIT and the University of Washington described a new way to model an agent’s behavior. They then used their method to predict humans’ goals or actions.
- The latent inference budget model quickly identifies patterns in how an agent makes “sub-optimal” decisions. It then uses this information to forecast behavior.
- The researchers envision this model being used in robot helpers and AI assistants to catch human mistakes before they occur.
Human beings behave irrationally — or as an artificially intelligent robot might say, “sub-optimally.” Data, the emotionless yet affable android depicted in Star Trek: The Next Generation, regularly struggled to comprehend humans’ flawed decision-making. If he had been programmed with a new model devised by researchers at MIT and the University of Washington, he might have had an easier go of it.
In a paper published last month, Athul Paul Jacob, a Ph.D. student in AI at MIT; Jacob Andreas, his academic advisor; and Abhishek Gupta, an assistant professor of computer science and engineering at the University of Washington, described a new way to model an agent’s behavior. They then used their method to predict humans’ goals or actions.
Jacob, Andreas, and Gupta created what they termed a “latent inference budget model.” Its underlying breakthrough lies in inferring a human or machine’s “computational constraints” from prior actions. These constraints result in sub-optimal choices. For example, a common human constraint on decision-making is time. When confronted with a difficult choice, we typically don’t spend hours (or longer) gaming out every possible outcome. Instead, we decide quickly, without taking the time to gather all the available information.
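The paper itself works with far richer tasks and models, but a toy version helps make the mechanism concrete. Below is a minimal sketch: a made-up 5×5 gridworld, a depth-limited planner, and a softmax choice rule, none of which come from the paper. Only the overall shape of the idea is preserved: observe an agent’s moves, infer the lookahead budget that best explains them, then use that budget to predict what it will do next.

```python
import math
from functools import lru_cache

# Illustrative sketch only (not the researchers' code): a toy agent plans with a
# limited lookahead depth in a small gridworld. We infer that depth (a stand-in
# for its "inference budget") from observed moves, then reuse it to predict the
# next move. The grid, candidate budgets, and softmax rule are assumptions.

GRID = 5                                      # 5x5 grid of cells
GOAL = (4, 4)                                 # the agent is trying to reach this cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # down, up, right, left

def step(state, action):
    """Deterministic transition: move one cell, clamped to the grid."""
    return (min(max(state[0] + action[0], 0), GRID - 1),
            min(max(state[1] + action[1], 0), GRID - 1))

@lru_cache(maxsize=None)
def lookahead_value(state, budget):
    """Depth-limited value: negative number of moves to the goal if it is visible
    within `budget` steps; beyond that, all states look alike to the agent, which
    is the kind of systematic (not random) sub-optimality being modeled."""
    if state == GOAL:
        return 0.0
    if budget == 0:
        return -1.0                           # everything out of sight looks the same
    return max(-1.0 + lookahead_value(step(state, a), budget - 1) for a in ACTIONS)

def action_probs(state, budget, temperature=0.3):
    """Softmax policy over the lookahead values of each action's successor state."""
    scores = [lookahead_value(step(state, a), budget - 1) / temperature for a in ACTIONS]
    top = max(scores)
    weights = [math.exp(s - top) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

def infer_budget(trajectory, candidates=range(1, 9)):
    """Pick the lookahead budget that best explains observed (state, action) pairs."""
    def log_likelihood(budget):
        return sum(math.log(action_probs(s, budget)[ACTIONS.index(a)])
                   for s, a in trajectory)
    return max(candidates, key=log_likelihood)

def predict_next(state, budget):
    """Forecast the most likely next action for an agent with the inferred budget."""
    probs = action_probs(state, budget)
    return ACTIONS[probs.index(max(probs))]

# Observe a few moves, recover the agent's budget, and predict its next move.
observed = [((0, 0), (1, 0)), ((1, 0), (1, 0)), ((2, 0), (0, 1))]
budget = infer_budget(observed)
print("inferred budget:", budget)
print("predicted next move from (2, 1):", predict_next((2, 1), budget))
```

The point the sketch tries to capture is that the budget is inferred from behavior rather than assumed in advance, so an agent that looks “irrational” is instead explained as one planning with a smaller budget.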
Leveraging irrational decision-making
Models currently exist that account for irrational decision-making, but these only predict that errors will occur randomly. In reality, humans and machines screw up in more systematic, predictable patterns. The latent inference budget model can quickly identify these patterns and then use them to forecast future behavior.
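To make the contrast concrete, the kind of fixed-noise baseline referred to here can be sketched in a few lines (the epsilon value and action names below are made up): the agent is assumed to take the best action most of the time and to slip with the same probability in every situation, so the predicted errors carry no situational pattern for a forecaster to exploit.

```python
# Illustrative fixed-noise baseline, in the spirit of epsilon-style error models:
# the agent picks the optimal action with probability 1 - epsilon and slips
# uniformly at random otherwise, regardless of the situation.
EPSILON = 0.1  # assumed, fixed error rate

def random_error_probs(optimal_action, all_actions, epsilon=EPSILON):
    """Same slip probability everywhere: the errors carry no situational pattern."""
    slip = epsilon / (len(all_actions) - 1)
    return {a: (1 - epsilon if a == optimal_action else slip) for a in all_actions}

print(random_error_probs("right", ["up", "down", "left", "right"]))
```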
Across three tests, the researchers found that their new model generally outperformed the older ones: it was as good as or better at predicting a computer algorithm’s route through a maze, a human chess player’s next move, and what a human speaker was trying to say from a brief utterance.
Jacob says that the research process made him realize how fundamental planning is to human behavior. People are not inherently rational or irrational; it’s just that some take more time to plan their actions while others take less.
“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,” he said in a statement.
Jacob envisions the model being used in futuristic robotic helpers or AI assistants.
“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have,” he said.
This is not scientists’ first attempt to develop tools that help AI predict human decision-making. Most researchers pursuing this goal envision positive futures. For example, we may someday see AI systems seamlessly coordinating their actions with ours, assisting with everyday tasks, boosting workplace productivity, and even being our drinking buddies.
But there are more dystopian possibilities, too. AI models that can reliably predict human behavior could also be used by bad actors to manipulate us. With enough data on how humans react to various stimuli, AI could be programmed to elicit responses that might not be in the targeted individuals’ best interest. Imagine if AI got really good at this. It would bring new urgency to the question of whether humans are agents with free will or simply automata reacting to external forces.