
MIT breakthrough in deep learning could help reduce errors

Researchers make the case for “deep evidential regression.”


Key Takeaways
  • MIT researchers claim that deep learning neural networks need better uncertainty analysis to reduce errors.
  • “Deep evidential regression” estimates uncertainty after only one pass through a network, greatly reducing computation time and memory.
  • This could help mitigate problems in medical diagnoses, autonomous driving, and much more.

We’ve all seen the movies: a mad genius creates breakthrough artificial intelligence only to have it turn on them—and humanity. Midway through the film, the robots are taking over. By the end, humans have won, though barely. Like Godzilla, AI is never really gone. The monster that is our darkest shadow always lurks, ready to lurch back into action.

Fantasy aside, AI poses real problems. As Richard Clarke and R.P. Eddy relate in their 2017 book, “Warnings,” Oxford researchers predicted in 2013 that 47 percent of all U.S. jobs could be put out of commission within 20 years. A McKinsey study from that same year predicted AI would “depose 140 million full-time knowledge workers worldwide.”

Large-scale unemployment is dangerous, especially given how governments tend to respond to it. The current administration has basically ignored AI, while the incoming administration at least has a research platform; how either factors into job loss remains to be seen. Clarke and Eddy point to the range of responses to the Great Depression:

“In 1932, the U.S. responded with the New Deal. Western Europe responded with Fascism and the imminent rise of Nazism, Russia deepened into Stalinism and five-year plans.”

There’s also the question of efficacy. How do we really know when AI is working as planned? Statistics commonly leans on two confidence levels: 95 percent and 99 percent. The latter sounds reassuring for large data sets, but do you want, for example, an AI medical intervention to carry a 1 percent chance of failure? Across one million diagnoses, that would mean 10,000 errors.

Alexander Amini, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a new paper on neural networks, says we shouldn’t have to take that risk.

“One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong. We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”

Deep learning neural networks are being used in autonomous driving and medical diagnoses, among many other fields. A 1 percent risk in an AI that filters social media feeds might not seem like much of a gamble, but when it comes to drug design or medical image analysis, such a risk could result in tragedy.


On the road, 1 percent could be the difference between stopping at an intersection and rushing through it just as another car runs a stop sign. Amini and colleagues wanted a model that could not only detect patterns in giant data sets but also flag how confident it is in what it finds. They named their solution “deep evidential regression.”

Sorting through billions of parameters is no easy task. Amini’s model uses uncertainty analysis, estimating how much error lurks in a model’s predictions. The approach isn’t new to deep learning, but existing methods typically require running a network many times, at great cost in time and memory. Deep evidential regression estimates uncertainty after a single run of the neural network. According to the team, it can gauge uncertainty in both the input data and the final decision, which tells users whether the uncertainty stems from the model itself or from noise in the input data.
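To make that concrete, here is a minimal sketch in PyTorch of how an evidential output head can work. It is an illustration of the idea, not the authors’ released code: the class name, layer sizes, and input dimension are assumptions made for the example. The network outputs the four parameters of a Normal-Inverse-Gamma distribution, and both the prediction and two kinds of uncertainty (noise in the data, and the model’s own doubt) fall out of a single forward pass:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressor(nn.Module):
    """Tiny regression net whose head outputs Normal-Inverse-Gamma
    evidence parameters (gamma, nu, alpha, beta) instead of a point estimate.
    Illustrative sketch only; sizes and names are assumptions."""

    def __init__(self, in_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 4)  # one output per evidence parameter

    def forward(self, x: torch.Tensor):
        gamma, raw_nu, raw_alpha, raw_beta = self.head(self.body(x)).unbind(-1)
        nu = F.softplus(raw_nu)            # nu > 0
        alpha = F.softplus(raw_alpha) + 1  # alpha > 1 keeps variances finite
        beta = F.softplus(raw_beta)        # beta > 0
        return gamma, nu, alpha, beta

# A single pass yields the prediction plus both uncertainties:
model = EvidentialRegressor()
x = torch.randn(5, 8)
gamma, nu, alpha, beta = model(x)
prediction = gamma                       # the point estimate, E[mu]
aleatoric = beta / (alpha - 1)           # noise inherent in the data
epistemic = beta / (nu * (alpha - 1))    # the model's own uncertainty
```

The key design choice is that the uncertainties are read off the same outputs as the prediction itself, which is why no extra passes, sampling, or ensembles are needed.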

In real-world terms, this is the difference between trusting an initial medical diagnosis and seeking a second opinion. By arming AI with a built-in detector for uncertainty, the model becomes more honest about its own data, down to the pixel. During a test run, the neural network was given novel images and was able to flag changes imperceptible to the human eye. Amini believes this technology can also be used to pinpoint deepfakes, a serious problem we must begin to grapple with.
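The “second opinion” logic can then be as simple as thresholding the model’s own uncertainty. Continuing the sketch above, and assuming an arbitrary placeholder threshold that would in practice be tuned on validation data:

```python
# Continuing the sketch above: defer any case the model is unsure about.
UNCERTAINTY_THRESHOLD = 0.5  # placeholder; would be tuned on held-out data

needs_second_opinion = epistemic > UNCERTAINTY_THRESHOLD
for pred, unsure in zip(prediction.tolist(), needs_second_opinion.tolist()):
    if unsure:
        print(f"prediction {pred:.2f}: flag for human review")
    else:
        print(f"prediction {pred:.2f}: confident, act on it")
```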

Any field that uses machine learning will have to factor in uncertainty awareness, be it medicine, cars, or otherwise. As Amini says,

“Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.”

We might not have to worry about alien robots turning on us (yet), but we should be concerned with that new feature we just downloaded into our electric car. There will be many other issues to face with the emergence of AI in our world—and workforce. The safer we can make the transition, the better.

Stay in touch with Derek on Twitter and Facebook. His new book is “Hero’s Dose: The Case For Psychedelics in Ritual and Therapy.”
