MIT breakthrough in deep learning could help reduce errors

Researchers make the case for "deep evidential regression."

Credit: sdeocoret / Adobe Stock
  • MIT researchers claim that deep learning neural networks need better uncertainty analysis to reduce errors.
  • "Deep evidential regression" estimates uncertainty after only a single pass through a network, greatly reducing time and memory requirements.
  • This could help mitigate problems in medical diagnoses, autonomous driving, and much more.

We've all seen the movies: a mad genius creates breakthrough artificial intelligence only to have it turn on them—and humanity. Midway through the film, the robots are taking over. By the end, humans have won, though barely. Like Godzilla, AI is never really gone. The monster that is our darkest shadow always lurks, ready to lurch back into action.

Fantasy aside, AI is a real problem. As Richard Clarke and R.P. Eddy write in their 2017 book, "Warnings," 47 percent of all U.S. jobs could be automated away within 20 years, a figure first estimated by Oxford researchers in 2013. A McKinsey study from that same year predicts AI will "depose 140 million full-time knowledge workers worldwide."

Large-scale unemployment is dangerous, especially in terms of governmental response. The current administration has largely ignored AI, while the incoming administration at least has a research platform; how that factors into job loss remains to be seen. Clarke and Eddy point to various responses to the Great Depression:

"In 1932, the U.S. responded with the New Deal. Western Europe responded with Fascism and the imminent rise of Nazism, Russia deepened into Stalinism and five-year plans."

There's also the question of efficacy. How do we really know when AI is working as planned? Statistics rely on two common confidence levels: 95 percent and 99 percent. The latter sounds reassuring, but do you want, for example, an AI medical intervention to have a 1 percent chance of failure?
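To make that 1 percent concrete, here is a back-of-the-envelope illustration (the numbers are hypothetical, not drawn from any study cited in the article): even a 99-percent-reliable system fails routinely at scale.

```python
# Back-of-the-envelope arithmetic: what a 1 percent failure rate means at scale.
failure_rate = 0.01

# Expected number of failures across one million diagnoses.
expected_failures = failure_rate * 1_000_000  # 10,000 failures

# Probability that at least one of 100 independent cases fails.
p_at_least_one = 1 - (1 - failure_rate) ** 100  # roughly 0.63

print(expected_failures, round(p_at_least_one, 3))
```

In other words, at a million diagnoses a 1 percent error rate means ten thousand mistakes, and even across just 100 cases the odds of at least one failure are nearly two in three.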

Alexander Amini, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a new paper on neural networks, says we shouldn't have to take that risk.

"One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong. We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently."

Deep learning neural networks are being used in autonomous driving and medical diagnoses, among many other fields. A 1 percent risk in an AI that filters social media feeds might not seem like much of a gamble, but when it comes to drug design or medical image analysis, such a risk could result in tragedy.

Credit: scharsfinn86 / Adobe Stock

On the road, 1 percent could be the difference between stopping at an intersection or rushing through just as another car runs a stop sign. Amini and colleagues wanted to produce a model that could better detect patterns in giant data sets. They named their solution "deep evidential regression."

Sorting through billions of parameters is no easy task. Amini's model uses uncertainty analysis: estimating how much error exists within a model's predictions and where the training data fall short. Uncertainty estimation in deep learning isn't novel, but existing approaches typically require sampling many networks, which takes a lot of time and memory. Deep evidential regression estimates uncertainty after only one run of the neural network. According to the team, it can assess uncertainty in both the input data and the final decision, so engineers can either address shortcomings in the network or recognize noise in the input data.
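The single-pass idea can be sketched as follows. In the team's paper ("Deep Evidential Regression," Amini et al., 2020), the network's final layer outputs the four parameters of a Normal-Inverse-Gamma distribution, from which the prediction, the noise inherent in the data (aleatoric uncertainty), and the model's own uncertainty (epistemic) all follow in closed form, with no repeated sampling. The sketch below shows only that final closed-form step; the function and variable names are illustrative, not taken from the authors' code.

```python
def evidential_outputs(gamma, nu, alpha, beta):
    """Closed-form prediction and uncertainties from the four
    Normal-Inverse-Gamma parameters a deep evidential regression
    network emits in a single forward pass (names illustrative):
      gamma       -- predicted mean
      nu          -- strength of evidence for the mean ("virtual observations")
      alpha, beta -- Inverse-Gamma shape/scale for the variance (alpha > 1)
    """
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: noise inherent in the data
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: the model's own uncertainty
    return prediction, aleatoric, epistemic

# More accumulated evidence (larger nu and alpha) shrinks epistemic
# uncertainty, which is how the network can flag inputs unlike its training data.
pred, alea, epis = evidential_outputs(gamma=2.0, nu=10.0, alpha=3.0, beta=1.0)
print(pred, alea, epis)  # 2.0 0.5 0.05
```

The key design point is that both uncertainty numbers fall out of one forward pass, rather than from running the network dozens of times as sampling-based methods do.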

In real-world terms, this is the difference between trusting an initial medical diagnosis and seeking a second opinion. By arming AI with a built-in detector for uncertainty, a new level of honesty with data is reached; in this model, at the level of pixels. During a test run, the neural network was given novel images and was able to detect changes imperceptible to the human eye. Amini believes this technology can also be used to pinpoint deepfakes, a serious problem we must begin to grapple with.

Any field that uses machine learning will have to factor in uncertainty awareness, be it medicine, cars, or otherwise. As Amini says,

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

We might not have to worry about alien robots turning on us (yet), but we should be concerned with that new feature we just downloaded into our electric car. There will be many other issues to face with the emergence of AI in our world—and workforce. The safer we can make the transition, the better.

--

Stay in touch with Derek on Twitter and Facebook. His new book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."

