MIT breakthrough in deep learning could help reduce errors

Researchers make the case for "deep evidential regression."

Credit: sdeocoret / Adobe Stock
  • MIT researchers claim that deep learning neural networks need better uncertainty analysis to reduce errors.
  • "Deep evidential regression" estimates uncertainty after only one pass through a network, greatly reducing time and memory.
  • This could help mitigate problems in medical diagnoses, autonomous driving, and much more.

We've all seen the movies: a mad genius creates breakthrough artificial intelligence only to have it turn on them—and humanity. Midway through the film, the robots are taking over. By the end, humans have won, though barely. Like Godzilla, AI is never really gone. The monster that is our darkest shadow always lurks, ready to lurch back into action.

Fantasy aside, AI is a real problem. As Richard Clarke and R.P. Eddy write in their 2017 book, "Warnings," 47 percent of all U.S. jobs could be put out of commission within 20 years, a figure Oxford researchers predicted in 2013. A McKinsey study from the same year predicts AI will "depose 140 million full-time knowledge workers worldwide."

Large-scale unemployment is dangerous, especially in terms of governmental action. The current administration has basically ignored AI, while the incoming administration does have a research platform. How that factors into job loss remains to be seen. Clarke and Eddy point to various responses to the Great Depression:

"In 1932, the U.S. responded with the New Deal. Western Europe responded with Fascism and the imminent rise of Nazism, Russia deepened into Stalinism and five-year plans."

There's also the question of efficacy. How do we really know when AI is working as planned? Statistics rely on two main confidence levels: 95 percent and 99 percent. While the latter seems to inspire confidence from large data sets, do you want, for example, an AI medical intervention to have a 1 percent chance of failure?

Alexander Amini, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a new paper on neural networks, says we shouldn't have to take that risk.

"One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong. We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently."

Deep learning neural networks are being used in autonomous driving and medical diagnoses, among many other fields. A 1 percent risk in an AI that filters social media feeds might not seem like much of a gamble, but when it comes to drug design or medical image analysis, such a risk could result in tragedy.

Credit: scharsfinn86 / Adobe Stock

On the road, 1 percent could be the difference between stopping at an intersection or rushing through just as another car runs a stop sign. Amini and colleagues wanted to produce a model that could better detect patterns in giant data sets. They named their solution "deep evidential regression."

Sorting through billions of parameters is no easy task. Amini's model utilizes uncertainty analysis: learning how much error exists within a model and supplying missing data. This approach in deep learning isn't novel, though it often takes a lot of time and memory. Deep evidential regression estimates uncertainty after only one run of the neural network. According to the team, they can assess uncertainty in both the input data and the final decision, and then either recalibrate the neural network or flag noise in the input data.
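The single-pass idea can be sketched in a few lines. In the published method, the network's final layer outputs four numbers that parameterize a Normal-Inverse-Gamma distribution, from which closed-form data ("aleatoric") and model ("epistemic") uncertainties follow without sampling or multiple forward passes. The sketch below is an illustration of those formulas in plain Python, not the authors' code; the function names and the toy raw outputs are hypothetical.

```python
import math

def softplus(x):
    # Smooth positivity constraint: log(1 + e^x)
    return math.log1p(math.exp(x))

def evidential_head(raw_gamma, raw_nu, raw_alpha, raw_beta):
    """Map a network's four raw outputs to Normal-Inverse-Gamma
    parameters (gamma, nu, alpha, beta). nu and beta must be
    positive; alpha must exceed 1 for the variance to be finite."""
    gamma = raw_gamma                  # predicted mean, unconstrained
    nu = softplus(raw_nu)
    alpha = softplus(raw_alpha) + 1.0  # shift so alpha > 1
    beta = softplus(raw_beta)
    return gamma, nu, alpha, beta

def uncertainties(gamma, nu, alpha, beta):
    """Closed-form estimates from one forward pass:
    aleatoric = E[sigma^2] = beta / (alpha - 1)   (noise in the data)
    epistemic = Var[mu]    = beta / (nu * (alpha - 1))  (model doubt)"""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic

# Hypothetical raw outputs for one input, as a network might emit them
gamma, nu, alpha, beta = evidential_head(2.0, 0.0, 0.0, 0.0)
prediction, aleatoric, epistemic = uncertainties(gamma, nu, alpha, beta)
```

A downstream system can then act on the split: high aleatoric uncertainty suggests noisy input data, while high epistemic uncertainty suggests the model is out of its depth and a second opinion is warranted.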

In real-world terms, this is the difference between trusting an initial medical diagnosis and seeking a second opinion. By arming AI with a built-in detection system for uncertainty, a new level of honesty with data is reached; in this model, at the level of pixels. During a test run, the neural network was given novel images and was able to detect changes imperceptible to the human eye. Amini believes this technology can also be used to pinpoint deepfakes, a serious problem we must begin to grapple with.

Any field that uses machine learning will have to factor in uncertainty awareness, be it medicine, cars, or otherwise. As Amini says,

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

We might not have to worry about alien robots turning on us (yet), but we should be concerned with that new feature we just downloaded into our electric car. There will be many other issues to face with the emergence of AI in our world and workforce. The safer we can make the transition, the better.

--

Stay in touch with Derek on Twitter and Facebook. His new book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
