
New AI improves itself through Darwinian-style evolution

AutoML-Zero is a proof-of-concept project that suggests the future of machine learning may be machine-created algorithms.


Key Takeaways
  • Automated machine learning is a fast-developing branch of deep learning.
  • It seeks to vastly reduce the amount of human input and energy needed to apply machine learning to real-world problems.
  • AutoML-Zero, developed by scientists at Google, serves as a simple proof-of-concept that shows how this kind of technology might someday be scaled up and applied to more complex problems.

Machine learning has fundamentally changed how we engage with technology. Today, it’s able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.

But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.

So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?

To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.

“Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML,” the paper states. “Innovation is also limited by having fewer options: you cannot discover what you cannot search for.”

Automated machine learning (AutoML) is a fast-growing area of deep learning. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to utilize it without having to hire a team of data scientists.

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms “from scratch,” as the paper states. Then, it selects the best ones, and mutates them through a process that’s similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which is then evaluated on a task, such as recognizing an image. Their performance is compared against that of hand-designed algorithms, and AutoML-Zero selects the top-performing candidate to be the “parent.”

“This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed,” the paper states.

The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
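The loop the article describes — start with random candidates, pick the best as the parent, add a mutated child, drop the oldest member — can be sketched in a few lines of Python. Everything below is a toy illustration, not code from the paper: the "algorithm" is just a list of numbers, and the fitness function is a stand-in for real task performance.

```python
import random

POP_SIZE = 100  # the article's example starts with 100 random candidates

def random_algorithm():
    # Toy stand-in: an "algorithm" is just a list of four parameters.
    return [random.uniform(-1, 1) for _ in range(4)]

def fitness(algo):
    # Toy evaluation: reward parameters close to a hidden target
    # (in AutoML-Zero, this would be performance on a real ML task).
    target = [0.5, -0.2, 0.1, 0.9]
    return -sum((a - t) ** 2 for a, t in zip(algo, target))

def mutate(parent):
    # Copy the parent and randomly perturb one parameter.
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)
    return child

def evolve(cycles=1000):
    population = [random_algorithm() for _ in range(POP_SIZE)]
    for _ in range(cycles):
        parent = max(population, key=fitness)  # select the top performer
        population.append(mutate(parent))      # add a mutated child
        population.pop(0)                      # remove the oldest member
    return max(population, key=fitness)

best = evolve()
```

Because children are appended and the oldest candidate is removed each cycle, the population size stays constant while its average fitness drifts upward over many cycles — the same aging-based selection dynamic the paper describes.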

“The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms,” Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

“Fun AutoML-Zero experiments: Evolutionary search discovers fundamental ML algorithms from scratch, e.g., small neural nets with backprop. Can evolution be the ‘Master Algorithm’? Paper: https://arxiv.org/abs/2003.03384 Code: https://git.io/JvKrZ”

If computer scientists can scale up this kind of automated machine learning to handle more complex tasks, it could usher in a new era of machine learning in which systems are designed by machines instead of humans. This would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.

Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.

“Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent… multiplicative interactions. These results are promising, but there is still much work to be done,” the scientists’ preprint paper noted.

