New AI improves itself through Darwinian-style evolution

AutoML-Zero is a proof-of-concept project that suggests the future of machine learning may be machine-created algorithms.

  • Automatic machine learning is a fast-developing branch of deep learning.
  • It seeks to vastly reduce the amount of human input and energy needed to apply machine learning to real-world problems.
  • AutoML-Zero, developed by scientists at Google, serves as a simple proof-of-concept that shows how this kind of technology might someday be scaled up and applied to more complex problems.

Machine learning has fundamentally changed how we engage with technology. Today, it's able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.

But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.

So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?

To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," the paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

Automatic machine learning (AutoML) is a fast-growing area of deep learning. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to utilize it without having to hire a team of data scientists.

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms "from scratch," as the paper states. It then selects the best ones and mutates them through a process similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the "parent."
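To make that workflow concrete, here is a minimal, purely illustrative Python sketch. The names (random_algorithm, run, score), the tiny instruction set, and the toy regression task are all assumptions made for this example; they are not the paper's actual instruction set or code.

```python
import random

# Each candidate "algorithm" is a short list of (operation, operand) steps
# applied to a running value. This tiny instruction set and the toy task
# below are illustrative stand-ins, not the paper's actual setup.
OPS = ["add", "sub", "mul"]

def random_algorithm(length=5):
    """Generate a candidate from scratch as a random sequence of basic math ops."""
    return [(random.choice(OPS), random.uniform(-1, 1)) for _ in range(length)]

def run(algorithm, x):
    """Apply each step of the candidate algorithm to an input value."""
    value = x
    for op, operand in algorithm:
        if op == "add":
            value += operand
        elif op == "sub":
            value -= operand
        else:
            value *= operand
    return value

def score(algorithm, examples):
    """Higher is better: negative squared error on a toy regression task."""
    return -sum((run(algorithm, x) - y) ** 2 for x, y in examples)

# Toy task standing in for "performing a task": approximate y = 0.5x + 1.
examples = [(x, 0.5 * x + 1) for x in range(-5, 6)]

# Randomly generate 100 candidates, then pick the top performer as the parent.
population = [random_algorithm() for _ in range(100)]
parent = max(population, key=lambda a: score(a, examples))
```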

"This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed," the paper states.

The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
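Continuing the illustrative sketch above (and reusing its random_algorithm, score, examples, and population names, which are this example's assumptions rather than anything from the paper), the copy, mutate, and age-out cycle described here might look roughly like this:

```python
import random

def mutate(algorithm):
    """Copy the parent and randomly perturb one step (a deliberately simple mutation)."""
    child = list(algorithm)
    i = random.randrange(len(child))
    op, _ = child[i]
    child[i] = (op, random.uniform(-1, 1))
    return child

# Each cycle: the best candidate becomes the parent, its mutated child joins
# the population, and the oldest algorithm is removed. Over many cycles the
# population's best score tends to improve.
for cycle in range(5000):
    parent = max(population, key=lambda a: score(a, examples))
    child = mutate(parent)
    population.append(child)  # add the child to the population...
    population.pop(0)         # ...and age out the oldest algorithm

best = max(population, key=lambda a: score(a, examples))
print("Best score after evolution:", score(best, examples))
```

In the paper's framing this is an evolutionary search run at much larger scale and in parallel across many populations; the sketch only shows the basic select-copy-mutate-age-out rhythm the article describes.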

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

If computer scientists can scale up this kind of automated machine learning to complete more complex tasks, it could usher in a new era of machine learning in which systems are designed by machines instead of humans. This would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.

Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.
