New AI improves itself through Darwinian-style evolution
AutoML-Zero is a proof-of-concept project that suggests the future of machine learning may be machine-created algorithms.
- Automatic machine learning is a fast-developing branch of machine learning research.
- It seeks to vastly reduce the amount of human input and energy needed to apply machine learning to real-world problems.
- AutoML-Zero, developed by scientists at Google, serves as a simple proof-of-concept that shows how this kind of technology might someday be scaled up and applied to more complex problems.
Machine learning has fundamentally changed how we engage with technology. Today, it's able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.
But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.
So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?
To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.
"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," the paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."
Automatic machine learning (AutoML) is a fast-growing area of machine learning research. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to use it without having to hire a team of data scientists.
AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms "from scratch," as the paper states. Then, it selects the best ones, and mutates them through a process that's similar to Darwinian evolution.
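In the paper, each candidate algorithm is represented as a short program built from basic mathematical operations. As a rough illustration of the idea, here is a minimal sketch in Python: the instruction set, register layout, and function names below are hypothetical simplifications, not the paper's actual implementation, which uses a much larger library of scalar, vector, and matrix operations.

```python
import random

# Hypothetical miniature instruction set; illustrative only. AutoML-Zero's
# real search space includes dozens of scalar/vector/matrix operations.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "max": max,
}

def random_program(length=5, n_registers=4):
    """Generate a random straight-line program. Each instruction applies an
    op to two source registers and writes the result to a destination."""
    return [
        (random.choice(list(OPS)),
         random.randrange(n_registers),   # first operand register
         random.randrange(n_registers),   # second operand register
         random.randrange(n_registers))   # destination register
        for _ in range(length)
    ]

def run_program(program, x, n_registers=4):
    """Execute the program on a scalar input: register 0 holds the input,
    and the last register is read out as the output."""
    regs = [0.0] * n_registers
    regs[0] = x
    for op, i, j, dst in program:
        regs[dst] = OPS[op](regs[i], regs[j])
    return regs[-1]
```

Generating a program "from scratch" is then just a call to `random_program()`, and a mutation can be as simple as replacing one randomly chosen instruction with a fresh random one.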
AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the "parent."
"This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed," the paper states.
The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
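The copy-mutate-and-age cycle described above can be sketched as a generic loop. This is a toy illustration of the selection scheme the paper describes, not the authors' code; the callables passed in stand in for a real program representation, and the toy usage below simply evolves a number toward a target value as a stand-in for an algorithm's accuracy.

```python
import random
from collections import deque

def evolve(random_individual, mutate, fitness,
           population_size=100, tournament_size=10, cycles=1000):
    """Evolutionary loop of the kind the paper describes: the best of a
    random sample becomes the parent, a mutated copy of it joins the
    population, and the oldest individual is removed."""
    population = deque(random_individual() for _ in range(population_size))
    for _ in range(cycles):
        sample = random.sample(list(population), tournament_size)
        parent = max(sample, key=fitness)   # top performer is the parent
        population.append(mutate(parent))   # child algorithm joins the population
        population.popleft()                # oldest algorithm is removed
    return max(population, key=fitness)

# Toy usage: evolve a number toward 42.0 (a stand-in for task accuracy).
best = evolve(
    random_individual=lambda: random.uniform(-100, 100),
    mutate=lambda x: x + random.gauss(0, 1),
    fitness=lambda x: -abs(x - 42.0),
    population_size=50, tournament_size=5, cycles=2000,
)
print(best)  # typically lands very close to 42.0
```

Removing the oldest individual rather than the worst one ("aging") keeps the search from getting stuck on early lucky candidates, which is why these loops can be left running for many cycles.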
"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.
"Fun AutoML-Zero experiments: Evolutionary search discovers fundamental ML algorithms from scratch, e.g., small neur…" — Quoc Le on Twitter
If computer scientists can scale up this kind of automated machine learning to complete more complex tasks, it could usher in a new era of machine learning in which systems are designed by machines instead of humans. That would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.
Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.
"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.