Controversial physics theory says reality around us behaves like a computer neural network.
- Physicist proposes that the universe behaves like an artificial neural network.
- The scientist's new paper seeks to reconcile classical physics and quantum mechanics.
- The theory claims that natural selection produces both atoms and "observers".
Does the reality around us work like a neural network, a Matrix-like computer system that operates much like a human brain? A new physics paper argues that looking at the universe that way can provide the elusive "theory of everything".
This controversial proposal is the brainchild of the University of Minnesota Duluth physics professor Vitaly Vanchurin. In an interview with Futurism, Vanchurin conceded that "the idea is definitely crazy, but if it is crazy enough to be true?"
The scientist developed the theory while exploring the workings of machine learning through the lens of statistical mechanics. He found that, in some instances, the mechanisms involved in machine learning resembled the dynamics of quantum mechanics.
A computer neural network works via nodes, which mimic biological neurons by processing and passing on signals. As the network learns new information, it changes, giving certain nodes and connections more weight, which allows it to link bits of information in such a way that the next time around it will know, for example, what the key traits of a "zebra" are.
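For readers who want to see that node-and-weight picture concretely, here is a minimal sketch in Python. It assumes nothing beyond standard NumPy; the two made-up input features, the toy "zebra" label, and all of the numbers are illustrative stand-ins, not anything taken from Vanchurin's work.

```python
# A minimal sketch of the node-and-weight picture described above: "nodes" pass
# weighted signals forward, and learning nudges the connection strengths so that
# informative connections get more priority. Everything here is a toy illustration.
import numpy as np

rng = np.random.default_rng(0)

# Two invented input features (e.g. "has stripes", "has four legs") and a label ("is a zebra").
X = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])

weights = rng.normal(size=2)   # connection strengths between input nodes and the output node
bias = 0.0
lr = 0.5                       # learning rate: how strongly each pass adjusts the weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    pred = sigmoid(X @ weights + bias)   # signals flow through the weighted connections
    grad = (pred - y) / len(y)           # averaged error signal for each example
    weights -= lr * (X.T @ grad)         # strengthen or weaken connections accordingly
    bias -= lr * grad.sum()

print(np.round(sigmoid(X @ weights + bias), 2))  # the "stripes + four legs" case scores highest
```

Real networks stack many such layers of weighted connections, but the core mechanism – nudging connection strengths until the right features carry the signal – is the same.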
"We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works," writes Vanchurin in the paper. "With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong."
How do you prove his theory wrong? Vanchurin proposes a way: all you have to do is "find a phenomenon which cannot be modeled with a neural network." That, of course, isn't so easy to pull off, as Vanchurin himself points out. We don't fully understand how neural networks and machine learning work, and we would need to grasp those processes first.
Vanchurin thinks his idea can accomplish another long-standing goal of modern physics – reconciling classical physics, which describes how the universe works on a large scale, with quantum mechanics, the study of existence at the atomic and subatomic level. The physicist thinks that if you view the universe as working essentially as a neural network, its behavior under certain conditions can be explained both by the quirky equations of quantum mechanics and by the laws of classical physics, like the theory of general relativity devised by Albert Einstein.
"The learning dynamics of a neural network can indeed exhibit approximate behaviors described by both quantum mechanics and general relativity," writes Vanchurin in his study.
Diving deeper into his theory, Vanchurin thinks it supports such apparent mechanisms of our world as natural selection. He suggests that in a neural network, particles and atoms, and even us, the "observers," would emerge from a natural-selection-like process. On the microscopic level of the network, some structures would become more stable while others would be less so. The stable ones would survive the evolutionary process, while the less stable ones would not.
"On the smallest scales I expect that the natural selection should produce some very low complexity structures such as chains of neurons, but on larger scales the structures would be more complicated," he shared with Futurism.
He sees little reason why this kind of process would work only on the small scale, writing in the paper:
"If correct, then what we now call atoms and particles might actually be the outcomes of a long evolution starting from some very low complexity structures and what we now call macroscopic observers and biological cells might be the outcome of an even longer evolution."
While he posits the neural network explanation, Vanchurin doesn't necessarily mean we all live in a computer simulation, as proposed by philosopher Nick Bostrom, adding the caveat that even if we did, "we might never know the difference."
Vanchurin's idea has so far been received with skepticism by other physicists, but he is undeterred. You can check out his paper for yourself on arXiv.
Video: Vanchurin on "Hidden Phenomena" – Vitaly Vanchurin speaking at the 6th International FQXi Conference, "Mind Matters: Intelligence and Agency in the Physical World."
Researchers discover how to use light instead of electricity to advance artificial intelligence.
A breakthrough in artificial intelligence promises to take machine learning to the next level. Researchers figured out how to use light rather than electricity to carry out computations.
This new method, devised by researchers from George Washington University, could lead to substantial advancements in the speed and efficiency of the neural networks used in machine learning. The approach also allows the AI to teach itself independently, without supervision. Once a neural network is trained, it uses inference to classify objects and patterns, finding signatures in the data.
The main advantage of this method is that it addresses two bottlenecks: crunching large amounts of data normally requires a tremendous amount of processing power, and transmission rates between the processor and the memory are limited.
The scientists found a way to get around such issues by utilizing photons in neural network tensor processing units (TPUs), leading to efficient and powerful AI. The photonic TPU they built outperformed an electrical TPU by two to three orders of magnitude.
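For context, the computation a tensor processing unit accelerates during inference is essentially batched matrix multiplication. The sketch below shows that workload in ordinary NumPy; it only illustrates what gets accelerated, not the George Washington University team's photonic hardware or code, and the layer sizes are made up.

```python
# Illustrative only: the multiply-accumulate workload that a TPU (electronic or
# photonic) speeds up during inference. Plain software sketch with invented sizes.
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# A trained network's inference pass is dominated by dense matrix products like these;
# a photonic TPU performs the equivalent products optically.
W1 = rng.normal(size=(784, 256))   # weights of a hypothetical first layer
W2 = rng.normal(size=(256, 10))    # weights of a hypothetical output layer

def infer(batch):
    """Forward pass: two tensor contractions plus cheap elementwise work."""
    return relu(batch @ W1) @ W2

batch = rng.normal(size=(32, 784))   # a made-up batch of 32 inputs
scores = infer(batch)
print(scores.shape)                  # (32, 10) class scores per input
```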
Mario Miscuglio, the paper's co-author from GWU's department of electrical and computer engineering, shared their conclusions:
"We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput," he explained. "When opportunely trained, [the platforms] can be used for performing interference at the speed of light."
What good is all this speed? Possible applications of the technology include super-fast processors for 5G and 6G networks and huge data centers, where "photonic specialised processors can save a tremendous amount of energy, improve response time and reduce data centre traffic," shared Dr. Miscuglio.
Check out the new paper by him and co-author Volker Sorger in Applied Physics Reviews.
MIT and Google researchers use deep learning to decipher ancient languages.
- Researchers from MIT and Google Brain discover how to use deep learning to decipher ancient languages.
- The technique can be used to read languages that died long ago.
- The method builds on the ability of machines to quickly complete monotonous tasks.
There are about 6,500 to 7,000 languages currently spoken in the world. But that's less than a quarter of all the languages people have spoken over the course of human history – around 31,000 in total, according to some linguistic estimates. Every time a language is lost, so goes that way of thinking and of relating to the world; the relationships and the poetry of life uniquely described through that language are lost, too. But what if you could figure out how to read the dead languages? Researchers from MIT and Google Brain created an AI-based system that can accomplish just that.
While languages change, many of the symbols, and the ways words and characters are distributed, stay relatively constant over time. Because of that, you could attempt to decode a long-lost language if you understood its relationship to a known progenitor language. This insight is what allowed the team, which included Jiaming Luo and Regina Barzilay from MIT and Yuan Cao from Google's AI lab, to use machine learning to decipher Linear B, a script used to write an early form of Greek around 1400 BC, and Ugaritic, an early Semitic language related to Hebrew, written in cuneiform, that's also over 3,000 years old.
Linear B was previously cracked by a human – in 1953, it was deciphered by Michael Ventris. But this was the first time the language was figured out by a machine.
The researchers' approach focused on four key properties related to the context and alignment of the characters to be deciphered – distributional similarity, monotonic character mapping, structural sparsity, and significant cognate overlap.
They trained the AI network to look for these traits, achieving a correct translation of 67.3 percent of Linear B cognates (words of common origin) into their Greek equivalents.
What AI can potentially do better in such tasks, according to MIT Technology Review, is take a brute-force approach that would be too exhausting for humans: it can attempt to translate the symbols of an unknown alphabet by quickly testing them against symbols from one language after another, running them through everything that is already known.
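As a very rough illustration of that brute-force idea – and emphatically not the MIT/Google minimum-cost-flow model – the sketch below tries every one-to-one mapping from a tiny invented alphabet onto the characters of a "known" language and keeps whichever mapping turns the most words into known vocabulary. All the alphabets, words, and scoring are made up.

```python
# Toy brute-force decipherment: score candidate symbol-to-symbol mappings by how
# many decoded words land on known vocabulary. Invented data, not the real model.
from itertools import permutations

unknown_words = ["ba", "abc", "cab"]          # "texts" in an invented 3-symbol script
known_vocab = {"on", "no", "not", "ton"}      # vocabulary of a hypothetical known relative
unknown_alphabet = "abc"
known_alphabet = "not"

best_score, best_map = -1, None
for perm in permutations(known_alphabet, len(unknown_alphabet)):
    mapping = dict(zip(unknown_alphabet, perm))          # one candidate character mapping
    decoded = ["".join(mapping[ch] for ch in w) for w in unknown_words]
    score = sum(w in known_vocab for w in decoded)       # crude "cognate overlap" score
    if score > best_score:
        best_score, best_map = score, mapping

print(best_map, best_score)   # the mapping that turns the most words into known ones
```

The real system scores candidate alignments using the four statistical properties listed above rather than exact dictionary hits, but the search-and-score structure captures the spirit of the brute-force advantage described here.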
Next for the scientists? Perhaps Linear A – the writing system of Minoan Crete that no one has succeeded in deciphering so far.
You can check out their paper "Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B" here.
Noam Chomsky on Language’s Great Mysteries
What is the danger in creating something smarter than you? You can't control it, and pretty soon it could control you.
Some of the most intelligent people at the most highly funded companies in the world can't seem to answer this simple question: what is the danger in creating something smarter than you? They've created AI so smart, through "deep learning," that it's outsmarting the people who made it. The reason is the "black box" style of code the AI is built on – it's built solely to become smarter, and we have no way to regulate that knowledge. That might not seem like a terrible thing if you want to build superintelligence. But we've all experienced something minor going wrong, or a bug, in our current electronics. Imagine that, but in a Robojudge that can sentence you to 10 years in prison with no explanation other than "I've been fed data and this is what I compute" – or a bug in the AI of a busy airport. We need regulation now, before we create something we can't control. Max Tegmark's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.
Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time, but we have to lay the right groundwork now while we still can.
Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. It's all well and good to be super-intelligent, he argues, but if you don't have rationality and empathy to match it, the results will be wasted and we could just end up with an incredible number-cruncher. In this illuminating chat, he makes the case for thinking bigger. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.