Buh-Bye, ‘Traditional’ Neural Networks. Hello, Capsules.

The man who first demonstrated the power of neural networks introduces capsule networks.

(JIRSAK via SHUTTERSTOCK)

If you have a recent phone, odds are you have a neural net in your pocket or handbag. That's how ubiquitous neural nets have become. They've got cachet, and manufacturers brag about using them in everything from voice recognition to smart thermostats to self-driving cars. A search for "neural network" on Google nets nearly 35 million hits. But "traditional" neural nets may already be on their way out, thanks to Geoff Hinton of Google itself. He's introduced something even cutting-edgier: "capsule" neural nets.


The inspiration for traditional neural networks is, as their name implies, the neurons in our brains, and the way these tiny bodies are presumed to aggregate understanding through complex interconnections between many, many individual neurons, each of which is handling some piece of an overall puzzle.

 

(VITSTUDIO via SHUTTERSTOCK)

Neural nets are more properly referred to as "artificial neural networks," or "ANNs" for short. (We'll just call them "neural networks" or "nets" here.) A neural network is a classifier that can sort an object into a correct category based on input data.

The foundation of a neural network is its artificial neurons, or "ANs," each one assigning a value to information it's received according to some rule. Groups of ANs are arranged in layers that together come to a prediction of some sort that's then passed on to the next layer, and so on, until understanding is achieved. During training, error signals travel back down the layers (a process called backpropagation), continually modifying the ANs' rules to fix errors and deliver the most accurate outputs.
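To make that concrete, here's a minimal, hypothetical sketch in Python (NumPy only; the data, layer sizes, and learning rate are invented for illustration, and none of this comes from Hinton's papers) of a tiny two-layer network learning by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 100 points in 2D, labeled by the sign of the product of their coordinates.
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two layers of "artificial neurons": 2 inputs -> 8 hidden -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

for step in range(3000):
    # Forward pass: each layer's prediction feeds the next layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass (backpropagation): the error signal travels back
    # down the layers, nudging each neuron's "rule" (its weights).
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta_out) / len(X)
    W1 -= 0.5 * (X.T @ delta_h) / len(X)

print("training accuracy:", ((out > 0.5) == y).mean())
```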

(SIN14 via SHUTTERSTOCK)

Neural nets predate modern computers: the first instance was the Perceptron algorithm, developed by Cornell's Frank Rosenblatt in 1957. Taking advantage of their potential, though, requires modern computing horsepower, so neural nets saw little practical development until University of Toronto professor Geoff Hinton demonstrated in 2012 just how good they could be at recognizing images. (Google and Apple now use neural nets in their photo platforms, with Google recently revealing that its network is 30 layers deep.) Neural networks are a critical component in the development of artificial intelligence (AI).

One appealing aspect of neural networks is that programmers don’t need to tell them how to do their job. Instead, a massive database of samples — images, voices, etc. — is provided to the neural network, which then figures out for itself how to identify what it’s been given. It takes the neural net a while, and lots of samples, to establish the identities of similar items presented under different conditions: in different positions, for example. As Tom Simonite writes in WIRED:

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet. 

This is a problem: the need for so many samples limits neural nets' usefulness to tasks for which massive data sets exist. In order for AI to be of use with more limited data sets — for analyzing medical imagery, for example — it needs to be smarter with less input data. Hinton recently told WIRED, "I think the way we're doing computer vision is just wrong. It works better than anything else at present, but that doesn't mean it's right."

Hinton has been pondering a potential solution since the late '90s: capsules. He has just published two papers — at arXiv and OpenReview — detailing a functional "capsule network," because, as he says, "We've finally got something that works well."

Geoff Hinton (UNIVERSITY OF TORONTO)

With Hinton's capsule network, layers are composed not of individual ANs, but of small groups of ANs arranged in functional pods, or "capsules." Each capsule is programmed to detect a particular attribute of the object being classified, thus getting around the need for massive input data sets. This makes capsule networks a departure from the "let them teach themselves" approach of traditional neural nets.

A layer is assigned the task of verifying the presence of some characteristic, and when enough capsules are in agreement on the meaning of their input data, the layer passes on its prediction to the next layer.
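In the arXiv paper this is called "routing by agreement." Here's a heavily simplified sketch of the idea in Python with NumPy; the shapes, iteration count, and random "votes" are invented for illustration, so treat it as a toy rendering of the mechanism rather than the paper's actual implementation:

```python
import numpy as np

def squash(v, axis=-1):
    # Nonlinearity from the capsules paper: shrinks a capsule's output
    # vector to length < 1, so its length can act like a probability.
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + 1e-9)

def route(votes, iterations=3):
    # votes: (n_lower, n_upper, dim). Each lower capsule predicts
    # ("votes on") what each upper capsule's output vector should be.
    n_lower, n_upper, dim = votes.shape
    logits = np.zeros((n_lower, n_upper))  # routing weights start uniform

    for _ in range(iterations):
        # Each lower capsule spreads its vote across the upper capsules.
        weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        # Each upper capsule's output is the weighted sum of the votes.
        outputs = squash((weights[..., None] * votes).sum(axis=0))
        # Agreement step: votes that line up with an upper capsule's
        # output earn more routing weight on the next pass.
        logits += np.einsum('ijd,jd->ij', votes, outputs)

    return outputs

# Hypothetical sizes: 10 lower capsules voting on 3 upper capsules in 8-D.
votes = np.random.default_rng(1).normal(size=(10, 3, 8))
print(route(votes).shape)  # -> (3, 8)
```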

So far, capsule nets have proven equally adept as traditional neural nets at understanding handwriting, and they cut the error rate in half for identifying toy cars and trucks. Impressive, but it's just a start. The current implementation of capsule networks is, according to Hinton, slower than it will ultimately need to be. He's got ideas for speeding them up, and when he does, Hinton's capsules may well spark a major leap forward in neural networks, and in AI.

 

This is what aliens would 'hear' if they flew by Earth

A Mercury-bound spacecraft's noisy flyby of our home planet.

Image source: sdecoret on Shutterstock/ESA/Big Think
  • There is no sound in space, but if there were, this is what it might sound like passing by Earth.
  • A spacecraft bound for Mercury recorded data while swinging around our planet, and that data was converted into sound.
  • Yes, in space no one can hear you scream, but this is still some chill stuff.

First off, let's be clear what we mean by "hear" here. (Here, here!)

Sound, as we know it, requires air. What our ears capture is actually oscillating waves of fluctuating air pressure. Cilia, tiny hair-like fibers in our ears, respond to these fluctuations by firing off corresponding nerve signals to our brains, which interpret them as tones at different pitches. This is what we perceive as sound.

All of which is to say, sound requires air, and space is notoriously void of that. So, in terms of human-perceivable sound, it's silent out there. Nonetheless, there can be cyclical events in space — such as oscillating values in streams of captured data — that can be mapped to pitches, and thus made audible.
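That mapping from data to pitch is easy to sketch in code. Here's a hypothetical example in Python (NumPy plus the standard-library wave module), with an invented oscillating data stream standing in for the spacecraft's readings; it's our illustration, not INAF's actual pipeline:

```python
import numpy as np
import wave

# An invented stream of slowly oscillating, noisy sensor readings.
rng = np.random.default_rng(2)
readings = np.sin(np.linspace(0, 20 * np.pi, 200)) + 0.3 * rng.normal(size=200)

# Map each reading onto an audible pitch range (here, 200-800 Hz).
lo, hi = readings.min(), readings.max()
pitches = 200 + (readings - lo) / (hi - lo) * 600

# Render each reading as a 50 ms tone and string them together.
rate, tone_len = 44100, 0.05
t = np.linspace(0, tone_len, int(rate * tone_len), endpoint=False)
audio = np.concatenate([np.sin(2 * np.pi * p * t) for p in pitches])

# Write a mono, 16-bit WAV file you can actually listen to.
with wave.open("sonified.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(rate)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```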

BepiColombo

Image source: European Space Agency

The European Space Agency's BepiColombo spacecraft took off from Kourou, French Guiana on October 20, 2019, on its way to Mercury. To reduce its speed for the proper trajectory to Mercury, BepiColombo executed a "gravity-assist flyby," slinging itself around the Earth before leaving home. Over the course of its 34-minute flyby, its two data recorders captured five data sets that Italy's National Institute for Astrophysics (INAF) enhanced and converted into sound waves.

Into and out of Earth's shadow

In April, BepiColombo made its closest approach to Earth, ranging from 256,393 kilometers (159,315 miles) to 129,488 kilometers (80,460 miles) away. The audio above starts as BepiColombo begins to slip into Earth's shadow, on the side of the planet facing away from the sun.

The data was captured by BepiColombo's Italian Spring Accelerometer (ISA) instrument. Says Carmelo Magnafico of the ISA team, "When the spacecraft enters the shadow and the force of the Sun disappears, we can hear a slight vibration. The solar panels, previously flexed by the Sun, then find a new balance. Upon exiting the shadow, we can hear the effect again."

In addition to making for some cool sounds, the phenomenon allowed the ISA team to confirm just how sensitive their instrument is. "This is an extraordinary situation," says Carmelo. "Since we started the cruise, we have only been in direct sunshine, so we did not have the possibility to check effectively whether our instrument is measuring the variations of the force of the sunlight."

When the craft arrives at Mercury, the ISA will be tasked with studying the planet's gravity.

Magnetosphere melody

The second clip is derived from data captured by BepiColombo's MPO-MAG magnetometer, AKA MERMAG, as the craft traveled through Earth's magnetosphere, the region surrounding the planet that's shaped by its magnetic field.

BepiColombo eventually entered the hellish magnetosheath, the region battered by cosmic plasma from the sun, before passing through the relatively peaceful magnetopause, the boundary that marks the transition into the magnetosphere proper.

MERMAG will map Mercury's magnetosphere, as well as the magnetic state of the planet's interior. As a secondary objective, it will analyze the dynamics of that magnetosphere and its interaction with the solar wind and with Mercury itself.

Recording session over, BepiColombo is now slipping through space silently with its arrival at Mercury planned for 2025.
