Crazy dreams help us make sense of our memories
A new theory suggests that dreams' illogical logic has an important purpose.
Overfitting
The goal of machine learning is to give an algorithm a data set, a "training set," in which it can recognize patterns and from which it can derive predictions that apply to other, as-yet-unseen data.

If an algorithm learns its training set too well, it merely spits out predictions that precisely, and uselessly, match that data rather than capturing the underlying patterns within it, the patterns likely to hold true of other, unseen data. In such a case, the algorithm describes what the data set *is* rather than what it *means*. This is called "overfitting."
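To make that concrete, here is a minimal sketch of overfitting in Python. The sine-wave "underlying pattern," the noise level, and the polynomial degrees are illustrative assumptions, not anything from a real project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten training points drawn from a simple underlying pattern plus noise.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)

# Fresh points from the same pattern, standing in for unseen data.
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, size=10)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```

The degree-9 polynomial threads through every training point (near-zero training error) but swings wildly between them, so it does worse on the fresh points: it has described what the training set *is*, not what it *means*.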
The value of noise
To keep machine learning from becoming too fixated on the specific data points in the set being analyzed, programmers may introduce extra, unrelated data as noise, or corrupt some inputs so that they are less self-similar than the real data.

This noise typically has nothing to do with the project at hand. It's there, metaphorically speaking, to "distract" and even confuse the algorithm, forcing it to step back to a vantage point from which patterns in the data are more readily perceived rather than read off the specific details of the data set.

Unfortunately, overfitting also occurs a lot in the real world as people race to draw conclusions from insufficient data points. xkcd has a fun example of how this can happen with election "facts": https://xkcd.com/1122/

(In machine learning, there's also "underfitting," where an algorithm is too simple to track enough aspects of the data set to glean its patterns.)
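As a rough illustration of why noise helps, we can extend the toy example above: refit the over-flexible degree-9 polynomial, but first replicate each training point with a small random corruption of its input. The jitter scale and replication count here are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, size=10)

def test_error(coeffs):
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# Without noise: the degree-9 polynomial simply memorizes the training set.
plain = np.polyfit(x_train, y_train, 9)

# With noise: 20 jittered copies of each point stand in for corrupted inputs.
x_noisy = np.repeat(x_train, 20) + rng.normal(0, 0.05, size=200)
y_noisy = np.repeat(y_train, 20)
noisy = np.polyfit(x_noisy, y_noisy, 9)

print(f"test error, memorized fit:      {test_error(plain):.4f}")
print(f"test error, noise-injected fit: {test_error(noisy):.4f}")
```

Because the corrupted copies disagree slightly with one another, the fit can no longer thread every original point exactly and is pulled back toward the smoother underlying curve; the noise "distracts" the algorithm from the specific details of the training set.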
Nightly noise
There's a lot we don't know about how much storage space our noggins contain, but it's clear that remembering absolutely everything we experience, in full detail, would be an awful lot to hold onto. So it seems the brain consolidates experiences as we dream. To do this, it must make sense of them: it needs a system for figuring out what's important enough to remember and what's unimportant enough to forget, rather than just dumping the whole thing into long-term memory.

Performing such a wholesale dump would look an awful lot like overfitting: simply documenting what we've experienced without sorting through it to ascertain its meaning.

This is where the new theory, the Overfitting Brain Hypothesis (OBH) proposed by Erik Hoel of Tufts University (https://arxiv.org/pdf/2007.09560.pdf), comes in. Suggesting that the brain's sleeping analysis of experiences may be akin to machine learning, Hoel proposes that the illogical narratives in dreams are the biological equivalent of the noise programmers inject into algorithms to keep them from overfitting their data. Dreams, he says, may supply just enough off-pattern nonsense to force our brains to see the forest and not the trees in our daily data, our experiences.

Our experiences, of course, are delivered to us as sensory input, so Hoel suggests that dreams are sensory-input noise, biologically realistic noise injection with a narrative twist:

"Specifically, there is good evidence that dreams are based on the stochastic percolation of signals through the hierarchical structure of the cortex, activating the default-mode network. Note that there is growing evidence that most of these signals originate in a top-down manner, meaning that the 'corrupted inputs' will bear statistical similarities to the models and representations of the brain. In other words, they are derived from a stochastic exploration of the hierarchical structure of the brain. This leads to the kind of structured hallucinations that are common during dreams."

Put plainly, our dreams are just realistic enough to engross us and carry us along, but just different enough from our experiences, our "training set," to effectively serve as noise.

It's an interesting theory.

Obviously, we don't know the extent to which our biological mental processes actually resemble the comparatively simpler, man-made machine learning. Still, the OBH is worth thinking about, maybe at least more worth thinking about than whatever *that* was last night.
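Hoel's analogy maps naturally onto noise-injection regularizers used in deep learning. One standard example is dropout, which, loosely like the top-down dream noise described in the quote above, corrupts the network's own internal activity rather than the raw inputs. Below is a minimal PyTorch sketch; the tiny architecture and dropout rate are illustrative assumptions, and the waking/sleeping framing is our analogy, not an implementation from Hoel's paper:

```python
import torch
import torch.nn as nn

# A small network with dropout on its hidden layer. In train mode, dropout
# randomly silences half the hidden units on each forward pass, corrupting
# the model's own internal representations rather than its inputs.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

x = torch.randn(8, 10)  # a batch of eight made-up input vectors

model.train()  # noise on: repeated passes give different answers
print(torch.allclose(model(x), model(x)))  # almost surely False

model.eval()   # noise off: passes are deterministic
print(torch.allclose(model(x), model(x)))  # True
```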
Objects in lucid dreams are perceived as real, study discovers
It's all about smooth pursuit.
- While lucid dreaming, we use the same eye movement patterns as when we observe physical actions.
- However, we use different eye patterns when we imagine movement.
- Researchers believe this might help add to our understanding of consciousness.
To understand smooth pursuit, you can try this experiment (https://blogs.scientificamerican.com/illusion-chasers/what-lucid-dreams-look-like/): track your index finger, held out at arm's length, as you move it from left to right several times. Your eyes follow the movement in a reliably smooth pattern. By contrast, when you try to imagine the same exact movement, your eyes will not flow smoothly from left to right but jump ahead to particular points. This is due to your saccadic system, whose name derives from the French word for "jolt." In both lucid dreaming and waking perception, it is your smooth pursuit system that is engaged.

The researchers write that vividness relies on the intensity of neural activation. Imagined images compete with our normal sensory processes, but when we're asleep, the sensory input system that absorbs the world around us is suppressed. External objects that could bombard our perceptual processes are eliminated. They continue:

"Our findings suggest that, in this respect, the visual imagery that occurs during REM sleep is more similar to perception than imagination. . . . Under conditions of low levels of competing sensory input and high levels of activation in extrastriate visual cortices (conditions associated with REM sleep), the intensity of neural activation underlying the imagery of visual motion (and therefore its vividness) is able to reach levels typically only associated with waking perception."

Practicing Skills In Your Sleep Can Be as Effective as Physical Training
Just imagining movement fires the same neurons as if we were actually moving. A new study shows we can wake our sleeping mind to practice motor skills in our dreams.
Mental training is arguably as important as physical fitness. That argument is gaining strength as a growing body of literature unravels the once-mysterious connections between consciousness and movement. We know that the murky domain of subconscious and autonomic actions greatly influences our waking lives. Now we’re learning how to train our unconscious selves for the benefit of our daily actions.
What Do a Robot's Dreams Look Like? Google Found Out
They may look odd, but it’s all part of Google’s plan to solve a huge issue in machine learning: recognizing objects in images.
When Google asked its neural network to dream, the machine began generating some pretty wild images.
To be clear, Google’s software engineers didn’t literally ask a computer to dream. What they did was ask its neural network to repeatedly alter an original photo they fed into it, amplifying whatever patterns the network’s layers detected in the image. This was all part of their Deep Dream program.
The purpose was to make the network better at finding patterns, something computers are none too good at. So, engineers started by “teaching” the neural network to recognize certain objects by giving it 1.2 million images, complete with object classifications the computer could understand.
These classifications allowed Google’s AI to learn to detect the distinguishing qualities of certain objects in an image, like a dog or a fork. But Google’s engineers wanted to go one step further, which is where Deep Dream comes in: it turns the network’s pattern detection back on the image itself, adding those hallucinogenic qualities.
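The core of the idea is gradient ascent on the image itself: nudge the pixels so that a chosen layer's activations grow stronger, exaggerating whatever patterns that layer already detects. Here is a heavily simplified sketch in PyTorch, using a standard pretrained network rather than Google's own; the layer choice, step size, and iteration count are illustrative guesses, not Google's settings:

```python
import torch
from torchvision import models

# A pretrained convolutional network stands in for Google's. We only read
# its activations, so its weights are frozen.
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

layer = cnn[:16]  # activations from a layer partway up the network

# Start from random noise; Deep Dream can equally start from a photo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(20):
    loss = layer(image).norm()  # how strongly does this layer fire?
    loss.backward()
    with torch.no_grad():
        # Gradient *ascent*: change the image to excite the layer even more,
        # amplifying the patterns it has learned to detect.
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)  # keep pixel values in a valid range
```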
The Evolutionary Biology of Dreams, Explained
Dreams might be a whole lot sexier than we thought, but not because of their narrative content. Neurologist Patrick McNamara's theory links the biological changes in our brains during sleep to humans' inherent desire to procreate.
Carl Jung battled his one-time friend and mentor, Sigmund Freud, on a number of topics, though perhaps none as perniciously as dreaming. An entire cottage industry of depth psychology and journaling workshops grew out of Jung’s theories of individuation—integrating the conscious and unconscious. To Jung, dreams—the primal material of the unconscious—unlocked humanity’s archetypal code, revealing more than they concealed, in direct contradiction to Freud’s ideas.
