New studies show that some people can hear and respond to questions while dreaming.
- Four research teams in four countries independently communicated with sleeping volunteers.
- A total of 36 participants correctly responded to questions 18.6% of the time.
- Researchers believe this could open up new avenues for treating anxiety, depression, and trauma.
Dream Hacking: Watch 3 Groundbreaking Experiments on Decisions, Addictions, and Sleep | NOVA | PBS<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="77f2961e9a759ae62924a8efd37a61f0"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/7M06fJxiayo?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><p>Participants in this study certainly experienced their imagination stretching, with one volunteer "hearing" the math problem (what is eight minus six?) through a car radio while another dreamer was questioned by a movie narrator.</p><p>The results were not overwhelmingly positive, mind you, yet they proved successful enough to warrant further research. One researcher called this "<a href="https://www.sciencemag.org/news/2021/02/scientists-entered-peoples-dreams-and-got-them-talking" target="_blank">proof of concept</a>" more than total confirmation. Over 60 percent of the questions went unanswered. Another 17.7 percent of responses were unclear, while just over 3 percent of questions were answered incorrectly. Yet 18.6 percent were answered correctly, an impressive feat for the sleeping.</p><p>While the researchers aren't stealing secrets from the subconscious, they hope this discovery could open new therapeutic avenues for treating anxiety, depression, and trauma. Accessing "dream content" and shaping it with new input could lead to non-invasive forms of treatment—or "Inception."</p><p>As the team writes, </p><p style="margin-left: 20px;">"The scientific investigation of dreaming, and of sleep more generally, could be beneficially explored using interactive dreaming. 
Specific cognitive and perceptual tasks could be assigned with instructions presented via softly spoken words, opening up a new frontier of research."</p><p>Of course, more research is needed, though volunteers will likely not be hard to find. Peeling back the layers of consciousness is both a philosophical pursuit and a nighttime hobby, one that continues to reveal possibilities as we evolve our understanding of the unconscious. </p><p> --</p><p><em>Stay in touch with Derek on <a href="http://www.twitter.com/derekberes" target="_blank">Twitter</a> and <a href="https://www.facebook.com/DerekBeresdotcom" target="_blank" rel="noopener noreferrer">Facebook</a>. His most recent book is</em> "<em><a href="https://www.amazon.com/gp/product/B08KRVMP2M?pf_rd_r=MDJW43337675SZ0X00FH&pf_rd_p=edaba0ee-c2fe-4124-9f5d-b31d6b1bfbee" target="_blank" rel="noopener noreferrer">Hero's Dose: The Case For Psychedelics in Ritual and Therapy</a>."</em></p>
A new theory suggests that dreams' illogical logic has an important purpose.
Overfitting<p>The goal of machine learning is to supply an algorithm with a data set, a "training set," in which patterns can be recognized and from which predictions about other, unseen data sets can be derived.</p><p>If the algorithm learns its training set too well, it merely spits out predictions that precisely — and uselessly — match that data rather than capturing the underlying patterns within it, patterns that are likely to hold for other, as-yet-unseen data. In such a case, the algorithm describes what the data set <em>is</em> rather than what it <em>means</em>. This is called "overfitting."</p><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDc4NTQ4Ni9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY2NDM4NDk1Mn0.bMHbBbt7Nz0vmmQ8fdBKaO-Ycpme5eOCxbjPLEHq9XQ/img.jpg?width=980" id="5049a" class="rm-shortcode" data-rm-shortcode-id="f9a6823125e01f4d69ce13d1eef84486" data-rm-shortcode-name="rebelmouse-image" data-width="1440" data-height="585" />
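To make the distinction concrete, here is a minimal sketch (Python with NumPy; an illustration, not from the article or any cited study) that fits the same small, noisy data set with a simple model and a very flexible one. The flexible model memorizes the training points almost perfectly, which is exactly what makes it worse on unseen data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": a simple linear trend plus measurement noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.3, size=10)

# Unseen data drawn from the same underlying pattern.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2.0 * x_test + rng.normal(0.0, 0.3, size=10)

def train_test_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((model(x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

train_simple, test_simple = train_test_mse(1)  # captures the trend
train_flex, test_flex = train_test_mse(9)      # interpolates all 10 points

# The degree-9 fit describes what the training data *is* (near-zero training
# error) rather than what it *means*, so its error on unseen data is larger.
```

A degree-9 polynomial through 10 points drives the training error to essentially zero, the numerical signature of overfitting.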
The value of noise<p>To keep machine learning from becoming too fixated on the specific data points in the set being analyzed, programmers may introduce extra, unrelated data as noise, or corrupt the inputs so that they are less self-similar than the real data being analyzed.</p><p>This noise typically has nothing to do with the project at hand. It's there, metaphorically speaking, to "distract" and even confuse the algorithm, forcing it to step back to a vantage point from which the broader patterns in the data can be perceived, rather than inferred from the specific details of the data set.</p><p>Unfortunately, overfitting also occurs a lot in the real world as people race to draw conclusions from insufficient data points — xkcd has a fun example of how this can happen with <a href="https://xkcd.com/1122/" target="_blank">election "facts."</a></p><p>(In machine learning, there's also "underfitting," where an algorithm is too simple to track enough aspects of the data set to glean its patterns.)</p><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDc4NTQ5My9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMDE5NjY1M30.iS2bq7WEQLeS34zNFPnXwzAZZn9blCyI-KVuXmcHI6o/img.jpg?width=980" id="cd486" class="rm-shortcode" data-rm-shortcode-id="c49cfbbffceb00e3f37f00e0fef859d9" data-rm-shortcode-name="rebelmouse-image" data-width="1440" data-height="810" />
Credit: agsandrew/Adobe Stock
Nightly noise<p>There remains a lot we don't know about how much storage space our noggins contain. However, it's clear that if the brain retained absolutely every detail of everything we experienced, that would be an awful lot to remember. So it seems the brain consolidates experiences as we dream. To do this, it must make sense of them: it must have a system for figuring out what's important enough to remember and what's unimportant enough to forget, rather than just dumping the whole thing into our long-term memory.</p><p>Performing such a wholesale dump would look an awful lot like overfitting: simply documenting what we've experienced without sorting through it to ascertain its meaning.</p><p>This is where the new theory, the <a href="https://arxiv.org/pdf/2007.09560.pdf" target="_blank">Overfitting Brain Hypothesis</a> (OBH) proposed by Erik Hoel of Tufts University, comes in. Suggesting that the brain's sleeping analysis of experiences may be akin to machine learning, he proposes that the illogical narratives in dreams are the biological equivalent of the noise programmers inject into algorithms to keep them from overfitting their data. This may supply just enough off-pattern nonsense to force our brains to see the forest and not the trees in our daily data, our experiences.</p><p>Our experiences, of course, are delivered to us as sensory input, so Hoel suggests that dreams are sensory-input noise: biologically realistic noise injection with a narrative twist:</p><p style="margin-left: 20px;">"Specifically, there is good evidence that dreams are based on the stochastic percolation of signals through the hierarchical structure of the cortex, activating the default-mode network. Note that there is growing evidence that most of these signals originate in a top-down manner, meaning that the 'corrupted inputs' will bear statistical similarities to the models and representations of the brain. 
In other words, they are derived from a stochastic exploration of the hierarchical structure of the brain. This leads to the kind of structured hallucinations that are common during dreams."</p><p>Put plainly, our dreams are just realistic enough to engross us and carry us along, but just different enough from our experiences — our "training set" — to effectively serve as noise.</p><p>It's an interesting theory.</p><p>Obviously, we don't know the extent to which our biological mental processes actually resemble the comparatively simpler, man-made machine learning. Still, the OBH is worth thinking about, perhaps at least more worth thinking about than whatever <em>that</em> was last night.</p>
Getting plenty of sleep just became even more important.
- A new study finds that people deprived of sleep fare better at learning what to fear and what not to fear than those getting only some sleep.
- Test subjects learned to associate colors with electric shocks, but only some unlearned the association.
- The findings could be used to help create new treatments for those at risk of PTSD or anxiety.