Crazy dreams help us make sense of our memories

A new theory suggests that dreams' illogical logic has an important purpose.

Credit: Paul Fleet/Adobe Stock
  • If consolidating memories as we sleep is like machine learning, maybe dreams keep our "algorithms" on track.
  • Machine learning is optimized by the injection of a certain amount of nonsense data.
  • Maybe dreams are just weird enough to do the same for us as we sleep.


    • For a while now, the leading theory about what we're doing when we dream has been that we're sorting through our experiences of the last day or so, consolidating some of them into memories for long-term storage and discarding the rest. That doesn't explain, though, why our dreams are so often so exquisitely weird.

      A new theory proposes our brains toss in all that crazy as a way of helping us process our daily experiences, much in the way that programmers add unrelated, random-ish nonsense, or "noise," into machine learning data sets to help computers discern useful, predictive patterns in the data they're fed.

      Overfitting

      The goal of machine learning is to supply an algorithm with a data set, a "training set," in which it can recognize patterns and from which it can derive predictions that hold for other, as-yet-unseen data.

      If a machine learning algorithm learns its training set too well, it merely spits out a prediction that precisely, and uselessly, matches that data rather than the underlying patterns within it, patterns likely to hold true of other, thus-far-unseen data as well. In such a case, the algorithm describes what the data set is rather than what it means. This is called "overfitting."
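Overfitting is easy to demonstrate in a few lines. This toy NumPy sketch (the data and model choices are mine, not the article's) fits seven noisy samples of a simple linear trend with both an overly flexible polynomial and a plain line:

```python
import numpy as np

rng = np.random.default_rng(0)

# Seven noisy samples of a simple linear trend: the "training set".
x_train = np.linspace(0, 1, 7)
y_train = 2.0 * x_train + rng.normal(scale=0.3, size=7)

P = np.polynomial.polynomial.Polynomial
# A degree-6 polynomial has enough parameters to pass through all
# seven points exactly: zero training error, i.e. it overfits.
overfit = P.fit(x_train, y_train, deg=6)
# A degree-1 fit can only capture the broad upward trend.
simple = P.fit(x_train, y_train, deg=1)

# Unseen data from the same underlying trend, including points
# between and just outside the training samples.
x_test = np.linspace(-0.2, 1.2, 101)
y_test = 2.0 * x_test

overfit_err = np.mean((overfit(x_test) - y_test) ** 2)
simple_err = np.mean((simple(x_test) - y_test) ** 2)
# The model that memorized its training set fares far worse on data
# it has not seen: it described what the data is, not what it means.
```

The degree-6 fit scores perfectly on the seven points it saw and badly everywhere else, which is exactly the failure the article describes.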


      The value of noise

      To keep machine learning from becoming too fixated on the specific data points in the set being analyzed, programmers may introduce extra, unrelated data as noise or corrupted inputs that are less self-similar than the real data being analyzed.

      This noise typically has nothing to do with the project at hand. It's there, metaphorically speaking, to "distract" and even confuse the algorithm, forcing it to step back a bit to a vantage point at which patterns in the data may be more readily perceived and not drawn from the specific details within the data set.
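This "distraction" can also be sketched as a toy NumPy example (a simple form of input-noise injection; the setup and numbers are illustrative, not from the article). A flexible polynomial that would otherwise memorize seven noisy points is instead trained on jittered copies of them:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(0, 1, 7)
y = 2.0 * x + rng.normal(scale=0.3, size=7)

P = np.polynomial.polynomial.Polynomial
# Without noise, a degree-6 polynomial memorizes all seven points.
memorizer = P.fit(x, y, deg=6)

# Noise injection: replicate each point 20 times with jittered inputs.
# No single curve can pass through every copy, so the least-squares
# fit is forced back from the specific points toward the broad trend.
x_noisy = np.repeat(x, 20) + rng.normal(scale=0.1, size=7 * 20)
y_noisy = np.repeat(y, 20)
regularized = P.fit(x_noisy, y_noisy, deg=6)

train_err_memo = np.mean((memorizer(x) - y) ** 2)   # essentially zero
train_err_reg = np.mean((regularized(x) - y) ** 2)  # deliberately nonzero
```

The noise-trained model gives up its perfect score on the training points; in exchange it typically tracks the underlying pattern, and hence unseen data, better.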

      Unfortunately, overfitting also occurs a lot in the real world as people race to draw conclusions from insufficient data points — xkcd has a fun example of how this can happen with election "facts."

      (In machine learning, there's also "underfitting," where an algorithm is too simple to track enough aspects of the data set to glean its patterns.)
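Underfitting can be shown with an equally small toy (again illustrative NumPy, not from the article): a constant-only model is too simple to track even a plain linear trend.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 7)
y = 2.0 * x + rng.normal(scale=0.1, size=7)

# A degree-0 "model" is just a constant (the mean of y): too simple
# to follow the upward trend, so it fits even its own training data badly.
P = np.polynomial.polynomial.Polynomial
underfit = P.fit(x, y, deg=0)
train_err = np.mean((underfit(x) - y) ** 2)
```

Unlike an overfit model, which aces its training set, an underfit one fails on the training data itself.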

      Credit: agsandrew/Adobe Stock

      Nightly noise

      There remains a lot we don't know about how much storage space our noggins contain. However, it's obvious that if the brain remembered absolutely everything we experienced in every detail, that would be an awful lot to remember. So it seems the brain consolidates experiences as we dream. To do this, it must make sense of them. It must have a system for figuring out what's important enough to remember and what's unimportant enough to forget rather than just dumping the whole thing into our long-term memory.

      Performing such a wholesale dump would be an awful lot like overfitting: simply documenting what we've experienced without sorting through it to ascertain its meaning.

      This is where the new theory, the Overfitting Brain Hypothesis (OBH) proposed by Erik Hoel of Tufts University, comes in. Suggesting that perhaps the brain's sleeping analysis of experiences is akin to machine learning, he proposes that the illogical narratives in dreams are the biological equivalent of the noise programmers inject into algorithms to keep them from overfitting their data. He says that this may supply just enough off-pattern nonsense to force our brains to see the forest and not the trees in our daily data, our experiences.

      Our experiences, of course, are delivered to us as sensory input, so Hoel suggests that dreams are a kind of sensory-input noise, a biologically realistic noise injection with a narrative twist:

      "Specifically, there is good evidence that dreams are based on the stochastic percolation of signals through the hierarchical structure of the cortex, activating the default-mode network. Note that there is growing evidence that most of these signals originate in a top-down manner, meaning that the 'corrupted inputs' will bear statistical similarities to the models and representations of the brain. In other words, they are derived from a stochastic exploration of the hierarchical structure of the brain. This leads to the kind of structured hallucinations that are common during dreams."

      Put plainly, our dreams are just realistic enough to engross us and carry us along, but just different enough from our experiences, our "training set," to effectively serve as noise.

      It's an interesting theory.

      Obviously, we don't know the extent to which our biological mental processes actually resemble the comparatively simpler, man-made machine learning. Still, the OBH is worth thinking about, maybe at least more worth thinking about than whatever that was last night.

