Who's in the Video
David Eagleman is a neuroscientist at Stanford University and an internationally bestselling author. He is co-founder of two venture-backed companies, Neosensory and BrainCheck, and he also directs the Center for[…]

David Eagleman: We’re entering a very interesting stage of human history right now where we can start importing technology to enhance our natural senses or perception of the world. As it stands now, as biological creatures, we only see a very small strip of what's going on. Take electromagnetic radiation: there's a little strip of it that we can see, and we call it “visible light,” but the rest of that spectrum -- radio waves, television broadcasts, cell phone signals, gamma rays, X-rays -- is invisible to us because we don't have biological receptors for it. So CNN is passing through your body right now and you don't know it, because you don't have the right receptors for it. Well, it turns out that the part of the spectrum we can see is one ten-trillionth of it, so we’re not seeing most of what's going on, right?

What's very interesting, I think, as we keep pushing forward with technology, is we’ll be able to take more and more data from those invisible parts of the world and start feeding them into our brain. So, for instance, snakes see in the infrared range and honey bees see in the ultraviolet range. Well, there's no reason why we can't start building devices to see that and feed it directly into our brains.  

It turns out that what the brain is really good at is extracting information from streams of data, and it doesn't matter how you get those data streams there. One of the things my lab is doing is building a vibratory vest so that we can feed sensory information in through the skin of your torso rather than through the more typical sensory channels. So, for example, we’re doing this for people who are deaf who want to be able to hear. We set up a microphone on the vest, and the auditory stream is turned into a matrix of vibrations on your skin. What that does is feed electrical signals into the brain that represent the auditory information.
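[Editor's note: the audio-to-vibration mapping described above can be sketched in a few lines. This is a minimal illustration, not Neosensory's actual implementation; the function name, motor count, and band-splitting scheme are all hypothetical. The idea is simply that each vibration motor tracks the sound energy in one frequency band.]

```python
import numpy as np

def frame_to_motor_intensities(frame, n_motors=24):
    """Map one audio frame to per-motor vibration intensities in [0, 1].

    Hypothetical sketch: take the magnitude spectrum of the frame,
    pool it into n_motors frequency bands, and normalize, so each
    motor's vibration strength tracks the energy in its band.
    """
    windowed = frame * np.hanning(len(frame))           # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))            # magnitude spectrum
    bands = np.array_split(spectrum, n_motors)          # one band per motor
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy        # scale to 0..1

# A pure 440 Hz tone concentrates its energy in one low-frequency band,
# so mostly one motor on the vest would vibrate.
t = np.arange(1024) / 16000                             # 64 ms frame at 16 kHz
intensities = frame_to_motor_intensities(np.sin(2 * np.pi * 440 * t))
```

A real device would run this continuously over short overlapping frames, but the principle is the same: the brain receives a spatial pattern of vibration that preserves the structure of the sound.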

And if it sounds crazy that you would ever be able to understand all these signals through your skin, remember that all the auditory system is doing is taking sound waves and turning them into electrical signals in your brain.

So we’re developing this right now so that deaf people will be able to hear through their skin. But our next stage is to feed not just auditory information but other data streams into the vest -- stock market data or weather data, for example -- and people will be able to perceive those streams just by walking around all day, unconsciously taking the stream of information into their body. It will expand their sensory world.

I think this is where technology and the brain have a very fertile meeting ground: we will be able to enhance the window of reality that we’re able to see.


Directed / Produced by Jonathan Fowler & Elizabeth Rodd

