Who's in the Video
Marcelo Magnasco is the Head of the Laboratory of Mathematical Physics at Rockefeller University, where he leads a group of physicists who use living beings as a source of inspiration[…]

Our ears do more than hear. They can sense when someone is stressed, relaxed, or angry, and they can recognize the shininess of bathroom walls.

Question: How do we distinguish between noise frequencies?

Marcelo Magnasco: Hearing is a difficult sense to understand from a theoretical viewpoint because it is unlike the other senses in many relevant ways. For instance, between our two retinas we have about two hundred million photoreceptors, so each of our eyes is a hundred-megapixel camera. We have about a hundred million olfactory receptors in our noses, and well over ten to twenty million receptors for touch, pain, and temperature in our skin. Yet the total number of auditory receptors in both of our cochleae is something like seven to eight thousand. A minuscule number of cells carries all of the input from sound, and therefore the nervous system needs to extract as much information as it can from each one of them.
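
To put those counts side by side, here is a back-of-the-envelope comparison using the rough figures quoted above (these are the transcript's approximations, not precise anatomical values):

```python
# Approximate receptor counts cited in the interview (rough estimates).
receptors = {
    "vision (both retinas)":   200_000_000,  # ~100 megapixels per eye
    "smell (olfactory)":       100_000_000,
    "touch/pain/temperature":   20_000_000,  # "well over ten to twenty million"
    "hearing (both cochleae)":       8_000,  # ~7,000-8,000 hair cells
}

for sense, n in receptors.items():
    ratio = receptors["vision (both retinas)"] / n
    print(f"{sense:>24}: {n:>11,} receptors  (vision has {ratio:,.0f}x as many)")
```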

The information density being coaxed out of each one of these detectors is much higher, which puts a lot of demand on the nervous system's capacity to process information. In addition, we do not understand the geometry of sound nearly as well as we understand the geometry of vision. We hear a few words, a few seconds of phonemes, and these give rise to a multitude of very different percepts in the brain. One stream you extract is the actual text being spoken. But you also hear the accent of the speaker, the emotional stance of the speaker, and many features related to the speaker's identity: if you know somebody you can easily recognize their voice, and even if you don't, you immediately know whether the speaker is male or female, an adult or a child, and so on. All of these impressions are separated by the brain into different percepts that do not travel in the same stream, so it is difficult to understand in a unified sense exactly what our hearing does.

Then there are all the spatial aspects of hearing that we normally attribute to our sight but that are actually, in large part, derived from hearing. When you hear somebody speaking, you know perfectly well whether they are talking towards you or towards a wall; you know whether they are turned away, simply because of the muffling that happens when the speaker is facing away from you. You know the position of the speaker with a fair amount of accuracy. We have very precise models in our brains of how the human voice sounds when yelling or whispering, and of course there is an ambiguity in the sheer volume of the sound at the ear: you can distinguish somebody whispering in your ear from somebody shouting far away, even though the volume at the ear is precisely the same, simply because you can interpret whether the pattern of the sound is that of a voice stressed by shouting or the muffled sound of somebody whispering.

You also have a clear impression of the space in which a conversation takes place. For instance, the very space I am in now is a quiet space because echoes have been suppressed, though not entirely. Everybody is probably familiar with somebody in their house calling them from a different room and, from the sound of the voice alone, knowing exactly which room they are in. If somebody calls you from the bathroom, you recognize the shininess of the bathroom walls in the sound; you recognize the more muffled sound of somebody calling you from the bedroom, where the mattress and bedding absorb the sound. So there is this variety of different percepts, and it is difficult to integrate audition into a single category, because the brain itself separates out the different attributes of the source, and of the space in which the communication takes place, in a very rapid and dramatic fashion.

Question: Do you work with music imagery at all?

Marcelo Magnasco: Yes, we keep an interest in human perception of fairly complex and idiosyncratic sounds, like the perception of music, because that is ultimately what we are interested in understanding: how human beings understand their world. But we need to simplify our space of hypotheses, and historically there has been some tension in neuroscience between two extremes in sensory research. In one, you present a subject, whether an experimental animal or a person, with extremely simplified stimuli; in vision this would be just a dot of light, a single line, or a grating, a regular spacing of bars. These are extremely abstract stimuli, but on the other hand they are very easily described mathematically in terms of a very small number of parameters: if I am describing a dot of light to you, its only attributes are a horizontal and a vertical position, and so on. This approach therefore has a lot of power. If a particular neuron in my brain or in the brain of an animal responds to dots of light, does it respond specifically to dots that appear in a specific location? It is relatively easy to reconstruct exactly what is causing these neurons to respond. On the other hand, the stimuli are extremely artificial and unrelated to the forces that shaped the evolution of the brain, and it has been extremely difficult to build up much more complex percepts from this very simple evidence. If we try to study how neurons in the brain respond to combinations of lights, the number of parameters blows up so rapidly that it has proven very difficult to get much insight out of the route of building up the world one line, or one simple element, at a time.
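
To make concrete why these simplified stimuli are so tractable, and how quickly combinations blow up, here is a small sketch; the parameterizations below are standard textbook conventions, not a description of any particular experiment:

```python
from dataclasses import dataclass

@dataclass
class Dot:
    x: float  # horizontal position (e.g. degrees of visual angle)
    y: float  # vertical position -- just 2 parameters

@dataclass
class Grating:
    orientation: float   # angle of the bars
    spatial_freq: float  # bar spacing, in cycles per degree
    phase: float         # offset of the bars
    contrast: float      # 4 parameters in total

# Mapping a neuron's response on a grid of g values per parameter
# takes g**p presentations; combining stimuli multiplies p.
g = 10  # grid resolution per parameter
for k in (1, 2, 3, 4):
    p = 2 * k  # k independent dots
    print(f"{k} dot(s): {p} parameters -> {g**p:,} grid points to test")
```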

Another school of thought has said: on the contrary, let us try to do what the brain is actually supposed to do. The brain did not evolve so that we could recognize dots and lines; it evolved so that we could recognize our predators, our prey, and our mates in the context of a natural visual scene, like a woods. How do we recognize the presence of one of our predators? Do we see a lion among the leaves? How do we find our mates? These are very complex natural scenes, but this is what the brain evolved to do. So people have said: let us look at what characterizes these natural scenes, as opposed to random television snow, which is the most random possible stimulus.

What characterizes the very structured scenes of the natural world? Unfortunately those scenes are uncontrolled, mathematically speaking: you go out into the woods, you make a film, you present it to your subject, and you see what the neurons respond to, but you do not have the power of the previous approach's very tight mathematical descriptions. So what we have been looking for in the auditory world are sounds that are natural and ecologically relevant to a given animal, in the sense of being sounds the animal has evolved to recognize and to seek out or flee from, and that we can describe mathematically in closed form. We have been studying what we call auditory textures. These are sounds that occur in a steady state, so that one piece of the sound sounds much like any other piece of the sound.

For instance, the sound of running water: you are next to a brook, you hear [imitates rushing water], and you hear that sound continuously; if you record it for an hour, the first minute and the last minute sound exactly alike. The sound of fire, a fireplace crackling, is a very nice, steady sound which is statistically homogeneous: you can look at one part and at another part and nothing has changed. Or the sound of flies buzzing in the air, or any other natural sound that has this consistency. We then try to take these sounds and abstract them, so that we can describe and resynthesize them computationally with the minimum possible number of parameters.
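
One way to make "statistically homogeneous" concrete is to compare a summary statistic across non-overlapping windows of a recording; for a texture like water or fire it should barely change from window to window. The sketch below is my own illustration of that idea, not the lab's actual analysis, and it uses fixed-spectrum noise as a stand-in for a real recording:

```python
import numpy as np

def window_stats(signal, fs, window_s=1.0):
    """RMS level of each non-overlapping window: a crude texture summary."""
    n = int(fs * window_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt((windows ** 2).mean(axis=1))

# Synthetic stand-in for a 10-second texture recording.
fs = 16_000
noise = np.random.default_rng(0).standard_normal(fs * 10)

rms = window_stats(noise, fs)
print("per-second RMS levels:", np.round(rms, 3))
print("spread / mean:", rms.std() / rms.mean())  # small => statistically steady
```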

In the case of water, we have found a remarkable way of synthesizing a very accurate sound of running water with only three parameters, making it virtually as low-dimensional, as mathematically tightly controlled, as one line on a blackboard. This approach tries to unite the power of the low-dimensional description of the simple stimulus with the use of a very natural, ecologically relevant sound like running water. Every living being needs to be able to recognize water, whether to go and find it or to flee from it in a flood, so these are sounds the brain has evolved to recognize very efficiently. This is where we stand: we are studying how the brain recognizes these classes of sounds, and how we can parse and categorize them.
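
The transcript does not spell out what the lab's three parameters are, so the sketch below should be read only as an illustration of the idea of a three-parameter texture: it builds a water-like sound as a random shower of short decaying tones, governed by three hypothetical knobs (droplet rate, decay sharpness, and a spectral exponent). It is not the published algorithm.

```python
import numpy as np

def water_texture(duration=3.0, fs=16_000, rate=60.0, q=8.0, alpha=1.0, seed=0):
    """Water-like texture from three knobs:
    rate  -- droplet events per second
    q     -- sharpness: decay time of each droplet, measured in cycles
    alpha -- spectral exponent: frequencies drawn with density ~ f**-alpha
    Illustrative only; not the lab's synthesis method."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    out = np.zeros(n)
    n_events = rng.poisson(rate * duration)
    f_lo, f_hi = 400.0, 6000.0
    u = rng.random(n_events)
    if alpha == 1.0:  # log-uniform draw (scale-invariant)
        freqs = f_lo * (f_hi / f_lo) ** u
    else:             # general power-law draw
        a = 1.0 - alpha
        freqs = (f_lo**a + u * (f_hi**a - f_lo**a)) ** (1.0 / a)
    for f, t0 in zip(freqs, rng.random(n_events) * duration):
        tau = q / f                          # decay time = q cycles of the tone
        t = np.arange(int(5 * tau * fs)) / fs
        droplet = np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
        i = int(t0 * fs)
        seg = droplet[: n - i]               # clip droplets at the end
        out[i : i + len(seg)] += seg
    return out / np.abs(out).max()

texture = water_texture()  # ~3 s of droplet-shower texture
```

Writing `texture` out as a WAV file at the same sample rate (for example with `scipy.io.wavfile.write`) lets you hear how much water-likeness three numbers can carry.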

