People who go ballistic over other people's eating sounds aren't just cranky — they may have misophonia.
- Some people are driven absolutely bonkers when they hear other people eating or even breathing.
- Such people likely have a condition called "misophonia," or "hatred of sound."
- fMRI brain scans reveal a surprising cause for the condition.
Maybe it's happened to you. You're sitting there quietly munching away on something, and suddenly, you feel someone's eyes burning into you. When you turn toward the stare, you encounter eyes filled with rage.
"What?" is your likely response. That "what" is the sound of your chewing — you've just driven someone who has misophonia over the edge. This condition affects somewhere between 6 percent and 20 percent of us. Maybe you have it and have wondered what's been making you so mad.
Misophonia — it means "hatred of sound" — is a hypersensitivity to certain sounds made by other people. These may include noises made by chewing, drinking, or breathing. It can prompt anger, anxiety, disgust, irritation, and even violent rage coupled with a strong flight impulse.
A study from Newcastle University published in the Journal of Neuroscience may reveal, for the first time, what's going on in people with misophonia. It's not the sounds themselves after all, but an unwanted mirroring response they elicit in the listener.
According to lead author Sukhbinder Kumar, "Our findings indicate that for people with misophonia there is abnormal communication between the auditory and motor brain regions — you could describe it as a 'supersensitized connection.'"
The first clue
Increased connectivity in the brain between the auditory cortex and the motor control regions affecting the mouth, face, and throat appears to be what causes misophonia. The study is based on fMRI scans of 17 subjects with misophonia and 20 control subjects.
When all the participants were exposed to recordings of human eating and chewing, their auditory cortices responded similarly. However, in the individuals with misophonia, the researchers also observed increased communication between the auditory cortex and the motor control areas for the mouth, face, and throat. These regions were strongly activated by the sounds.
The second clue
It's not just sound that can trigger misophonia, apparently.
Says Kumar, "What surprised us was that we also found a similar pattern of communication between the visual and motor regions, which reflects that misophonia can also occur when triggered by something visual."
That both sonic and visual inputs can trigger the condition prompted the researchers to consider what the two responses have in common. "This led us to believe," says Kumar, "that this communication activates something called the 'mirror system,' which helps us process movements made by other individuals by activating our own brain in a similar way — as if we were making that movement ourselves."
Invasion of the body snatchers
"We think," Kumar says, "that in people with misophonia, involuntary overactivation of the mirror system leads to some kind of sense that sounds made by other people are intruding into their bodies, outside of their control."
Put another way, this hypothesis suggests that the anger and revulsion inside a person with misophonia are an emotional response to an unconscious — and highly unwelcome — sense someone else is attempting to take over control of their mouth, face, and throat.
A trick shared with the researchers by some people with the condition seems to support this:
"Interestingly, some people with misophonia can lessen their symptoms by mimicking the action generating the trigger sound, which might indicate restoring a sense of control. Using this knowledge may help us develop new therapies for people with the condition."
The study's senior author is Newcastle's Tim Griffiths, who says of its findings: "The study provides new ways to think about the treatment options for misophonia. Instead of focusing on sound centers in the brain, which many existing therapies do, effective therapies should consider motor areas of the brain as well."
It uses radio waves to pinpoint items, even when they're hidden from view.
"Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.
The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.
But radio waves can.
For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."
RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist. The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.
"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."
The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.
With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.
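The hand-off the researchers describe — RF guiding the approach from afar, vision dominating up close — can be sketched in a few lines. This is an illustrative toy, not the authors' actual controller; every name and threshold here is hypothetical.

```python
# Illustrative sketch (not the authors' code; all names and the threshold
# are hypothetical) of the hand-off the article describes: RF localization
# guides the robot while the target is far away or occluded, and vision
# takes over for fine maneuvers once the target is close and visible.
def choose_pose_estimate(rf_pose, vision_pose, distance_to_target_m,
                         handoff_distance_m=0.5):
    """Return the pose estimate the controller should trust right now."""
    if vision_pose is None or distance_to_target_m > handoff_distance_m:
        # Target occluded or still far away: trust RF, which penetrates obstacles.
        return rf_pose
    # Close in and visible: vision gives much finer detail than RF.
    return vision_pose

# Far away, target hidden inside a box: RF guides the approach.
print(choose_pose_estimate((1.2, 0.4, 0.1), None, 1.2))
# Within reach and in view: vision dominates the decision.
print(choose_pose_estimate((0.3, 0.1, 0.1), (0.31, 0.12, 0.09), 0.2))
```

In the real system the two streams are fused continuously rather than switched, but the sketch captures the "RF focuses the attention of vision" sequence Adib describes.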
RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."
RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without the need to manipulate the item, expose its barcode, then scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.
Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).
We can't ask them, so scientists have devised an experiment.
- Humans have the capacity for conscious awareness of our visual world.
- While all sighted animals respond to visual stimuli, we don't know if any of them consciously take note of what they're seeing in the way that we do.
- Researchers from Yale have devised experiments that suggest that rhesus monkeys share this ability.
All day long, our brains are busy receiving sensory information: smells, sounds, sights, and so on. We absorb much of this without really thinking about it. However, every now and then something we see grabs our attention, maybe a stunning landscape or a beautiful sunset. We stop what we're doing and spend a moment taking it in. Are we the only animal that can stop and take conscious notice of what we see?
A study just published in the Proceedings of the National Academy of Sciences suggests that we're not. It appears that at least one other animal — the rhesus monkey, Macaca mulatta — shares our ability to pay deliberate attention to what it sees. The authors of the study infer this ability, paradoxically, from the manner in which the monkey deals with visual inputs it doesn't consciously notice.
It has been known for some time that even when visual stimuli escape our conscious attention, we respond to them subliminally, says Yale psychologist Laurie Santos, co-senior author of the paper along with Yale psychologist Steve Chang and Ran Hassin of Hebrew University. Even so, she says, "We tend to show different patterns of learning when presented with subliminal stimuli than we do for consciously experienced, or supraliminal stimuli." ("Supraliminal" describes visual stimuli that are consciously noted.)
The authors of the study set out to see if rhesus monkeys exhibited a similar "double dissociation" in the way they respond to supraliminal vs. subliminal visual stimuli.
Ask a monkey a question
Obviously, research on animals is hampered by our inability to question critters. As a result, scientists need to be creative in designing experimental methods that allow them to draw conclusions based strictly on empirical observation.
"People have wondered for a long time whether animals experience the world the way we do, but it's been difficult to figure out a good way to test this question empirically," says first author of the study, Moshe Shay Ben-Haim, a postdoctoral fellow at Yale University.
The researchers came up with a series of experiments in which both humans and rhesus monkeys could observably demonstrate how they process subliminal and supraliminal visual stimuli.
In the experiments, participants were tasked with predicting on which side of a computer screen a target image would appear, based on the position of a visual cue: a small star symbol provided by the researchers.
When the researchers displayed the cue on one side of the screen long enough to ensure that it was noticed — that is, it was a supraliminal signal — both humans and monkeys learned to look for the target image on the opposite side of the screen.
On the other hand, when the star flashed on the screen only very briefly, both humans and monkeys consistently looked to the side on which this subliminal signal had appeared, anticipating the target image's appearance there.
In the first case, the subjects learned the significance of the cue's position. In the second, their response simply mirrored the subliminal cue. This, say the authors, demonstrates the different ways in which humans — and monkeys apparently — react to visual stimuli that are consciously noticed or not.
Ben-Haim summarizes the authors' interpretation of the experiment:
"These results show that at least one non-human animal exhibits both non-conscious perception as well as human-like conscious visual awareness. We now have a new non-verbal method for assessing whether other non-human creatures experience visual awareness in the same way as humans."
Scientists observe how the halves of the brain keep us informed about everything everywhere.
Imagine you're about to cross a busy street. You look right and see a car coming towards you two short blocks away. You look the other way, and no cars are coming. Should you cross? No. Why not? Because your brain retains the memory of that approaching car even when you look the other way.
The ability to remember things we're not currently looking at allows us to construct and maintain a cohesive picture in our working memory of the physical context in which we find ourselves.
"You need to know where things are in the real world, regardless of where you happen to be looking or how you are oriented at a given moment," says Scott Brincat, senior author of a new study from researchers at The Picower Institute for Learning and Memory at MIT in Cambridge, Massachusetts. "But the representation that your brain gets from the outside world changes every time you move your eyes around."
The study, published in Neuron, describes what a fancy bit of brainwork this is.
Two sides of the big picture
In our working memories, the left and right hemispheres work independently when it comes to memory storage — what we see on our left is immediately stored in the right hemisphere and vice versa.
The Picower researchers have found, however, that things get substantially more interesting when we shift our gaze in the opposite direction, or if an object we're looking at moves from one side to the other.
Using our street-crossing example, when you look to the right and spot the approaching vehicle, a memory of the car is stored in your brain's left hemisphere. When you look left, a copy of that memory is quickly sent to the right hemisphere, but the copy is marked in such a way that the brain understands the car is not actually on your left but is just a memory of something currently out of view on your right. The net result is that your working memory remains aware of traffic on both sides even when you're looking in only one direction.
"If you didn't have that," says Earl Miller, senior author of the study and in whose lab the research was conducted, "we would just be simple creatures who could only react to whatever is coming right at us in the environment, that's all. But because we can hold things in mind, we can have volitional control over what we do. We don't have to react to something now, we can save it for later."
Games animals play
For the study's experiments, monkeys were taught to identify onscreen objects that didn't match something they had viewed moments earlier, such as an image of a banana. To do this, they had to hold the original object in working memory to make the comparison.
As this happened, researchers monitored the electrical activity of hundreds of neurons in the prefrontal cortices of both hemispheres. The researchers observed memory transfers as they happened thanks to characteristic patterns in the synchronization of brainwave frequencies that occurred each time a memory was stored, an action that takes mere milliseconds. A software decoder identified the telltale patterns.
The trials began with the monkeys staring at one side of the screen as an object appeared in the screen's center. As the monkeys perceived the object as belonging primarily to one side or the other, the researchers saw the original memory being stored in the corresponding hemisphere and a copy being made in the other.
Monkeys were also instructed at times to look from one side to the other, reassigning the central object to a new primary side as the researchers observed the memories being re-written. The speed with which monkeys could spot non-matching objects slowed down during these shifts, giving some hint of the complicated memory gymnastics going on. "It feels trivial to us, but it apparently isn't," says Miller.
An ensemble surprise and mystery
The memory is transferred from a group, or ensemble, of neurons in one hemisphere to another ensemble on the other side. One of the surprises in the study is that even though the original memory and its copy describe the same object in the same location, they use completely different neuron ensembles on each side to represent it.
Miller notes that it was once believed that individual neurons stored memories, but over time it became clear that groups, or ensembles, of neurons were the actual memory receptacles. Now, however, if the same memory is stored in two different ensembles because of each hemisphere's distinct role, perhaps things are more complex still. "Perhaps even ensembles aren't the functional units of the brain," he surmises. "So what is the functional unit of the brain? It's the computational space that brain network activity creates."
Is your red the same as mine? Probably not.
Each of us lives in our own multi-colored universe. And there's scientific proof of it.
I'm basking inside the sun. It's hot and stuffy – and that's putting it lightly. Everything around me is bathed in a storm of UV- and X-rays; masses of plasma roll all around, white-hot from nuclear fusion. The temperature is two million degrees Celsius, but by now the gas is almost a proper gas; its density has dropped to bearable fractions of a kilogram per millilitre – not like deeper inside, at the edge of the solar core, where a millilitre of gas compressed in the gravitational vice weighs over 50 kilograms and spits gamma rays in all directions.
My photon (more on why it's mine later) has spent about 100,000 years arduously crawling through the sun's radiation zone. For millennia it would disappear, having collided with atoms of scorching gas, only to be reborn again a second later – and so on, over and over again. Now it's close to the photosphere, the external layer of the sun, the one that appears like the blinding surface of the star when it's seen from Earth. Ah, here it comes, just squeezing through that border now. After its thousands-of-years journey through the inside of the star, it breaks away from the sun. It flies off from the surface at a dizzying speed of almost 300,000 km/s. Alongside it go the billion trillion trillion trillion other photons produced every second by the sun. There's no point even trying to imagine that kind of number; our brains aren't used to conceiving of so many powers of ten. And besides, what I'm interested in is that single specific photon and what it will do for me. In eight minutes' time, having crossed the 150 million kilometres of frozen cosmic vacuum that separate the sun from the Earth, it will cross into Earth's atmosphere. Unfazed by anything, in just a fraction of a fraction of a second, it'll fly through the mass of air, something that we humans only managed to do by building steel machines and jet engines. And there the photon will make colour.
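For readers who like to check the numbers: the "eight minutes" follows directly from the distance and speed quoted in the text. A quick back-of-the-envelope calculation (not part of the original essay):

```python
# Back-of-the-envelope check of the "eight minutes" in the story: time for
# light to cover the Sun-Earth distance at c, using the figures in the text.
SUN_EARTH_DISTANCE_M = 1.5e11    # ~150 million kilometres
SPEED_OF_LIGHT_M_S = 2.998e8     # ~300,000 km/s

travel_time_min = SUN_EARTH_DISTANCE_M / SPEED_OF_LIGHT_M_S / 60
print(f"{travel_time_min:.1f} minutes")  # ≈ 8.3 minutes
```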
The butterfly effect
Just under eight minutes ago, my photon was launched out of the sun. I'm currently in the Low Beskids, somewhere between the former village of Żydowskie and Ożenna. It's also stuffy, muggy and sticky – there's a storm on the horizon. The road is hot, the fresh blackness of new asphalt greedily guzzles all the beams of light that land on its surface. Every dozen or so metres, the road is dotted with the body of some long-dead creature. A gallery of now-flattened, previously three-dimensional amphibian and insect beings. The widening and resurfacing of the road that cuts right through the heart of the Magura National Park couldn't pass unnoticed by the local wildlife. In front of me lies another of the new road's victims. It is more recent, a cloud of butterflies hovers above the sticky remains. Some of them just love to suck the bitter juice from the decaying carcass. Perhaps that's why Death in Jacek Malczewski's art has the wings of a butterfly?
I come closer, one of the butterflies is bigger than the others. It's a lesser purple emperor – I recognize the characteristic shape of the white spots on its wing. Sampling the carcass with its proboscis, it turns slightly. I suddenly see its wing from the proper angle – and that's also when my photon lands. Formed hundreds of thousands of years and eight minutes ago in the centre of a very average star, it strikes the butterfly's wing, bounces off it and passes through a contractile opening into my eye. There it triggers avalanches and launches machinery that is just as incredible as the solar nuclear fusion that formed the photon. But we'll come to that in a moment. Because right now, I see a flash of bright, incredibly vivid, intense blue. I see the colour created by the beam of light fired out of the sun and reflected off a butterfly wing, a wing shaped by evolution even before the solar photons now bombarding me were formed.
Light is basically a cannonade of extraordinarily small, devilishly fast particles, scraps carrying radiant energy that can collide with other particles. They are photons, like the one we just followed so closely. Collisions often end tragically for photons: if, for instance, we place a thick black curtain in their path, nearly all the photons will get stuck as they strike the material, falling into a thick paste of particles and being completely absorbed by them. Of course, nothing in nature is lost, and energy in particular faces an absolute ban on disappearing, so the photons' energy doesn't vanish. At most, the photons become invisible, turning into less interesting thermal radiation photons, or their kicks provoke the particles in the curtain to tremble and do somersaults.
Fortunately (for the world and our psyches), the vast majority of the world is not made up of black velvet curtains. So aside from becoming hopelessly mired in matter, photons have another option: reflection. The fate that awaits a given photon once it collides with a piece of our world depends on that piece of matter itself, but also on how much energy the photon is carrying. Matter often acts as a filter, absorbing only those photons with a certain amount of energy and allowing all others to bounce off and fly onwards.
That's exactly what happened a moment ago. The photon I was waiting for had such a specific amount of energy (to be precise, just over 2.7 eV) that the butterfly wing didn't absorb it and allowed it to bounce off and head towards my pupil. Of course, the same thing happened with all the other photons with that amount of energy (though I still have a soft spot for that particular one). So is that colour now? We'd get very different answers to this question depending on whom we asked. A physicist (with disarming concision) would say: "Yes, that's colour. Now let me get back to discovering the theory of everything." For physicists, what matter does with light is absolutely sufficient for measuring colour and encapsulating it in a dry number. It's just as well a neurobiologist is standing next to the physicist. This is his answer:
"Colour starts very simply. A photon lands in your eye. In the blink of an eye, if you'll pardon the expression, it crosses its dark interior and hits the retina. If it's lucky, it will hit a very specific part of the retina: one of the tens of millions of photoreceptors that are just waiting for beams of light to strike them. Some of those photoreceptors (known as cones because of their shape) only react more strongly to photons carrying a certain amount of energy. My eye and yours (assuming that we aren't colour blind) have three types of these cones. The photon we're tracking was carrying quite a lot of energy, towards the upper limit of what our eyes can see. It also just so happens that it hit a receptor specializing in deciphering that type of high-energy photon. There it dislocated the bent arm of a special particle, strategically placed in the folds of the membranes that fill the receptor. It doesn't seem like much, just a single particle, but it was completely sufficient for the entire cell to vibrate with excitement, and countless reactions going on inside it as well as the biochemical pathways suddenly received the information: pass this on. In a lightning-fast, extraordinarily effective Chinese whisper, the cone sent news of the photon's kick onwards, and, like a chain reaction, in silent collisions and separations of electrical impulses, that news travelled to the brain so that the brain could think: 'Blue!'"
It's beautiful, isn't it? While for the physicist, the colour of my lesser purple emperor was obvious from the moment its wings voraciously devoured all the photons except for the 'blue' ones, true colourful delight was only possible when that light signal – via many tangled chemical reactions and electrical impulses – reached the brain. And there's more! Some colours are actually formed in there, too.
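Incidentally, the "just over 2.7 eV" figure quoted earlier can be sanity-checked with the standard relation λ = hc/E; a short calculation (mine, not the essay's) confirms it corresponds to blue light:

```python
# Sanity check of the "just over 2.7 eV" figure: converting photon energy to
# wavelength via lambda = h*c / E lands squarely in the blue part of the
# visible spectrum (roughly 450-495 nm).
PLANCK_H_JS = 6.626e-34      # Planck constant, J*s
SPEED_OF_LIGHT_MS = 2.998e8  # m/s
EV_TO_JOULE = 1.602e-19

energy_j = 2.7 * EV_TO_JOULE
wavelength_nm = PLANCK_H_JS * SPEED_OF_LIGHT_MS / energy_j * 1e9
print(f"{wavelength_nm:.0f} nm")  # ≈ 459 nm: blue
```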
Between violet and red
I've always been unnerved by pink. What kind of colour is that? Painfully raw, fiery and yet cold at the same time. Static, but energetic. As a matter of fact, it'd be hard to find a colour that provokes such mixed reactions. You either love it or hate it – there's not much room in the world for pink neutrality. But in terms of its physical basis, pink really is a masterpiece of colour.
Let's start by saying that pink doesn't exist. Such a colour should not exist. Look at a rainbow – the sequence of colours that make up white light when you place a prism in its path, a wedge of transparent glass. Red, orange, yellow, green, blue, indigo, violet. There's no pink. Among the colours of the rainbow – so-called 'visible colours' – we will never find pink, purple, lilac or magenta – these are all so-called 'extra-spectral colours'. They don't exist as a single specific wavelength of light, one specific energy load of a photon. We'll also never see purple or pink if our eyes are only bombarded with photons of a single colour, bearing a single amount of energy. Physically, this means that we can't build a source of light (such as a laser) that, by producing radiation with a single specific wavelength, will produce pink light. Pinks and purples can only be produced when two cones are stimulated at once – the ones sensitive to high-energy photons (known as blue cones) and those sensitive to red (red cones). The brain adds the rest of the story itself by filling a new colour (a purple or pink one) into the empty space between violet and red, between the opposite ends of the light spectrum.
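The same principle is visible in screen colours: magenta is made by driving the red and blue channels together with green left dark, and no single wavelength of light looks like it. A crude toy check (my illustration, not a colour-science algorithm):

```python
# Pink/magenta on a screen is produced by driving the red and blue channels
# together with green left dark; no single wavelength of light produces it.
# Toy heuristic: a colour is "extra-spectral" in this crude RGB sense when
# both ends of the spectrum (red and blue channels) dominate the middle (green).
def looks_extra_spectral(rgb):
    r, g, b = rgb
    return r > g and b > g

magenta = (255, 0, 255)        # red + blue, green channel empty
spectral_green = (0, 255, 0)   # an ordinary spectral hue
print(looks_extra_spectral(magenta))         # True
print(looks_extra_spectral(spectral_green))  # False
```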
It's quite possible that each of us sees these non-spectral colours differently. After all, in these cases, much more than for other colours, the cooperation of several different cones and the brain is crucially important in creating a colour that formally doesn't exist as a component of white light. Has our perception of pinks in one way and not another been shaped by the evolution of our species? Indeed, a whole range of flowers and fruits are pink – quickly recognizing and remembering objects of a non-spectral colour could have been key to the life of our anthropoid ancestors. It could have been so essential that it shaped the perception of this 'impossible' colour as one of the brightest, clearest colours. When it comes to colours, evolution might have a great deal more to say, precisely by giving the brain absolute power over where and how we perceive colour.
The brain and grey strawberries
There's a picture of some strawberries in front of me. It has a sea-green glow, as if the strawberries are being lit by a beam of turquoise light. The fruits are dark red; my brain has no doubt about it. Even though I know that's physically not true.
Each and every pixel of the strawberry is perfectly grey. Darker or lighter grey, but categorically lacking any red hue. If I cover the picture so that I can only see a part of the strawberry, I see the greyness. But as soon as I uncover the rest of the picture, the strawberry becomes red again.
I feel cheated – all the more so since I'm doing it to myself. The structurally perfect colour-measuring devices that are my eyes are connected to a jelly-like mass that 'knows better'. In this classic illusion, called the Land effect, and found in many other colour versions, it's easiest to see how deceptive it can be to treat colour as purely the 'energeticness' of photons injected through the pupil into our eye. My brain, receiving signals from my eyes, isn't simply a dry rapporteur of the given 'measurement' of light. Colour is the product of what the eye sees and what the brain decides should be there. The situation described above is an example of what's known as 'colour constancy' – the way in which the brain deals with seeing the same objects in different lights. Thanks to colour constancy, for instance, we can correctly match the colours of objects that are in light or in darkness. It's a skill – it has to be said – that's fairly crucial for survival: if our ancestors had treated the colour of fruit in the dark as something completely different to the colour of fruit in sunlight, they might have unnecessarily refrained from eating one or the other. In a world in which eating one apple might save you from starvation, colour constancy is quite an essential brain superpower. But it clearly cannot cope with unnatural situations like the grey strawberries 'lit' in turquoise light. For the brain, grey has no right to be there – so it fills it in with the most 'sensible' filler colour: red.
Does that kind of colour – that is, red – actually exist? Let's add some fuel to this fire. I once carried out an experiment in which I showed a similar illusion to several dozen people. I asked them to show the colour they thought they'd seen (by moving a few colour sliders) on a computer screen, when in fact they had been looking at grey. A degree of fluctuation was to be expected – just because of the imprecise nature of the method used by subjects to indicate the colour. But it turned out that each person saw a completely different 'red' in place of the grey, and sometimes it wasn't even red but a certain shade of violet or purple. Does this mean that in a given unequivocal physical situation, many colours exist – as many as the number of people looking at it? Have a think, look around you – everyone you see might well have colourful worlds of their own, private funfairs of multicoloured delights that only they know.
That's quite a terrifying view of the world, but also a fascinating one. Because it suddenly turns out that mocking men's colour illiteracy isn't really very fair. And the widespread belief that women are better at distinguishing between the slightest shades of colour isn't really defensible. Because it's quite possible that we all have our own colourful cosmoses. We only agree on what to name the colours (though here we have ample room to show off our talents, and the cultural evolution of this terminology has also confused matters, but that's another story). It's not impossible that looking into the head of a random person and seeing the world through their tinted glasses would be a similar shock for the average person as seeing reality through the eyes of someone who's colour blind. We take too much of our visual perception of the world around us as a physical certainty – but colour isn't just about physics. It's a dance: of photons, chemical particles, electrical impulses, turbulent cytoplasms and ancient whims of the brain. There are simply too many elements involved for it to work in an identical way for each of us…
An afterimage that's lighter than white
Do a simple experiment. Draw a bright red square on a piece of white paper, and draw a small black spot in the middle. Then stare at that black spot, trying not to move your eyes. Take your time – 30 to 40 seconds is best. Next, shift your gaze to a brightly lit white piece of paper. You'll see a new colour in the middle of it! A weird colour, as if glowing 'from inside'. It'll probably be a light bluey-green or turquoise. Interestingly, you'll be convinced that the colour you're seeing is lighter than white, even though that's impossible.
What you have seen is a commonly observed phenomenon that occurs when one of the three types of colour-sensitive cones in the eye 'gets tired'. Looking at the red shape, you exhausted the cones that respond mostly to red light. That exhaustion lasts a few seconds, maybe a little more – enough for the temporary lack of perception of that colour to leave a shape on the surface of the white paper that's coloured with 'a lack of red'. For our brain, it's the result of a reaction of the two remaining types of cones; we see something between green and turquoise. Interestingly, because this colour arises from the absence of another, the complementary visible colours are artificially intensified and seem much brighter – indeed brighter than the white paper on which they appear. The impression of this colour arises only in the eye and the brain; no source of light could produce such a hue.
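A crude model of that afterimage: with the red-sensitive cones fatigued, the lingering response approximates the complement of the colour you stared at. In RGB terms (a simplification of mine, not the physiology), the complement of pure red is exactly the cyan/turquoise described above:

```python
# Toy model of the complementary afterimage: with the red-sensitive cones
# fatigued, the lingering response approximates the RGB complement of the
# colour stared at. The complement of pure red is cyan/turquoise, matching
# the 'bluey-green' afterimage described in the text.
def rgb_complement(rgb):
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

print(rgb_complement((255, 0, 0)))  # (0, 255, 255): cyan
```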
Translated from the Polish by Zosia Krasodomska-Jones