The case of a 7-year-old Australian boy who was expected to lose his sight at two weeks old but can still see has stunned scientists.
Researchers in Australia recently presented a study of a 7-year-old boy who is missing most of his visual cortex but surprisingly can still see. It is the first known case of this kind.
When he was only two weeks old, the boy suffered serious damage to his visual cortex, the part of the brain that processes visual signals from the eyes, as a result of a rare metabolic disorder called medium-chain acyl-CoA dehydrogenase deficiency. This condition prevents tissues from converting some types of fats into energy.
The boy, referred to only as “B.I.” by the researchers from the Australian Regenerative Medicine Institute at Monash University, ended up missing most of his visual cortex. Damage like this usually results in cortical blindness, a condition in which the eyes still send visual input to the brain but the brain cannot process it, leaving the person effectively unable to see. The boy, however, sees nearly as well as other kids his age: he can play soccer and video games and read emotions on people’s faces.
The scientists studied the unusual case, hoping to understand what makes B.I.’s vision possible. MRI scans revealed a remarkable instance of the brain’s neuroplasticity: a pathway of neural fibers at the back of the boy’s brain had enlarged, and this rerouted pathway appears to take over the work of the missing visual cortex, allowing him to see.
"Despite the extensive bilateral occipital cortical damage, B.I. has extensive conscious visual abilities, is not blind, and can use vision to navigate his environment," write the researchers in the study.
A new study overturns the conventional thinking about how we focus our visual attention.
Imagine that moment right as you’re viewing a photo you’ve never seen before: How do your eyes go about choosing which parts of the image to look at first?
The leading theory in attention studies says that our eyes are drawn to things that are visually salient — those that “stick out” to us. However, a new study from UC Davis overturns that model by showing that “meaning” seems to be the primary guide of visual attention, while salience plays a secondary role.
“A lot of people will have to rethink things,” said John Henderson, professor at the UC Davis Center for Mind and Brain, who led the research. “The saliency hypothesis really is the dominant view.”
The findings, published in the journal Nature Human Behaviour, come from a study that tracked the eye movements of people viewing images of real-world scenes — a kitchen, a boat dock, a messy room — for the first time.
To define which parts of the images were “meaningful,” researchers divided the images into circular overlapping tiles and submitted them individually to the online crowd-sourcing service Mechanical Turk, where users rated the tiles for meaning. For example, a tile showing only part of a window curtain would be rated low in meaning by users, while one showing a painting of purple grapes would be rated higher.
Rating the meaning of the tiles effectively turned the images into “meaning maps,” as researchers dubbed them. Salience was also mapped out, though by a much easier process — a computer program measured each part of the images for relative contrast and brightness.
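The salience-mapping step can be illustrated with a short sketch. This is not the study's actual program — only a toy reconstruction, assuming a grayscale image stored as a NumPy array — that scores each patch of an image by its relative brightness and contrast, as the article describes:

```python
import numpy as np

def salience_map(image, patch=8):
    """Toy salience map: score each patch of a grayscale image by its
    local brightness (mean) plus local contrast (standard deviation),
    then normalize scores to [0, 1]."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = tile.mean() + tile.std()
    # normalize so maps from different images are comparable
    scores -= scores.min()
    if scores.max() > 0:
        scores /= scores.max()
    return scores

# a dark 32x32 image with one bright, high-contrast corner patch
img = np.zeros((32, 32))
img[:8, :8] = np.random.default_rng(0).random((8, 8))
smap = salience_map(img)  # the bright corner gets the top score
```

Real salience models are considerably more elaborate (multi-scale color, intensity, and orientation filters), but the principle is the same: salience is computed from the pixels alone, with no knowledge of what the objects are.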
The question now was: which of these maps would best predict the eye movements of the participants? To find out, researchers showed participants each image for 12 seconds and recorded the points at which they fixated their vision.
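Conceptually, that comparison can be sketched as follows — an illustrative toy example with made-up numbers, not the study's actual analysis. Each map is flattened over the same grid and correlated with a map of where fixations landed; the map with the stronger correlation "predicts" attention better:

```python
import numpy as np

def map_fixation_correlation(attention_map, fixation_density):
    """Pearson correlation between an attention map (meaning or salience)
    and a fixation-density map defined over the same grid."""
    a = np.asarray(attention_map, dtype=float).ravel()
    f = np.asarray(fixation_density, dtype=float).ravel()
    return float(np.corrcoef(a, f)[0, 1])

# toy 2x2 grids: fixations land exactly where "meaning" is high,
# while "salience" happens to peak elsewhere
meaning   = [[0.0, 1.0], [2.0, 3.0]]
salience  = [[3.0, 2.0], [1.0, 0.0]]
fixations = [[0.0, 1.0], [2.0, 3.0]]

meaning_score = map_fixation_correlation(meaning, fixations)    # ~1.0
salience_score = map_fixation_correlation(salience, fixations)  # ~-1.0
```

In this toy case the meaning map wins outright; in the real data the two maps overlap substantially, which is why the careful head-to-head comparison was needed.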
Real-world images alongside data representations of eye tracking (center left), meaning (center right) and salience (far right).
Results showed that “meaning was better able than visual salience to account for the guidance of attention through real-world scenes,” even though salience often overlapped with meaning.
So, why are our eyes drawn to meaning over what's bright and shiny? The researchers suggest the reason might be that, when viewing real-world scenes, we use knowledge representations to help us prioritize where to look. For example, when we see a photo of a kitchen, we have a cognitive model that tells us what a kitchen is and where we might find meaning in that photo — among the objects by the sink, for instance, and not necessarily in the brightly colored curtains.
Henderson and postdoctoral researcher Taylor Hayes said they don't yet have solid data on what exactly constitutes meaning in visual information. But they suggest their findings could have important implications for computer vision, such as training algorithms to scan security footage or identify and caption photos online.
On a broader level, the findings seem to echo a claim made by the phenomenological psychologist Ludwig Binswanger in his book “Being in the World”:
What we perceive are “first and foremost” not impressions of taste, tone, smell or touch, not even things or objects, but rather, meanings.
Binswanger essentially argues that we perceive the world with meaning detection before object recognition. This order of perception could be advantageous from an evolutionary perspective, because determining the meaning of something is often more relevant than recognizing its exact nature. In other words, if you're in the jungle and you spot a tiger rushing toward you, the first thought you want to have is “danger,” not necessarily “that's a female Bengal tiger.”
That's a big yes, as an incredible new study from University of Melbourne researchers found.
Symbols matter. Companies spend tons of money and many patient months developing logos that represent the soul of their mission. The idea is to associate that mission with a visual symbol so that every time a consumer views the mark those ideals are inseparable from the graphic.
Could such a symbol affect personal creativity, however? That’s what researchers aimed to find out when briefly exposing over 300 students to the Apple and IBM logos. By design, Apple wanted its brand to suggest creativity, whereas IBM has long been a stalwart of responsibility and integrity.
After subliminally exposing students to each logo, researchers administered the unusual uses test, a measure of creativity in which you’re shown an everyday object and asked how many different applications you can dream up. Sure, a paper clip binds papers, but would you imagine it as an earring? One requirement of the test is that responses must be realistic — circumnavigating the planet flying on your magic clip is not acceptable.
As it turned out, the students who were exposed to the Apple logo scored higher. As marketing and psychology professor Adam Alter writes:
Merely exposing people to a symbol that implies creativity for less than a tenth of a second can cause them to think more creatively, even when they have no idea they’ve seen the symbol.
Creativity is associated with ways of seeing, to borrow a phrase from John Berger, but could our actual visual perception affect creative output? That’s what three Australian researchers tried to find out. Trading course credit for their time, 134 undergrads at the University of Melbourne were tested on binocular rivalry. Using a guide to five major personality traits, the researchers were especially interested in openness, which “predicts real-world creative achievements, as well as engagement in everyday creative pursuits.”
Binocular rivalry. Image: Luke Smillie and Anna Antinori, University of Melbourne.
Two different images—in this case, a green patch and a red patch—were simultaneously presented, one to each eye of the participant. In some cases rivalry broke down and both images seemed to blend into one patchwork image. The researchers concluded:
Across three experiments, we found that open people saw the fused or scrambled images for longer periods than the average person. Furthermore, they reported seeing this for even longer when experiencing a positive mood state similar to those that are known to boost creativity.
The more open you are, the more you see. The opposite danger is inattentional blindness—being so focused on one task that you engage in a sort of tunnel vision (like stopping your car in the middle of the street to text)—which researchers have long demonstrated with videos.
Thanks to neuroplasticity, reorienting perception is possible at any age. How we see influences what we see, a bidirectional process that involves both inner beliefs and outside stimulation. As it turns out, our eyes have more influence over our mind than we might have believed. As psychiatrist Norman Doidge writes:
In the visual system, neuroplastic change begins not in the brain but in the eyes.
Doidge warns that too much screen time is limiting our perceptual relationship with the world, which in turn impedes our brain’s ability to change. You cannot isolate your mental processes from your environment; by the same logic, your environment greatly influences your thoughts. Creativity is only one example of how we process stimulation, but it proves to be an important one for both survival and sheer enjoyment. If you want to be more creative, you have to open your eyes.
The Australian researchers cite cognitive training interventions and even psilocybin as potential catalysts for cultivating openness and thereby stimulating creativity. They also warn that too much openness has its own attendant dangers, such as hallucinations and other aspects of mental illness. As in the unusual uses test, your visions have to have some potential application in reality to be of any use.
Derek's next book, Whole Motion: Training Your Brain and Body For Optimal Health, will be published on 7/17 by Carrel/Skyhorse Publishing. He is based in Los Angeles. Stay in touch on Facebook and Twitter.
These glycerin "smart glasses" may be the only specs you'll need – although they do need a design intervention at some point.
Sometimes, wearing eyeglasses can be a pain. You need to change them with every new prescription, and even then they don't always serve you well. Reading glasses, for example, help you focus up close but become useless when you return to other activities. According to the American Academy of Ophthalmology, more than 150 million Americans use some type of corrective eyewear, spending $15 billion each year. Most of us will need glasses at some point, because as a natural side effect of aging, the lens inside the eye that adjusts focal depth depending on what we look at loses its ability to change focus.
A team of electrical and computer engineers from the University of Utah wants to solve part of this problem by reducing the need to own several pairs of glasses or to take glasses on and off depending on the situation. Engineering professor Carlos Mastrangelo and doctoral student Nazmul Hasan have created "smart glasses" that automatically adjust their focus so that what the wearer sees stays sharp, whether it is near or far.
University of Utah electrical and computer engineering professor Carlos Mastrangelo, right, and doctoral student Nazmul Hasan with their “smart glasses”.
The smart glasses' lenses are made of glycerin, a colorless liquid enclosed by flexible membranes at the front and back. A mechanism acting on the rear membrane changes the curvature of the lens, which in turn changes its optical power. How do the glasses know how to adjust? An infrared light meter in the bridge of the glasses measures the distance to the object the wearer is looking at. The frames (which will need a design intervention at some point) house the remaining electronics and a battery that lasts more than 24 hours per charge.
But what is a new invention without an app to go with it? The glasses do come with one: the wearer inputs his or her prescription so the glasses know how to adjust initially. That's all it takes — no need for custom lenses, and even if the prescription changes in the future, the glasses don't have to.
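The basic optics behind such an adaptive lens can be sketched in a few lines. This is an illustrative sketch of the standard thin-lens "reading add" relationship, not the Utah team's actual (unpublished) control algorithm; the function name and parameters are hypothetical:

```python
def lens_power(distance_m, distance_rx=0.0):
    """Hypothetical sketch: the optical power, in diopters, an adaptive
    lens should take on so that an eye that can no longer accommodate
    focuses at `distance_m` meters.

    distance_rx is the wearer's distance prescription (from the app);
    1/distance_m is the extra "reading add" needed for near objects."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return distance_rx + 1.0 / distance_m

# far object (10 m): essentially just the distance prescription
far = lens_power(10.0, distance_rx=-2.0)   # -2.0 + 0.1 = -1.9 D
# reading distance (0.4 m): prescription plus a +2.5 D add
near = lens_power(0.4, distance_rx=-2.0)   # -2.0 + 2.5 = 0.5 D
```

The diopter, the unit of prescriptions, is simply the reciprocal of focal length in meters, which is why the correction needed for a near object scales as one over its distance. The real glasses would map the infrared rangefinder's reading through a relationship like this to drive the membrane actuator.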
The team has already created a startup company with the goal of commercializing the invention, hoping to hit the market in about three years.
Photos: University of Utah