Debunking Common Brain Myths
Sam Wang is an associate professor in the Department of Molecular Biology and the Princeton Neuroscience Institute.
Wang grew up in California and studied physics at the California Institute of Technology. For his Ph.D. at Stanford University, he switched to neuroscience. He worked at Duke University as a postdoctoral fellow and aided political leaders as a Congressional Science Fellow. After completing his postdoctoral studies, he spent two years at Bell Laboratories in Murray Hill, N.J., where he learned to use pulsed lasers to study brain signaling before coming to Princeton.
Wang has published more than 40 articles on the brain in leading scientific journals. His educational reach extends beyond the laboratory and classroom through his books, popular articles and efforts to convey neuroscience to interested nonscientists.
Topic: Sam Wang on the 10% myth
Sam Wang: The 10% myth is a funny one because it doesn’t come from an identifiable neuroscientific discovery. The earliest mention of the idea that you only use 10% of your brain comes from the motivational speaker and writer Dale Carnegie in “How to Win Friends and Influence People.” Actually, Lowell Thomas said it in the preface. And before that, the only statement that resembles it comes from, as I mentioned before, this pioneer of psychology and, as it turns out, neuroscience, William James. He told audiences that we only meet a fraction of our full potential.
And I think that statement is true, because it’s certainly the case, for instance, that IQ scores have gone up a few points per decade in modern times, which suggests that there’s some influence of the environment on what we can become.
In that sense, we may still be exploring our full potential. But the 10% idea is literally not true. If any part of the brain is damaged, it can lead to a deficit in function. For instance, if a person experiences damage to the cerebellum, which guides smooth movements, then that person is unable to move smoothly and unable to learn new things like a dance step or a tennis stroke. And you can come up with similar kinds of observations for all parts of the brain, so you really need the whole thing.
You need 100% of your brain. If any part of your brain went missing in action, you would notice and you’d be sorry. Well, actually, depending on the part of the brain, you might not be sorry.
Topic: Sam Wang on alcohol and the brain
Sam Wang: A really common belief about the brain that you hear at parties is the idea that drinking alcohol kills brain cells. People make light of this all the time. The truth is that alcohol can damage the brain, but it’s highly unlikely to kill brain cells. When people look at the brains of alcoholics postmortem, it takes decades of drinking before loss of brain cells becomes apparent. And in fact, loss of brain cells, when it does eventually happen, is associated with a profound loss of memory, a thing called Korsakoff’s syndrome. And that happens after decades of drinking. Now, that having been said, alcohol does cause damage to the brain.
So one thing that has been observed is that heavy drinking for years does lead to shrinkage of the brain. And that is probably the source of the idea that alcohol kills brain cells: you can observe shrinkage of the brain.
Now, let’s put these two together. Alcohol causes shrinkage of the brain, but the cells don’t diminish in number, so what that suggests is that there’s something about each individual cell that’s shrinking. And what’s believed is that the dendritic trees that neurons have, which are constantly growing and shrinking, may retract a little bit. And the practical implication of this is that if you stop drinking, then they, perhaps, will grow back.
And so, even though the idea is a myth, it’s still important to remember that drinking alcohol is not a good thing for your brain when you do it for many years at a time. The other side of the coin is that moderate amounts of alcohol are not harmful and, in some cases, can even be helpful. And there’s a very specific case, which is red wine. It turns out that moderate amounts of red wine seem to have some kind of protective effect on brain function, especially as you are getting older.
And it’s not known exactly why, but there’s a general principle, which is that things that are good for your heart and your cardiovascular system are also good for your brain. And so, there are health benefits that have been demonstrated for red wine, some of which are mediated through the compound resveratrol. And what’s been observed is that up to 3 glasses of red wine per day for men, and up to 2 glasses for women, is either neutral or slightly beneficial for brain function. And that adds up to a bottle per couple per day, so that’s easy to remember.
Topic: Sam Wang on drugs and the brain
Sam Wang: All drugs exert their effects by acting on some kind of receptor in the brain. In many cases, that receptor has been identified.
To pick an example, cocaine and methamphetamine act upon a molecule whose job it is to suck dopamine back into cells after it’s been released. So those are blockers of dopamine re-uptake. Another example is LSD, which acts upon very specific receptors for the neurotransmitter serotonin. So all of these act upon different receptors to cause their effects: hallucinations or intoxication or feelings of confidence or what have you. And a consequence of that is that they’re addictive to different degrees. So, for instance, alcohol is addictive. Nicotine in cigarettes is addictive. Cocaine and methamphetamine are highly addictive. And so, those tend to lead to dependencies and bad effects that you feel for many years, and you get trapped in using those things.
Other drugs seem to not have much addictive potential; take LSD. LSD is not addictive so far as anyone can tell and does not seem to lead to long-term harmful effects. All these drugs act on different things. I mean, they’re not clean laboratory-designed experiments that we can perform on our brains. They tend to have lots and lots of different effects. I think that if alcohol were to appear as a new drug today, it would be basically banned immediately, because of all the drugs that are available, it’s this nasty solvent that can kill cells, that can cause dendrites to shrink, that can leave us unable to operate machinery safely, and that can cause people to beat up their loved ones and spouses. I mean, it’s a fairly serious drug. But it’s been around so long that, through familiarity, we think it’s okay. Wine, beer, liquor, whatever, it’s fine. And the flipside of that is something like, say, LSD, something that’s relatively recently come into use. And there’s no denying that if you were to take LSD, you would need to be in a safe place, and it would be important to put yourself in a position where you didn’t hurt anybody or hurt yourself.
But there are not long-term damaging effects of LSD the way there are for alcohol. And so, there’s this curious property of these drugs: our perceptions of them as a society are not quite the same as the mechanisms of what they do to the brain. Drugs that are pretty unambiguously bad are cocaine and methamphetamine. A rule of thumb seems to be that drugs that block dopamine uptake have a lot of potential for addiction. And that’s because addiction seems to work through dopamine pathways in the brain. And they lead to dependencies that can be very bad for you. So dopamine uptake blockers seem to be bad for the brain.
Topic: Sam Wang on book smarts vs. street smarts
Sam Wang: I think Goleman’s ideas are very interesting. There’s this old-school idea that basically no one believes anymore, which is that intelligence is just described by one factor, a parameter that people used to call g, for general intelligence. And either you had a little of it or you had lots of it. And a certain element of that is still true, because there is a thing that you can measure on tests, which has to do with fluid problem-solving, that people still call g. And so, that part’s true.
But I think one thing that’s interesting with Goleman is the idea that there are many mental capacities that we have, and they are, to a certain degree, independent of one another. Now, it turns out they are less independent than one might imagine. It turns out that, on average, people who are better at, say, IQ tests tend to be better at social intelligence. Now, that might be hard to believe when you go to a university and quiz your average mathematician. Not all those people are all that socially advanced. But nonetheless, there is a positive relationship among these things, and they rely on slightly different brain systems. So I think it’s possible for these capacities to be somewhat independent of one another.
Question: What is a creative brain?
Sam Wang: Creativity is a pretty general concept. And in order to reduce it into something you can study, you have to start thinking about a problem that you can give somebody in the laboratory.
And so, for instance, there’s a kind of problem-solving called divergent thinking: finding solutions to the same problem that other people do not find. So, for instance, an example of creativity in an animal might be this: imagine a box that can be opened either by flipping the lid one way or by pulling on it the other way. And what ethologists have discovered is that when you take a crow, most crows will learn how to open the box by flipping the lid.
And occasionally, a crow will have the innovative thought of opening the box the other way. So that’s an example of divergent thinking that you can identify in a crow. And crows even have social learning.
So, for instance, if one raven sees another raven do this, then, in fact, it can learn the innovative way of solving the problem right away. And that’s an example of social learning. And this is clearly something that’s very highly developed in us, right? Our cultural evolution occurs far, far faster than biological evolution. And so, that kind of creativity in tool making is something that crows have a surprising amount of, and that we have. And so, one question is how to study these things, and you have to cook up tasks that are more germane. So, for instance, one example is left-handers.
On average, left-handers seem to be more prone to divergent thinking and creative ways of solving problems at least in the laboratory context. So that’s one example of a demographic that seems to have more of this, whatever it is.
Topic: Sam Wang on left brain vs. right brain
Sam Wang: So the popular belief about the left brain and the right brain (and people are usually talking about the cerebral cortex when they talk about these things) is that the left half of the brain is rational, problem-solving, and uncreative, and somehow the right brain is creative; it can help you draw better, right? Now, the truth is more complicated, because, in fact, the hemispheres of the brain are heavily interconnected by a structure, basically a dense band of fibers, called the corpus callosum.
And what seems to really be the case is that the left half of the brain is important for mathematical reasoning and for generating language, but it’s also a storyteller. So, for instance, in split-brain patients who have had their corpus callosum cut, when you play picture association with the right half of the brain by showing things to the left half of the person’s visual world and ask the right half of the brain to do something, the right half of the brain will happily play picture association. Then, if you ask the person why they made that association, the left half of the brain makes up a story.
So, for instance, you could show a snow scene and a chicken claw so that the left brain sees the chicken claw and the right brain sees the snow scene. Then you ask the right brain what goes with the snow scene, and the right brain will pick a shovel. And then you ask the left brain why it picked a shovel. The left brain saw only the chicken claw, so it makes up this crazy story. The left brain will say: that’s a claw, it goes with the chicken; the chicken lives in a coop; you need a shovel to shovel out the chicken coop. And it’s this totally made-up story. So it turns out that the left half of the brain is perfectly capable of making up stories. And that seems sort of creative, right? That doesn’t fit the stereotype.
I think what’s more the case is that the right brain is important for certain things. So, for instance, the left brain produces language, while the right brain produces prosody, which is the emotional content of language. In many ways, the right brain is otherwise quite concrete. And so, I think the story is more complicated. And I think these cultural ideas about left brain and right brain have basically been blown up all out of proportion to what scientists have actually found about the left and right brain.
Recorded April 24, 2009.
Sam Wang discusses his book, ‘Welcome to Your Brain.’
Measuring a person's movements and poses, smart clothes could be used for athletic training, rehabilitation, or health-monitoring.
In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels.
But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique?
And, as a major bonus, what if the wearable were something you'd actually already be wearing, like a shirt or a pair of socks?
That's the idea behind a new set of MIT-designed clothing that uses special fibers to sense a person's movement via touch. Among other things, the researchers showed that their clothes can determine whether someone is sitting, walking, or doing particular poses.
The group from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients' permission, they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious.
The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team's "tactile electronics" use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment.
According to CSAIL graduate student Yiyue Luo, a key advantage of the team's design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms.
"Traditionally it's been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors," says Luo, lead author on a new paper about the project that is appearing in this month's edition of Nature Electronics. "When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off-base."
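The paper's actual self-supervised algorithm isn't described in detail here, but the underlying idea of normalizing mismatched sensors can be sketched. Below is a minimal, illustrative take (not the authors' method, and the function name is invented for this example): if every sensor in a knitted array sees statistically similar pressure over a long recording, each sensor's gain and offset can be estimated from its own statistics and corrected toward an array-wide reference.

```python
import numpy as np

def self_calibrate(readings):
    """Illustrative sketch of per-sensor self-calibration, NOT the paper's algorithm.

    readings: (T, N) array of raw values from N sensors over T frames.
    Assumes all sensors experience statistically similar pressure over a
    long recording, so per-sensor offset (mean) and gain (std) mismatches
    can be estimated and removed.
    """
    mu = readings.mean(axis=0)                   # per-sensor offset estimate
    sigma = readings.std(axis=0)                 # per-sensor gain estimate
    sigma = np.where(sigma < 1e-8, 1.0, sigma)   # guard against dead sensors
    ref_mu, ref_sigma = mu.mean(), sigma.mean()  # array-wide reference statistics
    # Standardize each sensor, then rescale to the shared reference.
    return (readings - mu) / sigma * ref_sigma + ref_mu
```

In a real system the reference statistics would come from a trusted calibration signal or a learned model rather than the array average, but the gain/offset correction step has the same shape.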
The team's clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearer's pose, activity, and the texture of contacted surfaces.
The authors imagine a coach using the sensors to analyze people's postures and offer suggestions for improvement. The clothes could also be used by an experienced athlete to record their form so that beginners can learn from it. In the long term, the team even imagines that robots could be trained to do different activities using data from the wearables.
"Imagine robots that are no longer tactilely blind, and that have 'skins' that can provide tactile sensing just like we have as humans," says corresponding author Wan Shou, a postdoc at CSAIL. "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come."
The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik, and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma, and Beichen Li; postdoc Kui Wu; and research engineer Michael Foshey.
The work was partially funded by Toyota Research Institute.
How imagining the worst case scenario can help calm anxiety.
- Stoicism is the philosophy that nothing about the world is good or bad in itself, and that we have control over both our judgments and our reactions to things.
- It is hardest to control our reactions to the things that come unexpectedly.
- By meditating every day on the "worst case scenario," we can take the sting out of the worst that life can throw our way.
Are you a worrier? Do you imagine nightmare scenarios and then get worked up and anxious about them? Does your mind get caught in a horrible spiral of catastrophizing over even the smallest of things? Worrying, particularly imagining the worst case scenario, seems to be a natural part of being human and comes easily to a lot of us. It's awful, perhaps even dangerous, when we do it.
But, there might just be an ancient wisdom that can help. It involves reframing this attitude for the better, and it comes from Stoicism. It's called "premeditation," and it could be the most useful trick we can learn.
Broadly speaking, Stoicism is the philosophy of choosing your judgments. Stoics believe that there is nothing about the universe that can be called good or bad, valuable or valueless, in itself. It's we who add these values to things. As Shakespeare's Hamlet says, "There is nothing either good or bad, but thinking makes it so." Our minds color the things we encounter as being "good" or "bad," and given that we control our minds, we therefore have control over all of our negative feelings.
Put another way, Stoicism maintains that there's a gap between our experience of an event and our judgment of it. For instance, if someone calls you a smelly goat, you have an opportunity, however small and hard it might be, to pause and ask yourself, "How will I judge this?" What's more, you can even ask, "How will I respond?" We have power over which thoughts we entertain and the final say on our actions. Today, Stoicism has influenced and finds modern expression in the hugely effective "cognitive behavioral therapy."
Helping you practice Stoicism. Credit: Robyn Beck via Getty Images
One of the principal figures of Roman Stoicism was the statesman Seneca, who argued that the unexpected and unforeseen blows of life are the hardest to take control over. The shock of a misfortune can strip away the power we have to choose our reaction. For instance, being burglarized feels so horrible because we had felt so safe at home. A stomach ache, out of the blue, is harder than a stitch thirty minutes into a run. A sudden bang makes us jump, but a firework makes us smile. Fell swoops hurt more than known hardships.
What could possibly go wrong?
So, how can we resolve this? Seneca suggests a Stoic technique called "premeditatio malorum" or "premeditation." At the start of every day, we ought to take time to indulge our anxious and catastrophizing mind. We should "rehearse in the mind: exile, torture, war, shipwreck." We should meditate on the worst things that could happen: your partner will leave you, your boss will fire you, your house will burn down. Maybe, even, you'll die.
This might sound depressing, but the important thing is that we do not stop there.
The Stoic also rehearses how they will react to these things as they come up. For instance, another Stoic (and Roman Emperor) Marcus Aurelius asks us to imagine all the mean, rude, selfish, and boorish people we'll come across today. Then, in our heads, we script how we'll respond when we meet them. We can shrug off their meanness, smile at their rudeness, and refuse to be "implicated in what is degrading." Thus prepared, we take control again of our reactions and behavior.
The Stoics cast themselves into the darkest and most desperate of conditions but then realize that they can and will endure. With premeditation, the Stoic is prepared and has the mental vigor necessary to take the blow on the chin and say, "Yep, I can deal with this."
Catastrophizing as a method of mental inoculation
Seneca wrote: "In times of peace, the soldier carries out maneuvers." This is also true of premeditation, which acts as the war room or training ground. The agonizing cut of the unexpected is blunted by preparedness. We can prepare the mind for whatever trials may come, in just the same way we can prepare the body for some endurance activity. The world can throw nothing as bad as that which our minds have already imagined.
Stoicism teaches us to embrace our worrying mind but to embrace it as a kind of inoculation. With a frown over breakfast, try to spend five minutes of your day deliberately catastrophizing. Get your anti-anxiety battle plan ready and then face the world.
Why mega-eruptions like the ones that covered North America in ash are the least of your worries.
- The supervolcano under Yellowstone produced three massive eruptions over the past few million years.
- Each eruption covered much of what is now the western United States in an ash layer several feet deep.
- The last eruption was 640,000 years ago, but that doesn't mean the next eruption is overdue.
The end of the world as we know it
Panoramic view of Yellowstone National Park
Image: Heinrich Berann for the National Park Service – public domain
Of the many freak ways to shuffle off this mortal coil – lightning strikes, shark bites, falling pianos – here's one you can safely scratch off your worry list: an outbreak of the Yellowstone supervolcano.
As the map below shows, previous eruptions at Yellowstone were so massive that the ash fall covered most of what is now the western United States. A similar event today would not only claim countless lives directly, but also create enough subsidiary disruption to kill off global civilisation as we know it. A relatively recent eruption of the Toba supervolcano in Indonesia may have come close to killing off the human species (see further below).
However, just because a scenario is grim does not mean that it is likely (insert topical political joke here). In this case, the doom mongers claiming an eruption is 'overdue' are wrong. Yellowstone is not a library book or an oil change. Just because the previous mega-eruption happened long ago doesn't mean the next one is imminent.
Ash beds of North America
Ash beds deposited by major volcanic eruptions in North America.
Image: USGS – public domain
This map shows the location of the Yellowstone plateau and the ash beds deposited by its three most recent major outbreaks, plus two other eruptions – one similarly massive, the other the most recent one in North America.
The Huckleberry Ridge eruption occurred 2.1 million years ago. It ejected 2,450 km3 (588 cubic miles) of material, making it the largest known eruption in Yellowstone's history and in fact the largest eruption in North America in the past few million years.
This is the oldest of the three most recent caldera-forming eruptions of the Yellowstone hotspot. It created the Island Park Caldera, which lies partially in Yellowstone National Park, Wyoming and westward into Idaho. Ash from this eruption covered an area from southern California to North Dakota, and southern Idaho to northern Texas.
About 1.3 million years ago, the Mesa Falls eruption ejected 280 km3 (67 cubic miles) of material and created the Henry's Fork Caldera, located in Idaho, west of Yellowstone.
It was the smallest of the three major Yellowstone eruptions, both in terms of material ejected and area covered: 'only' most of present-day Wyoming, Colorado, Kansas and Nebraska, and about half of South Dakota.
The Lava Creek eruption was the most recent major eruption of Yellowstone: about 640,000 years ago. It was the second-largest eruption in North America in the past few million years, creating the Yellowstone Caldera.
It ejected only about 1,000 km3 (240 cubic miles) of material, i.e. less than half of the Huckleberry Ridge eruption. However, its debris is spread out over a significantly wider area: basically, Huckleberry Ridge plus larger slices of both Canada and Mexico, plus most of Texas, Louisiana, Arkansas, and Missouri.
The Long Valley eruption occurred about 760,000 years ago. It was centered on southern California, where it created the Long Valley Caldera, and spewed out 580 km3 (139 cubic miles) of material. This makes it North America's third-largest eruption of the past few million years.
The material ejected by this eruption is known as the Bishop ash bed, and covers the central and western parts of the Lava Creek ash bed.
Mount St Helens
The eruption of Mount St Helens in 1980 was the deadliest and most destructive volcanic event in U.S. history: it created a mile-wide crater, killed 57 people and created economic damage in the neighborhood of $1 billion.
Yet by Yellowstone standards, it was tiny: Mount St Helens only ejected 0.25 km3 (0.06 cubic miles) of material, most of the ash settling in a relatively narrow band across Washington State and Idaho. By comparison, the Lava Creek eruption left a large swathe of North America in up to two metres of debris.
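A quick back-of-envelope comparison, using only the ejecta volumes quoted in this article, shows just how lopsided the scale is:

```python
# Ejecta volumes (km^3) as quoted in the text above.
eruptions_km3 = {
    "Mount St Helens (1980)": 0.25,
    "Mesa Falls": 280,
    "Long Valley": 580,
    "Lava Creek": 1000,
    "Huckleberry Ridge": 2450,
}

# How many Mount St Helens eruptions would fit inside each event?
st_helens = eruptions_km3["Mount St Helens (1980)"]
for name, volume in eruptions_km3.items():
    print(f"{name}: {volume / st_helens:,.0f}x Mount St Helens")
```

By this measure, the Lava Creek eruption was some 4,000 times larger than Mount St Helens, and Huckleberry Ridge nearly 10,000 times larger.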
The difference between quakes and volcanoes
The volume of dense rock equivalent (DRE) ejected by the Huckleberry Ridge event dwarfs all other North American eruptions. It is itself overshadowed by the DRE ejected at the most recent eruption at Toba (present-day Indonesia). This was one of the largest known eruptions ever and a relatively recent one: only 75,000 years ago. It is thought to have caused a global volcanic winter which lasted up to a decade and may be responsible for the bottleneck in human evolution: around that time, the total human population suddenly and drastically plummeted to between 1,000 and 10,000 breeding pairs.
Image: USGS – public domain
So, what are the chances of something that massive happening anytime soon? The aforementioned mongers of doom often claim that major eruptions occur at intervals of 600,000 years and point out that the last one was 640,000 years ago. Except that (a) the first interval was about 200,000 years longer, (b) two intervals is not a lot to base a prediction on, and (c) those intervals don't really mean anything anyway. Not in the case of volcanic eruptions, at least.
Earthquakes can be 'overdue' because the stress on fault lines is built up consistently over long periods, which means quakes can be predicted with a relative degree of accuracy. But this is not how volcanoes behave. They do not accumulate magma at constant rates. And the subterranean pressure that causes the magma to erupt does not follow a schedule.
What's more, previous super-eruptions do not necessarily imply future ones. Scientists are not convinced that there ever will be another big eruption at Yellowstone. Smaller eruptions, however, are much likelier. Since the Lava Creek eruption, there have been about 30 smaller outbreaks at Yellowstone, the last lava flow being about 70,000 years ago.
As for the immediate future (give or take a century): the magma chamber beneath Yellowstone is only 5 percent to 15 percent molten. Most scientists agree that is as un-alarming as it sounds, and that it's statistically more relevant to worry about death by lightning, shark, or piano.
Strange Maps #1041
Got a strange map? Let me know at firstname.lastname@example.org.
A study on charity finds that reminding people how nice it feels to give yields better results than appealing to altruism.
- A study finds asking for donations by appealing to the donor's self-interest may result in more money than appealing to their better nature.
- Those who received an appeal to self-interest were both more likely to give and gave more than those in the control group.
- The effect was most pronounced for those who hadn't given before.
Even the best charities with the longest records of doing great work have to spend some time on fundraising to make sure that the next donation checks keep coming in. One way to do this is by showing potential donors all the good things the charity did over the previous year. But there may be a better way.
A new study by researchers in the United States and Australia suggests that appealing to the benefits people will receive themselves after a donation nudges them to donate more money than appealing to the greater good.
How to get people to give away free money
The postcards that were sent to different study subjects. The one on the left highlighted benefits to the self, while the one on the right highlighted benefits to others. Credit: List et al. / Nature Human Behaviour
The study, published in Nature Human Behaviour, utilized the Pick.Click.Give program in Alaska. This program allows Alaska residents who qualify for dividends from the Alaska Permanent Fund, a yearly payment ranging from $800 to $2000 in recent years, to donate a portion of it to various in-state non-profit organizations.
The researchers randomly assigned households to either a control group or to receive a postcard in the mail encouraging them to donate a portion of their dividend to charity. That postcard could come in one of two forms, either highlighting the benefits to others or the benefits to themselves.
Those who got the postcard touting self-benefits were 6.6 percent more likely to give than those in the control group and gave 23 percent more on average. Those getting the benefits-to-others postcard were slightly more likely to give than those receiving no postcard, but their donations were no larger.
Additionally, the researchers were able to break the subject list down into a "warm list" of those who had given at least once before in the last two years and a "cold list" of those who had not. Those on the warm list, who were already giving, saw only minor increases in their likelihood to donate after getting a postcard in the mail compared to those on the cold list.
Additionally, the researchers found that warm-list subjects who received the self-interest postcard gave 11 percent more than warm-list subjects in the control group. Amazingly, among cold-list subjects, those who received a self-interest postcard gave 39 percent more.
These are substantial improvements. At the end of the study, the authors point out, "If we had sent the benefits to self message to all households in the state, aggregate contributions would have increased by nearly US$600,000."
To put this into perspective, in 2017 the total donations to the program were roughly $2,700,000.
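The arithmetic behind that comparison, using the article's two figures, works out to a sizable relative gain:

```python
projected_gain = 600_000          # authors' projected statewide gain (USD)
total_donations_2017 = 2_700_000  # approximate total program donations in 2017 (USD)

increase_pct = projected_gain / total_donations_2017 * 100
print(f"Projected increase: about {increase_pct:.0f}% of 2017 totals")
```

In other words, the self-interest framing alone was projected to lift giving by roughly a fifth of the program's entire annual total.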
Is altruism dead?
Are all actions inherently self-interested? Thankfully, no. The study focuses entirely on effective ways to increase charitable donations above levels that currently exist. It doesn't deny that some people give out of pure altruism; it shows, rather, that an appeal based on self-interest is effective. Plenty of people were giving before this study took place who didn't need a postcard as encouragement. It is also possible that some people donated part of their dividend check to a charity that does not work with Pick.Click.Give and were uncounted here.
It is also important to note that Pick.Click.Give does not provide services but instead gives money to a wide variety of organizations that do. Those organizations operate in fields from animal rescue to job training to public broadcasting. The authors note that it is possible that a more specific appeal to the benefits others will receive from a donation might prove more effective than the generic and all-inclusive "Make Alaska Better For Everyone" appeal that they used.
In an ideal world, charity is its own reward. In ours, it might help to remind somebody how warm and fuzzy they'll feel after donating to your cause.