“You Want the Truth (about risk)? You Can’t HANDLE The Truth!”


Numerous online tools are available to help you figure out your risk of a wide range of health outcomes: diabetes, stroke, heart disease, various kinds of cancer (see Your Disease Risk from the Harvard School of Public Health), even how long you are likely to live (see the Wharton School's Life Expectancy Calculator). You enter your risk factors – age, weight, gender, medical history, family history, lifestyle factors like diet and exercise – and the tool runs the numbers and tells you your risk.


Many of these online do-it-yourself risk-o-meters have been designed by top experts in public health and health communication, and they rely on rigorous research and the best available information. But while they can be tremendously helpful, these tools rest on a dangerously shaky foundation: the assumption by those experts that giving people more information will lead to wise and healthy choices.

More information, clearly presented, certainly helps. It is critical to any effective risk or health communication. But our choices about risk (or anything, for that matter) are rarely based on the facts alone. According to research on the psychology of risk perception, what matters at least as much, and probably more, is how we feel about those facts – based on our own personal lives, experiences, and feelings, and on a common set of psychological characteristics that, for most of us, make some things feel scarier than others, the facts notwithstanding.

A revealing piece of research from the University of Michigan Comprehensive Cancer Center (summarized here; the paper itself is behind an academic journal paywall) supports this important truth. Twenty percent of the women (n = 690) who used an online tool to calculate their risk of breast cancer in the next five years simply did not believe the results. They were asked to enter their own information about age, ethnicity, personal history of breast cancer, age at first menses, age at first live birth, number of first-degree relatives who have had breast cancer, and history of breast biopsies – all well-established risk factors for breast cancer. One in five took a look at the results, results tailored to them personally, and simply chose not to believe them.
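To make the mechanics concrete, here is a minimal sketch of how a calculator might turn inputs like these into a five-year risk figure. It is only an illustration: the baseline risk and multipliers below are invented placeholders, not the carefully estimated coefficients and age-specific baseline hazards that real Gail-type models behind tools like this one actually use.

```python
# Illustrative sketch of a five-year risk calculator.
# All numbers below are hypothetical placeholders, NOT real model coefficients.

BASELINE_5YR_RISK = 0.012  # hypothetical baseline five-year risk, for illustration only


def relative_risk(age_at_menarche: int,
                  age_at_first_birth: int | None,
                  first_degree_relatives: int,
                  prior_biopsies: int) -> float:
    """Combine hypothetical relative-risk multipliers for each input."""
    rr = 1.0
    if age_at_menarche < 12:
        rr *= 1.2                                  # earlier menarche (illustrative)
    if age_at_first_birth is None or age_at_first_birth >= 30:
        rr *= 1.3                                  # later or no first live birth (illustrative)
    rr *= 1.8 ** min(first_degree_relatives, 2)    # family history (illustrative)
    rr *= 1.3 ** min(prior_biopsies, 2)            # biopsy history (illustrative)
    return rr


def five_year_risk(**factors) -> float:
    """Scale a baseline absolute risk by the combined relative risk."""
    return min(1.0, BASELINE_5YR_RISK * relative_risk(**factors))


if __name__ == "__main__":
    risk = five_year_risk(age_at_menarche=13,
                          age_at_first_birth=27,
                          first_degree_relatives=1,
                          prior_biopsies=0)
    print(f"Estimated 5-year risk: {risk:.1%}")    # e.g. "2.2%"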

There have been other studies like this. A 2004 analysis, Colon Cancer: Risk Perceptions and Risk Communication, found that half the people who used an online tool to calculate their risk of colon cancer rejected the answer. But the Michigan study went further and asked women why they denied the results. Among the deniers, a third said they rejected the numbers because they didn't think the tool adequately accounted for family history, even though it did (that question about first-degree relatives). Another large group of deniers rejected the answer because it didn't say what they expected. A few thought the risk reported by the computer was too high, but most thought the online answer was too low. “2.1% risk sounds too good to be true,” said one woman. “It just seemed low,” said another. A third said, “The percentage was low compared to my concern.”

In the wonderfully understated language of academese, the authors say their findings suggest that “… health providers might be made aware that the risk information that they communicate to patients is not always taken at face value.” Much more fundamentally, this finding and others like it confirm what the social sciences have been SCREAMING about cognition for decades now: human perception, judgment, and decision making are not dispassionately objective and solely fact-based. We are not, and cannot be, rational, if one's definition of rationality means doing only what the numbers and cold hard facts say. The facts alone, even when we have them all, are not enough.

Given how firmly this fundamental truth about human cognition and risk perception has been established, it is surprising that the health communication community that develops these tools still focuses so much effort on making the facts – especially the numbers – clear. To be fair, a large body of valuable research by Steve Woloshin, Lisa Schwartz, Isaac Lipkus, Ellen Peters, and others has dramatically improved the effectiveness of communicating risk numbers, which has improved patient decision making. And there is a whole body of research in health communication into how to ‘tailor’ the numbers by accounting for the realities of affective cognition – the feelings part of our perceptions – so they will have the effect the communicator hopes. The Michigan authors suggest that “…addressing the patients’ personal circumstances may lead to greater acceptance.”

But while personally ‘tailoring’ communication and improving the understandability of risk numbers certainly help, these approaches still rest on the general belief of many health and risk communication experts that the right messages, delivered via the right media, by the right sources, can get people to make the ‘right’ choice about a risk – the ‘rational’ choice – whether it's a medical decision or a question of what people believe about chemicals or GMOs or climate change. And that remains a long way from accepting the reality the Michigan research illustrates: that no matter how well done, communication about health risk, or any risk, faces insurmountable limitations imposed by the intrinsically subjective nature of human perception. The ‘truth’ about risk is not just about the facts, and any effort to shape how people feel about a risk that deals only with the facts will fail.
