
When the Policy Makers Who Are Supposed to Protect Us Get Risk Wrong


You and I make risk judgments for ourselves all the time, based on a few facts and a lot of subjective, instinctive emotional factors. As a result we sometimes worry more than the evidence says we need to, or less than the evidence says we should, and that Risk Perception Gap, as I call it, can lead to mistakes that are dangerous all by themselves. But don’t we want government risk regulators to be smarter than that? Don’t we want the experts who are supposed to protect us to make decisions that are better informed, more objective, more rational?


          Well, they try, using a science-based definition of risk. But because risk is such an emotional thing, there are huge arguments about whether we should even use that science-based approach. And even when we do, it doesn’t always prevent government from making the same dangerous mistakes about risk that you and I sometimes make as individuals.

            To technocrats,

Risk (the probability of a certain consequence) = Hazard X Exposure.

Hazard includes…

– Is the “thing” (a product, an ingredient, a process, a behavior) hazardous?

– How hazardous?

– At what dose?

– To which people in particular?

           …while Exposure considers…

– Are we exposed? Something hazardous isn’t a risk until we’re actually exposed to it. (Think of a poisonous snake locked safely in a cage.)

– How much?

– In what ways?

– How regularly or rarely?

– At what age? (Stuff that’s bad for young kids may not do the same harm to adults.)

When you measure all those things and put them into the Risk = Hazard X Exposure formula, you can do a reasonable job defining the facts about a risk. But as fact-based and objective as the Hazard X Exposure formula seems to be, because risk is so emotional there are serious questions about whether we should even use that seemingly logical approach.
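To see how the arithmetic works, here is a minimal sketch in Python. It is a toy illustration only, not a regulatory model; the hazard and exposure numbers are invented purely to show how the two sides of the formula combine, where real regulators would plug in results from toxicology and exposure studies.

```python
# Toy illustration of the technocratic formula: Risk = Hazard x Exposure.
# All numbers below are hypothetical, chosen only to show how the pieces combine.

def risk(hazard: float, exposure: float) -> float:
    """Expected harm: how dangerous something is, scaled by how much
    of it people actually encounter."""
    return hazard * exposure

# A very hazardous thing nobody is exposed to (the caged snake): zero risk.
print(risk(hazard=0.9, exposure=0.0))    # 0.0

# A mildly hazardous thing nearly everyone encounters: some risk.
print(risk(hazard=0.01, exposure=0.8))   # 0.008
```

The caged-snake case is the whole point of the exposure side of the formula: however large the hazard, multiplying it by zero exposure yields zero risk.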

To help you understand the problem, imagine you are a government official in charge of chemical risk and public health, and you get a report suggesting that some industrial chemical may be hazardous, based on tests on lab animals. Toxicologists have tested a lot of animals, but they still aren’t sure. Some tests say there is a risk. Some say there isn’t. And those are just the uncertainties on the hazard side of the formula. There is practically no exposure information on this chemical at all. Are people exposed? How? To how much? In what ways? You don’t know. You have shaky information about one half of the Hazard X Exposure formula and almost none about the other: not enough to act on. But there might be danger. What would you do?

Some people argue that in cases like this, regulators shouldn’t do anything, that they shouldn’t declare something hazardous until they know for sure, based on all the important facts from both sides of the Hazard X Exposure formula. This is sometimes called the risk-based approach, or what industry likes to call “Sound Science”: don’t regulate things until they are proven to be harmful. But others, especially environmentalists, say we should act at the first reasonable hint of hazard, rather than continue to experiment on the public while research is done to get all the facts, which can take years. This is called hazard-based risk management, similar to the Precautionary Principle, which essentially embeds in law the common-sense, hazard-based approach of “Better Safe Than Sorry”: ban suspected hazards until they are proven safe.

Most governments use a combination of the two, because each approach has merits and flaws. The hazard-based/precautionary/‘first prove it’s safe’ approach (like the way the FDA requires extensive testing on drugs before they can be sold) protects us against things that may turn out to be harmful (as it protected Americans from thalidomide). But if we’re too absolute with this approach and ban everything suspected of being hazardous, we could lose the benefits of products and processes that may turn out to be safe, or whose benefits outweigh the harm (like genetically modified food).

The risk-based/‘don’t ban it until it’s proven dangerous’ approach has pros and cons too. Government effort to protect us from things that aren’t really risky diverts time and money away from where they could do us more good, and gives us a false sense of safety. But waiting until we have all the facts about some product or ingredient or behavior means that later research might show the early hints of hazard were right, and that we had been exposing ourselves to harm, some of it perhaps irreversible, while we waited for all the ‘sound science’ to come in.

There are lots of arguments about which risk management approach is better, wiser, safer. But even when the science is well-established and the Hazard X Exposure formula clearly tells us what the risk is (or isn’t), public opinion often forces government to do something that conflicts with that evidence, something that makes us feel safer even if it doesn’t make us much safer. In other words, it doesn’t matter how thoughtful our policy makers try to be about risk management. Emotions often push the science aside anyway. Here’s one current example:

The FDA has not banned Bisphenol A (BPA) in general, because it says a full Hazard X Exposure risk-based analysis finds that BPA isn’t dangerous at the doses to which we’re exposed. But environmental scientists, focusing on the hazard side of the equation, say it is, and their alarms have certainly stirred strong public fear. So the FDA has banned it from baby bottles and sippy cups. (This happened first in the marketplace, and then BPA manufacturers, who continue to say that BPA is safe, asked the FDA to make it official.) But this response to our fears, rather than to any actual risk (according to the FDA), may make us feel safer, yet it doesn’t even protect us from what the lab evidence says is the greatest possible risk of BPA: birth defects in babies exposed in utero via the mother. Moms don’t drink from baby bottles or sippy cups.

This is more than some wonky discussion about preferred government approaches to risk management. This is about your health, and mine. When the subjective nature of our risk perceptions leads many of us to worry too much or not enough about the same things, together we push the government to protect us. We look to the experts who are supposed to make smart decisions on our behalf to keep us safe. But given the emotional nature of risk, sometimes what the government does makes us feel safe more than it actually protects us. The Risk Perception Gap is dangerous when we get risk wrong as individuals, but it’s immensely dangerous when government makes the same mistakes.
