Here's the psychology that explains why many economists prefer to be narrowly right yet broadly wrong (they suffer from professional "rigor distortis").
1. Isn’t it odd that real economies often defy economists? Has post-financial-crisis criticism made us savvier consumers of economic ideas? Let’s consider the precise professional habits that generate “cry wolf,” “what wolf?”, and “rigor distortis” errors in many economists.
2. A rigor-loving psychology, paradoxically, predisposes many economists to prefer being narrowly right yet broadly wrong. Their precision-seeking methods rigorously misrepresent reality (“rigor distortis”).
3. A cry-wolf case: “Economists have had another terrible year,” writes right-wing journalist Jeremy Warner. A “substantial majority of economists” predicted “market mayhem” after Trump’s election. The IMF expected a “profound shock to global confidence” after the Brexit vote. Neither happened.
4. How are such smart experts seduced into “rigor distortis”? Their approved methods (“methodological monism”) permit only precise, rigorous logic, which filters out factors that lack data or don’t fit equations. And they’re predisposed to resist unavoidably imprecise reality-reinjecting adjustments (the McNamara Fallacy: treating what can’t be measured as if it didn’t matter).
5. “All-else-equal” thinking also worsens these rigorous-but-wrong habits. In reality, many factors shift simultaneously. And incentives often cut both ways: do higher taxes mean people work less, or more, to maintain prior spending?
6. For instance, pervasive incentive flaws arise because both sides of voluntary transactions “gain” by excluding costs (“externalities”) borne by others. However repulsive rigor-lovers find unavoidably imprecise externality adjustments, reality has precisely zero unaffected markets (every offering consumes energy, so pollution externalities are always > 0).
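A toy calculation (all numbers hypothetical) makes the mechanism concrete: both transacting parties gain, yet counting the excluded cost shows the trade destroys value overall.

```python
# Hypothetical trade: a voluntary exchange where both sides gain,
# yet an excluded cost ("externality") leaves society worse off.
buyer_value = 10.0  # what the good is worth to the buyer
seller_cost = 7.0   # the seller's private production cost
price = 8.0         # agreed price, so both parties benefit
pollution = 5.0     # cost dumped on third parties, absent from the price

buyer_gain = buyer_value - price                    # +2.0
seller_gain = price - seller_cost                   # +1.0
social_gain = buyer_gain + seller_gain - pollution  # -2.0

print(buyer_gain, seller_gain, social_gain)  # 2.0 1.0 -2.0
```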
7. Indeed, there are no “unfailed” markets in reality (see Brad DeLong’s unrealistic caveats).
8. The vast academic literature on externalities is used like “holy water” (Garrett Hardin): sprinkled, then ignored. Shouldn’t non-economists judge economic ideas as enacted (however selectively)?
9. A dysfunctional ethics-outsourced-to-markets game gives executives excuses to cherry-pick economic ideas. Meanwhile, “what wolf?” economists ignore how routinely greed-guided businesses subvert market doctrines (e.g., economists mostly just assume away “pricing power”).
10. Reality-denying methods led Andrew Gelman to compare economics to Freudianism. Both are explain-all, know-the-answer-in-advance frameworks convincing to rich clients.
11. Another Freud-like habit of economists is to project their love of incentive optimization onto others. Many real humans find such calculated decisions stressful and avoid them. Why organize life around a rare sort of rationality (rare even among economists)?
12. Always ask how economists adjust for known exclusions, and why a given model presumes causal stability. Unless they offer practical answers and adjustments for unmodeled effects, you can ignore them, just like real economies do.
13. Rationalist economics is almost self-refuting. Is it rational to keep paying experts whose models assume rationality yet often fail to match reality?
14. Descriptive economics is useful (see Noah Smith’s minimum-wage research summary), but prescriptive, often reality-denying, market faith is far from rational.
Illustration by Julia Suits, The New Yorker cartoonist and author of The Extraordinary Catalog of Peculiar Inventions.
Is "science broken" or self-correcting? And who is going to do the grown-up thing and fix the game (instead of scoring points within it)?
1. Science needs some tough love (fields vary, but some enable and encourage unhealthy habits). And “good cop” approaches aren't fixing “phantom patterns” and “noise mining” (explained below).
2. Gelman is too kind; the “reproducibility crisis” is really a producibility problem: professional practices reward the production and publication of unsound studies.
3. Gelman calls such studies “dead on arrival,” but they’re actually dead on departure, doomed at conception by “flaws inherent in [their] original design” (plus much that’s “poorly designed” gets published anyway).
4. Optimists say relax, “science is self-correcting.” For instance, Christie Aschwanden says the “replication crisis is a sign that science is working”; science isn’t “untrustworthy,” it’s just messy and hard (it’s “in the long run… dependable,” says Tom Siegfried).
5. “Science Is Broken” folks like Dan Engber ask, “How quickly does science self-correct? Are bad ideas and wrong results stamped out [quickly]... or do they last for generations?” And at what (avoidable) cost?
6. We mustn’t overgeneralize: physics isn’t implicated; instructively, it’s intrinsically less variable (all electrons behave consistently). Biology and social science aren’t so lucky: people ≠ biological billiard balls.
7. Harris sees “no easy” fix. But a science-is-hard defense doesn’t excuse known-to-be-bad practices.
8. Engber’s “bad ideas and wrong results” are dwarfed by systemic, generation-spanning, method-level ills. For instance, Gelman calls traditional statistics “counterproductive”: badly misnamed “statistical significance” tests aren’t arbiters “of scientific truth,” though they’re widely used that way.
9. Psychology brought “statistical significance” misuse to light recently (e.g., the TED chart-topping “power pose”), but Deirdre McCloskey declared that “statistical significance has ruined empirical… economics” in 1998, and traced such concerns to the 1920s. Gelman wants us to “abandon statistical significance.”
10. Yet “noise mining” abounds. Fields with inherent variability, small effects, and noisy measurements drown in datasets with phantom patterns unrelated to stable causes (see Cornell’s “world-renowned eating... expert”).
11. No “statistical alchemy” (Keynes, 1939) can diagnose phantom patterns. Only further reality-checking can. “Correlation doesn’t even imply correlation” beyond your data. Always ask: Why would this pattern generalize? By what causal process(es)?
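A minimal simulation (a sketch assuming numpy and scipy; every number below is pure noise) shows how “noise mining” manufactures phantom “significance” that a fresh-data reality check dissolves:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 100, 40  # 100 "subjects", 40 variables, none causally related to anything

# Pure noise: no variable has any real link to the outcome.
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# "Noise mining": test every variable against the outcome,
# keep whatever clears the usual p < 0.05 bar.
hits = []
for j in range(k):
    r, p = stats.pearsonr(X[:, j], y)
    if p < 0.05:
        hits.append(j)
print(f"phantom 'significant' predictors: {hits}")  # expect ~2 of 40 by chance

# Reality check: re-test the "discoveries" on fresh noise. They evaporate.
X2, y2 = rng.normal(size=(n, k)), rng.normal(size=n)
for j in hits:
    r, p = stats.pearsonr(X2[:, j], y2)
    print(f"variable {j}: replication p = {p:.2f}")  # typically well above 0.05
```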
12. Basic retraining must emphasize representativeness and causal stability. Neither bigger samples nor randomization necessarily ensures representativeness (see mixed-type stats woes and pattern types).
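A quick sketch of that point (a hypothetical two-group population, again assuming numpy): sheer sample size shrinks the noise but cannot fix an unrepresentative sampling frame.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: two equal-sized groups with different outcomes,
# so the true population mean is 0.5 * 1.0 + 0.5 * 3.0 = 2.0.
TRUE_MEAN = 2.0

def biased_sample(n, p_group_a=0.9):
    """Sampling frame that over-represents group A (easy-to-reach subjects)."""
    from_a = rng.random(n) < p_group_a  # 90% from group A instead of 50%
    return np.where(from_a, rng.normal(1.0, 1.0, n), rng.normal(3.0, 1.0, n))

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: estimate={biased_sample(n).mean():.3f} (truth={TRUE_MEAN})")
# Bigger samples converge confidently on the wrong answer (~1.2, not 2.0).
```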
13. Journalism that showcases every sensational-seeming study ill-serves us. Most unconfirmed science should go unreported; media exaggerations damage public trust.
14. Great science is occurring, but the “free play of free intellects” game, fun though it is, is far from free of unforced errors.
15. “Saving science” (Daniel Sarewitz) means fixing the game, not scoring points within it.
Illustration by Julia Suits, The New Yorker cartoonist and author of The Extraordinary Catalog of Peculiar Inventions.