Political "Science" and Its Forecasting Failures
Here’s a distinguished political scientist, Jacqueline Stevens, who agrees with me that the NSF ought to cut its funding for political science. The Republicans in Congress think that these “scientists” are covertly pushing an ideological agenda that lurks behind all their jargon and “methods.” That’s somewhat true. Everyone knows that experts who begin their claims to authority with “studies show” are less objective than they say they are, especially when their claims concern the lives of human beings.
I actually think that allegation applies less to political science than to the other disciplines of social science. If you want to see ideological uniformity, go to a sociology conference (I have a couple of times). Go to the annual meeting of the American Political Science Association and you’re charmed by the sometimes bizarre bazaar of diversity. The gay and lesbian activists are meeting right next to the monarchists and traditionalist Catholics, and I, for one, am happy to see them all. Most mainstream political scientists, admittedly, are kind of boring, but there’s something about the “political” and residually philosophical dimensions of the discipline that produces a genuinely inclusive and welcoming environment.
My objection to NSF funding is simply that political science has failed, as our author says, in its effort to produce the kind of scientific discoveries the NSF is looking for. The NSF, at the end of the day, has a predominantly technological understanding of science. The goal is to produce useful knowledge that will improve the health and other dimensions of the physical well-being of Americans (that, of course, includes military power).
The original scientific pretension of political science (see the work of the genuine genius Harold Lasswell) was all about prediction and control. If, through science, we have some firm and quantifiable idea of what causes political change, we will be able to predict what happens. Armed with that knowledge, we will be able to exercise beneficent control over the future. We can develop a political technology.
But it turns out political scientists have a terrible record of prediction—worse by far than that of savvy politicians:
It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money. The most obvious example may be political scientists’ insistence, during the cold war, that the Soviet Union would persist as a nuclear threat to the United States. In 1993, in the journal International Security, for example, the cold war historian John Lewis Gaddis wrote that the demise of the Soviet Union was “of such importance that no approach to the study of international relations claiming both foresight and competence should have failed to see it coming.” And yet, he noted, “None actually did so.” Careers were made, prizes awarded and millions of research dollars distributed to international relations experts, even though Nancy Reagan’s astrologer may have had superior forecasting skills.
Political prognosticators fare just as poorly on domestic politics. In a peer-reviewed journal, the political scientist Morris P. Fiorina wrote that “we seem to have settled into a persistent pattern of divided government” — of Republican presidents and Democratic Congresses. Professor Fiorina’s ideas, which synced nicely with the conventional wisdom at the time, appeared in an article in 1992 — just before the Democrat Bill Clinton’s presidential victory and the Republican 1994 takeover of the House.
Alas, little has changed. Did any prominent N.S.F.-financed researchers predict that an organization like Al Qaeda would change global and domestic politics for at least a generation? Nope. Or that the Arab Spring would overthrow leaders in Egypt, Libya and Tunisia? No, again. What about proposals for research into questions that might favor Democratic politics and that political scientists seeking N.S.F. financing do not ask — perhaps, one colleague suggests, because N.S.F. program officers discourage them? Why are my colleagues kowtowing to Congress for research money that comes with ideological strings attached?
One reason among many for these failures to forecast change is the tendency of political scientists to think in terms of categories borrowed from "real" science—such as systems and stability. So they slight the behavior of great political actors as a fundamentally unpredictable cause. They also slight the "variable" of the actual truth or falsity of political opinion.
So it's worth noting that the ideologically confident Reagan predicted that the Soviet Union couldn't last much longer, because the regime was based on an increasingly self-evident lie. That's why he (prudently) engaged in ideological war against evil on behalf of the truth about who we are. When Reagan turned out to be right and virtually every political scientist wrong, the political scientists were quick to dismiss the possibility that Reagan himself might have been a significant cause of the unpredicted big change.
I remember being at a meeting of the Southern Political Science Association just before the election of 1994. An expert panel of political scientists with an array of cool models was unanimous in predicting that the Republicans couldn't pick up more than 30 seats in Congress. The subtext? The Democrats had controlled Congress for such a long time that there must be some enduring, "subpolitical" systematic cause for that stability. It couldn't be that the election would be determined by the Republicans' confident mobilization of opinion. It is really and truly true that the cabdriver who took me over to the conference site gave me a political lecture that included a correct prediction—based on the greatness of Newt Gingrich—of the Republican takeover.
As for 1992, how could any political scientist have predicted Ross Perot?
Generally, political science does its best work when it begins with the perspectives of the statesman (or political leader) and the citizen and then goes on to refine and enlarge what's seen about political life by those who are actually engaged in it. The attempt to impose a scientific perspective alien to the phenomena almost always leads us to see a lot less than is really there.