More on the Mathematics of War
There has been some pushback against that Nature paper, which claimed there’s a power-law “signal” in the seemingly random events of guerrilla wars against standing armies.
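For context, the claim is that if you count the people killed in each insurgent attack, the frequency of attacks of size x falls off roughly as x^(-α), with the exponent α clustering near 2.5 across very different conflicts. Here’s a minimal sketch of how such an exponent gets estimated, using the standard maximum-likelihood recipe for power laws (not necessarily the paper’s exact procedure) on made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "attack sizes" drawn from a power law with exponent 2.5 and
# minimum size 1, via inverse-CDF sampling. Illustrative data only, not
# the paper's dataset.
alpha_true, x_min, n = 2.5, 1.0, 5000
sizes = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

# The standard maximum-likelihood (Hill) estimator for the exponent:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))
alpha_hat = 1.0 + n / np.log(sizes / x_min).sum()
print(f"estimated exponent: {alpha_hat:.2f}")  # should land near 2.5
```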
They really don’t like this idea over at Danger Room. The other day Katie Drummond launched a head-on assault, repeating her claim from last spring that the gizmo doesn’t actually work. In that earlier post, she’d said the model made wrong predictions about the war in Iraq in 2007, and she embedded a TED talk by one of the paper’s co-authors, Sean Gourley, in which he supposedly admits as much.
Yikes. But if you actually watch the video, what Gourley says at about the 6:00 mark is that he expected one outcome and was surprised when the opposite happened. He doesn’t say the model failed to predict events. In fact, as he then describes the work, it isn’t supposed to predict the future at all. It’s supposed to help policymakers understand the present. In which case the right question isn’t “what will happen in Falluja in July 2011?” but “why is this war today diverging from the usual pattern?”
Now, it may still be that the pattern isn’t really there. Or that there’s nothing to be learned from this common “signal” of insurgency around the world. That’s where there’s room to argue about the theory set out in the paper. But slamming a scientific model for not being a crystal ball? It’s like complaining that your radio can’t fetch your email. Okay, you’re right, I guess. Except have you noticed that that’s not what it’s for?
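As for what “diverging from the usual pattern” would even mean in practice: one natural reading is to re-estimate the exponent over a moving window of recent attacks and watch whether it drifts away from the cross-conflict value of roughly 2.5. Here’s a sketch on synthetic data; the window size and drift threshold are my illustrative choices, not anything from the paper or the talk:

```python
import numpy as np

rng = np.random.default_rng(7)

def draw(alpha, n, x_min=1.0):
    """Inverse-CDF samples from a power law with minimum size x_min."""
    return x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

def mle_exponent(sizes, x_min=1.0):
    """Maximum-likelihood (Hill) estimate of the power-law exponent."""
    sizes = np.asarray(sizes, dtype=float)
    return 1.0 + len(sizes) / np.log(sizes / x_min).sum()

# Synthetic timeline: 400 attacks near the "universal" exponent of 2.5,
# then 200 more after the conflict's character shifts (alpha drops to 1.9).
events = np.concatenate([draw(2.5, 400), draw(1.9, 200)])

window = 100  # number of most-recent attacks to fit; illustrative choice
for t in range(window, len(events) + 1, window):
    alpha_t = mle_exponent(events[t - window:t])
    note = "  <- diverging from the usual pattern" if abs(alpha_t - 2.5) > 0.4 else ""
    print(f"after event {t:3d}: alpha = {alpha_t:.2f}{note}")
```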
Second, Drummond argues that there are no good data on the number of people killed in war, so the model’s numbers can’t be accurate. After all, wars with a lot of journalists attached (Iraq, for instance) will be more thoroughly reported than those with little press (like the endless battles in Congo). And press reports of casualty counts from a single incident can be all over the map.
Maybe so, but the issue isn’t whether data are scantier in one place than another; it’s whether data from one place are less representative than data from another. If there’s a pattern in insurgent war, then with enough time it should emerge in the notes of two reporters as surely as in those of 20. And if all sources of information are unreliable, why is there a pattern in the first place? The data should be a mess. (In fact, if the “signal” is as strong as the recent paper claimed, it might be a way to check the reliability of news reports: stories that didn’t fit the profile would be the ones less likely to be true.)
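To make that parenthetical concrete: you could compare the event sizes a given source reports against the fitted power law, using something like a Kolmogorov–Smirnov distance, and flag sources whose reports stray too far from the curve. A hypothetical sketch, with the α = 2.5 exponent and the significance cutoff assumed for illustration (nothing like this appears in the paper):

```python
import numpy as np

def power_law_cdf(x, alpha=2.5, x_min=1.0):
    """CDF of a continuous power law p(x) ~ x**(-alpha) for x >= x_min."""
    return 1.0 - (x / x_min) ** (1.0 - alpha)

def ks_distance(sizes, alpha=2.5, x_min=1.0):
    """Largest gap between a source's empirical CDF and the model CDF."""
    x = np.sort(np.asarray(sizes, dtype=float))
    n = len(x)
    model = power_law_cdf(x, alpha, x_min)
    steps_hi = np.arange(1, n + 1) / n  # ECDF just after each data point
    steps_lo = np.arange(0, n) / n      # ECDF just before each data point
    return max(np.abs(steps_hi - model).max(), np.abs(steps_lo - model).max())

rng = np.random.default_rng(0)

# One source whose casualty figures follow the expected power law...
consistent = (1.0 - rng.random(200)) ** (-1.0 / 1.5)
# ...and one whose figures cluster implausibly (every attack kills "dozens").
suspect = rng.uniform(20, 40, size=200)

for name, reports in [("consistent", consistent), ("suspect", suspect)]:
    d = ks_distance(reports)
    cutoff = 1.36 / np.sqrt(len(reports))  # rough 5% KS critical value
    verdict = "fits the profile" if d < cutoff else "doesn't fit; scrutinize"
    print(f"{name}: KS distance = {d:.3f} -> {verdict}")
```

The KS distance is just the crudest off-the-shelf way to ask “does this batch of numbers look like it came from that distribution?”; a real analysis would want something more careful.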
These and Drummond’s other criticisms had, to my ear, a surprisingly anti-intellectual tone for a technology blog. In some paragraphs, she seemed to say that abstraction itself is bad, because it makes a map less detailed and complex than reality. (“Forget Iraq’s religious and tribal divides; never mind Afghanistan’s mess of local warlords, jihadist zealots, and Taliban restorationists. This physicist’s model for war is clean and simple. If only real conflicts were so easy to predict.”) And she implied that an interesting idea ought to be abandoned if it doesn’t work right away: “Last summer, Gourley admitted that he failed to accurately predict the outcome of the 2007 military surge in Iraq — and that his predictions sprang from some rather dubious data.” OK, then, let’s close up shop!
Trouble is, all scientific theories are abstractions: maps that are much smaller and simpler than the reality they’re trying to describe. Data are never as good as people would like (as any climate scientist will tell you if you aren’t denouncing her for failing to predict last week’s blizzard). And few theories work perfectly in their first stages.
OK, those aren’t reasons to accept every proposal that comes down the pike. But because they’re common to all scientific endeavors, they aren’t good reasons to throw an idea away, either.