Our species’ history seems to track the reach of our weapons: how far, how much, how long can we keep attacking, killing, damaging? Men with bullets became men and women with giant machinery, which became robots with bombs. But what began with fists and stones has now evolved into crippling not knees but computer systems.
Considering recent cyber-attacks such as the Flame worm and the Stuxnet and Duqu viruses, it is easy to see why some suggest the morality of war must change. No longer are we talking “simply” about where and when it is acceptable to kill combatants. Now we are engaging in wars with no direct flesh-and-blood casualties; instead, countries are crippling one another’s nuclear enrichment programs, taking screenshots and monitoring instant-messaging services. It is easy, then, to see why some might regard Just War Theory, with its Latin phrasings and Aquinian roots, as too antiquated to contribute anything to this discussion. But Just War Theory, like all forms of moral discussion, has been like the weapons wielded in those wars: evolving, and taking in ever larger targets. The ethics of war, therefore, don’t require an overhaul simply because we’re talking about computer viruses rather than bullets; they just require more engagement than before.
Such concerns should warrant everyone’s attention. As Patrick Lin (perhaps the most important ethicist writing on technology and policy today), Fritz Allhoff and Neil Rowe correctly point out:
How we justify and prosecute a war matters. For instance, the last U.S. presidency proposed a doctrine of preventive or preemptive war, known as the "Bush doctrine," which asked, if a nation knows it will be attacked, why wait for the damage to be done before it retaliates? But this policy breaks from the just-war tradition, which historically gives moral permission for a nation to enter war only in self-defense. This tradition says that waging war -- a terrible evil that is to be avoided when possible -- requires a nation to have the righteous reason of protecting itself from further unprovoked attacks.
How does crippling a nuclear program, for instance, count as “defensive” action? One could say it prevents the use of nuclear weapons, but of course the targeted country could claim that such programs are necessary for ordinary civilian use, say, for electricity. Or worse, it could turn the same policy against us. As Lin, et al., point out, expanding “the triggers for war” like this could backfire horribly for international policy: “For instance, [imagine if] Iran reports contemplating a preemptive attack on the U.S. and Israel, because it believes that one or both will attack Iran first. Because intentions between nations are easy to misread, especially between radically different cultures and during an election year, it could very well be that the U.S. and Israel are merely posturing as a gambit to pressure Iran to open its nuclear program to international inspection. However, if Iran were to attack first, it would seem hypocritical for the U.S. to complain, since the U.S. already endorsed the same policy of first strike.”
These are the kinds of discussions that matter. The reason this ties in with cyber-attacks is that one could mount an attack without directly harming anyone, and thus claim it is not “an attack” at all. What constitutes an “attack” if no one is directly harmed, but systems are merely crippled? Furthermore, unlike other kinds of weapons, cyber-weapons can have ethical constraints built into their very design, as Lin, et al., suggest: “By building ethics into the design and use of cyberweapons, we can help ensure that war is not more cruel than it already is.”
For example, you don’t attack files essential to national security that aren’t backed up (though perhaps you may attack files that could be replaced, even if replacing them would take several months: unlike buildings or machines, digital files can be restored almost as if they had never been destroyed in the first place); and you don’t cripple systems whose loss would significantly harm civilians (take out electricity in certain areas and people could, for example, be left starving for lack of refrigeration).
Notice, however, that this has happened with conventional weapons too: police use Tasers, rubber bullets and so on. Though these are classified as “non-lethal”, they still cause severe suffering and a significant number of deaths; more, indeed, than we’d like to think. Of course, with cyber-attacks that deliberately aim to be non-lethal there is perhaps less danger, though, again, we don’t know; we thought the same when introducing non-lethal weapons. Police, for example, are more likely to draw these weapons precisely because they’re “non-lethal” than if they only had their ordinary guns, reasoning that at least the non-lethal option won’t kill (though, as these articles indicate, that’s evidently not true!).
The point is that though we like to think our weapons are changing, they are not doing so in a morally significant way. Our measures of just war and moral permissibility remain grey areas whether we’re shooting a combatant with a gun or crippling his city’s power supply. These are not easy decisions, and they require constant engagement with the evidence, with ethical systems and with the laws of war. We shouldn’t be deceived just because our attacks are dressed in the language of the “non-lethal” or even “the digital”. The outcome remains the same: people are affected, usually for the worse. Whether such actions are morally permissible, however, remains an ongoing discussion with which we must all be concerned.
Image Credit: wawritto/Shutterstock