The Fetish of National Security: How America's Worst Crimes Are Perpetrated and Excused
The following is Part 1 of an excerpt from Corey Robin's book "The Reactionary Mind: Conservatism from Edmund Burke to Sarah Palin." Part 2 will run tomorrow.
The twentieth century, it’s often said, taught us a simple lesson about politics: of all the motivations for political action, none is as lethal as ideology. The lust for money may be distasteful, the desire for power ignoble, but neither will drive its devotees to the criminal excess of an idea on the march. Whether the cause is the working class or a master race, ideology leads to the graveyard.
Although moderate-minded intellectuals have repeatedly mobilized some version of this argument against the “isms” of right and left, they have seldom mustered a comparable skepticism about that other idée fixe of the twentieth century: national security. Some writers criticize this war, others that one, but has anyone ever penned, in the spirit of Daniel Bell, a book titled “The End of National Security”? Millions have been killed in the name of security; Stalin and Hitler claimed to be protecting their populations from mortal threats. Yet no such book exists.
Consider the less than six degrees of separation between the idea of national security and the lurid crimes of Abu Ghraib. Each of the reasons the Bush administration gave for going to war against Iraq—the threat of weapons of mass destruction (WMD), Saddam’s alleged links to Al Qaeda, even the promotion of democracy in the Middle East—referred in some way to protecting the United States. Getting good intelligence from informers is a critical element in defeating any insurgency. U.S. military intelligence believed (perhaps still does believe) that sexual humiliation is an especially useful instrument for extracting information from recalcitrant Muslim and Arab prisoners.
Many critics have protested Abu Ghraib, but few have traced its outrages back to the idea of national security. Perhaps they believe such an investigation is unnecessary. After all, many of these individuals opposed the war on the grounds that U.S. security was not threatened by Iraq. Some of national security’s most accomplished practitioners, such as Brent Scowcroft and Zbigniew Brzezinski, as well as theoreticians like Stephen Walt and John Mearsheimer, claimed that a genuine consideration of U.S. security interests militated against the war. The mere fact, these critics could argue, that some politicians misused or abused the principle of national security need not call that principle into question. But when an idea routinely accompanies, if not induces, atrocities—Abu Ghraib was certainly not the first instance of a country committing torture in the name of security—second thoughts would seem to be in order. Unless, of course, defenders of the idea wish to join that company of ideologues they so roundly condemn, affirming their commitment to an ideal version of national security while disowning its actually existing variant.
In its ideal version, national security requires a clear-eyed understanding of a nation’s interests and a sober assessment of the threats to them. Force, a counselor might say to his prince, is a tool a leader may use in response to those threats, but he should use it prudently and without emotion. Just as he should not trouble himself with questions of human rights or international law, he should not be excited by his use of violence. Analysts may add international norms to a leader’s toolkit, but they are quick to point out, as Joseph Nye does in The Paradox of American Power, that these rules may have to give way to “vital survival interests,” that “at times we will have to go it alone.” National security demands a monkish self-denial, where officials forego the comforts of conscience and pleasures of impulse in order to inflict when necessary the most brutal force and abstain from or abandon that force whenever it becomes counterproductive. It’s an ethos that bears all the marks of a creed, requiring a mortification of self no less demanding than that expected of the truest Christian.
The first article of this creed, the national interest, gives leaders great wiggle room in identifying threats. What, after all, is the national interest? According to Nye, “the national interest is simply what citizens, after proper deliberation, say it is.” Even if we assume that citizens are routinely given the opportunity to deliberate about the national interest, the fact is that they seldom, if ever, reach a conclusion about it. As Nye points out, Peter Trubowitz’s exhaustive study of the way Americans defined the national interest throughout the twentieth century determined that “there is no single national interest. Analysts who assume that America has a discernible national interest whose defense should determine its relations with other nations are unable to explain the failure to achieve domestic consensus on international objectives.” This makes a good deal of sense: if an individual finds it difficult to determine his or her own interest, why should we expect a mass of individuals to do any better? But if a people cannot decide on its collective interest, how can it know when that interest is threatened?
Faced with such confusion, leaders often fall back on the most obvious definition of a threat: imminent, violent assault from an enemy, promising to end the independent life of the nation. Leaders focus on cataclysmic futures, if for no other reason than that these are a convenient measure of what is or is not a threat, what is or is not security. But that ultimate threat often turns out to be no less illusory than the errant definition of security that inspired the invocation of the threat in the first place.
Hovering about every discussion of war and peace are questions of life and death. Not the death of some or even many people, but, as Michael Walzer proposes in Arguing about War, the “moral as well as physical extinction” of an entire people. True, it is only rarely that a nation will find its “ongoingness”—its ability “to carry on, and also to improve on, a way of life handed down” from its ancestors—threatened. But at moments of what Walzer, following Winston Churchill, calls “supreme emergency,” a leader may have to commit the most obscene crimes in order to avert catastrophe. The deliberate murder of innocents, the use of torture: the measures taken will be as many and almost as terrible as the evils a nation hopes to thwart.
For obvious reasons, Walzer maintains that leaders should be wary of invoking the supreme emergency, that they must have real evidence before they start speaking Churchillese. But a casual reading of the history of national security suggests not only that the rules of evidence will be ignored in practice, but also that the notion of catastrophe encourages, even insists on, these rules being flouted. “In normal affairs,” Cardinal Richelieu declared at the dawn of the modern state system, “the administration of Justice requires authentic proofs; but it is not the same in affairs of state . . . . There, urgent conjecture must sometimes take the place of proof; the loss of the particular is not comparable with the salvation of the state.” As we ascend the ladder of threats, in other words, from petty crime to the destruction or loss of the state, we require less and less proof that each threat is real. The consequences of underestimating serious threats are so great, Richelieu suggests, that we may have no choice but to overestimate them. Three centuries later, Learned Hand invoked a version of this rule, claiming that “the gravity of the ‘evil’” should be “discounted by its improbability.” The graver the evil, the higher the degree of improbability we demand in order not to worry about it. Or, to put the matter another way, if an evil is truly terrible but not very likely to occur, we may still take preemptive action against it.
Neither statement was meant to justify great crimes of state, but both suggest an inverse relationship between the magnitude of a danger and the requirements of facticity. Once a leader starts pondering the nation’s moral and physical extinction, he enters a world where the fantastic need not give way to the factual, where present benignity can seem like the merest prelude to future malignancy. So intertwined at this point are fear and reason of state that early modern theorists, less shy than we about such matters, happily admitted the first as a proxy for the second: a nation’s fear, they argued, could serve as a legitimate rationale for war, even a preventive one. “As long as reason is reason,” Francis Bacon wrote, “a just fear will be a just cause of a preventive war.” That’s a fairly good description of the logic animating the Cold War: fight them there—in Vietnam, Nicaragua, Angola—lest we must stop them here, at the Rio Grande, the Canadian border, on Main Street. It’s also a fairly good description of the logic animating the Nazi invasion of the Soviet Union.
These are by no means ancient or academic formulations. While liberal critics claim that the Bush administration lied about or deliberately exaggerated the threat posed by Iraq in order to justify going to war, the fact is that the administration and its allies were often disarmingly honest in their assessment of the threat, or at least honest about how they were going about assessing it. Trafficking in the future, they conjured the worst—“we don’t want the smoking gun to be a mushroom cloud”—and left it to their audience to draw the most frightful conclusions.
In his 2003 state of the union address, one of his most important statements in the run-up to the war, Bush declared: “Some have said we must not act until the threat is imminent. Since when have terrorists and tyrants announced their intentions, politely putting us on notice before they strike? If this threat is permitted to fully and suddenly emerge, all actions, all words and all recriminations would come too late.” Bush does not affirm the imminence of the threat; he implicitly disavows it, ducking behind the past, darting to the hypothetical, and arriving at a nightmarish, though entirely conjectured, future. He does not speak of “is” but of “if” and “could be.” These words are conditional (which is why Bush’s critics, insisting that he take his stand in the realm of fact or fiction, never could get a fix on him). He speaks in the tense of fear, where evidence and intuition, reason and speculation, combine to make the worst-case scenario seem as real as fact.
After the war had begun, the television journalist Diane Sawyer pressed Bush on the difference between the assumption, “stated as a hard fact, that there were weapons of mass destruction,” and the hypothetical possibility that Saddam “could move to acquire those weapons.” Bush replied: “So what’s the difference?” No offhand comment, this was Bush’s most articulate statement of the entire war, an artful parsing of a distinction that has little meaning in the context of national security.
Probably no one in or around the administration better understood the way national security blurs the line between the possible and the actual than Richard Perle. “How far Saddam’s gone on the nuclear weapons side I don’t think we really know,” Perle said on one occasion. “My guess is it’s further than we think. It’s always further than we think, because we limit ourselves, as we think about this, to what we’re able to prove and demonstrate . . . . And, unless you believe that we have uncovered everything, you have to assume there is more than we’re able to report.”
Like Bush, Perle neither lies nor exaggerates. Instead, he imagines and projects, and in the process reverses the normal rules of forensic responsibility. When someone recommends a difficult course of action on behalf of a better future, he invariably must defend himself against the skeptic, who insists that he prove his recommendation will produce the outcome he anticipates. But if someone recommends an equally difficult course of action to avert a hypothetical disaster, the burden of proof shifts to the skeptic. Suddenly she must defend her doubt against his belief, her preference for politics as usual against his politics of emergency. And that, I suspect, is why the Bush administration’s prewar mantra, “the absence of evidence is not evidence of absence”—laughable in the context of an argument for, say, world peace—could seem surprisingly cogent in an argument for war. “Better to be despised for too anxious apprehensions,” Burke noted, “than ruined by too confident a security.”
As Walzer suggests, an entire people can face annihilation. But the victims of genocide tend to be stateless or powerless, and the world has difficulty seeing or acknowledging their destruction, even when the evidence is undeniable. The citizens and subjects of great powers, on the other hand, rarely face the prospect of “moral as well as physical extinction.” (Walzer cites only two cases.) Yet their leaders seem to imagine that destruction with the greatest of ease.
We get a taste of this indulgence of the state and its concerns—and a corresponding skepticism about non-state actors and their concerns—in Walzer’s own ruminations on war and peace. Throughout Arguing about War, Walzer wrestles with terrorists who claim that they are using violence as a last resort and antiwar activists who claim that governments should go to war only as a last resort. Walzer is dubious about both claims. But far from revealing a dogged consistency, his skepticism about the “last resort” suggests a double standard. It sets the bar for using force much higher for non-state actors than it does for state actors—not because terrorists target civilians while the state does not, but because Walzer refuses to accept the terrorist’s “last resort” while he is ready to lend credence to the government’s, or at least is ready to challenge critics of the government who insist that war truly be a last resort.
For Walzer, the last resort argument of antiwar activists is often a ruse designed to make a government’s going to war impossible—and a muddy ruse at that. For “lastness,” he says, “is a metaphysical condition, which is never actually reached in real life; it is always possible to do something else, or to do it again, before doing whatever it is that comes last.” We can always ask for “another diplomatic note, another United Nations resolution, another meeting,” we can always dither and delay. Though Walzer acknowledges the moral power of the last resort argument—“political leaders must cross this threshold [going to war] only with great reluctance and trepidation”—he suspects that it is often “merely an excuse for postponing the use of force indefinitely.” As a result, he says, “I have always resisted the argument that force is a last resort.”
But when non-state actors argue that they are resorting to terrorism as a last resort, Walzer suspects them of bad faith. For such individuals, “it is not so easy to reach the ‘last resort.’” To get there, one must indeed try everything (which is a lot of things) and not just once. Even “under conditions of oppression and war,” he insists, “it is by no means clear when” the oppressed or their spokespersons have truly “run out of options.” Walzer acknowledges that a similar argument might be applied to government officials, but the officials he has in mind are those who “kill hostages or bomb peasant villages”—not those who claim they must go to war. Thus, Walzer entertains the possibility that governments, with all their power, may find themselves racing against time, while insisting that terrorists, and the people they claim to represent, invariably will have all the time in the world.
To be continued…