A large and impressive literature has arisen over the past fifteen years concerning the emergence, transfer, and sustenance of political norms in international life.Footnote 1 The presumption of this literature has been, for the most part, that the winds of normative change blow in a progressive direction, toward greater or more stringent normative control of individual or state behavior. Constructivist accounts detail a spiral of mutual normative reinforcement as actors and institutions discover the advantages of normative self- and other evaluation. There is also now much interesting research focused on the question of how to predict the emergence of future norms.Footnote 2
I focus, however, on a different issue here: the death of norms that had once seemed well internalized and institutionalized. The issue arises in relation to one of the most dramatic features in the defense policy of the United States since 2001: the crumbling of highly restrictive normative regimes prohibiting interrogatory torture and assassination as part of the “global war on terror.”Footnote 3 My aim here is to sketch what I take to be the central features of cases in which even norms that are clearly defined and apparently well internalized in a democracy nonetheless lose their grip on policy. The ultimate lesson, however, is an unappealing irony: While democracies surely do better than authoritarian regimes in adopting and internalizing certain kinds of constraints, in part because of a greater sensitivity to public mobilization around normative questions, that same sensitivity makes the long-term survival of these norms precarious. In particular, I suggest that force-constraining norms are most effectively internalized by coherent and relatively insulated professional cadres who see themselves as needing to act consistently over time. But in a democracy the values and arguments of those cadres are susceptible to being undermined by a combination of public panic and the invocation by policymakers of a public interest that can override the claims both of law and pragmatic restraint. Democracy, hence, can be at the same time both fertile and toxic: fertile as a source of humanitarian values and institutions, but toxic to the very institutions it cultivates.
The model I will describe may be of predictive use in helping us to see the special vulnerability of normative orders in democracies. But my hope is that it is also constructive in showing us how states and institutions committed to maintaining a certain normative order, especially democratic states, might best try to entrench those norms. While my argument is conceptual and philosophical, it draws on this recent history. I also add two qualifications to this article's title. First, I am not addressing all norms, but specific norms concerning the state use of force in national security policy. I therefore do not make claims about the generalizability of the conflict I describe to other norms, for example, norms of racial, sexual, or religious orthodoxy or hierarchy, or norms of reciprocal interaction.Footnote 4 Second, reports of a norm's death are frequently exaggerated, since norms can be latent, then resurrected. Arguably, the anti-torture norm was resuscitated by President Obama in 2009 when, as one of his first official acts as chief executive, he moved to prohibit cruel, inhuman, and degrading treatment of detainees.Footnote 5 I write here about the path of decay, whether or not that path is unidirectional, and why previously salient norms no longer seem to govern policy choice among political decision-makers.
Talking About Norms
Discussions of norms are now well entrenched in multiple literatures, and I do not mean to disturb, so much as make use of, some existing distinctions, drawn mainly from the philosophical literature. Nonetheless, because usage varies somewhat, I preface my discussion with some conceptual housekeeping.
To begin, I understand norms, at first approximation, as logical (propositional, in philosophers' jargon) reconstructions of actual social practice and judgment regarding behavior. Norm statements, when applied to individual or collective choices, actions, expressions, or feelings, yield a verdict in the register of the good or the right.Footnote 6 They differ from mere statements of regular behavior in that they implicitly or explicitly contain an evaluation of that behavior from some point of view. The comment that “People make eye contact when talking” states both a behavior and a norm, while “People blink every two seconds while talking” states a behavior only. We must be clear that the verbal statement of the norm is not necessarily the equivalent of the behavioral norm itself. For example, every human culture has norms concerning how close to stand to other speakers, depending on relative social status, age, gender, and many other social variables. These norms are well understood and regularly applied, both in positive behavior and as a basis for reaction and criticism. Yet these norms are very hard to articulate, except in a conclusory way: “Don't stand too close to your interlocutor.” As a more general matter, actual normative behavior will be consistent with different possible logical articulations. Thus, psychological norms are not identical with their verbal formulations.
Norms are the basis of accountability, both formal and informal, for actions. To say someone has done wrong or acted badly is to say that his behavior violates a norm. Their social reality is grounded in the fact that these rules have intersubjective support, in the sense that multiple members of the social group in question agree that such norms exist and share a practice of applying these norms to specific choices, acts, and so on—including a shared practice of arguing about whether the norm applies in a given instance. It is important to bear in mind that norms attach not only to behaviors but also to decisional processes (in terms both of considerations that should apply and of procedures for deliberation), feelings (for example, if one ought or ought not to feel ashamed in particular contexts), and expressions (that is, norms tightly regulate how and what one can and cannot say). Nonetheless, for simplicity I will usually refer to norms as governing acts.
Finally, the application of the norm typically includes a labeling of the act as roughly good/bad or right/wrong, and brings about emotionally laden expressions of praise or criticism. Norm application also includes the possibility of particular sanctions whose legitimacy is grounded, again in the first instance, in the validity of the norm. This is abstract but not impossibly so, I hope. My point is that we want a conception of a norm broad enough to encompass not just the binary logic of permission and prohibition but also the weighted valuing of a wide range of the things people do. For example, norms exist in most (American) university and business cultures against the public display of excessively strong emotion in a meeting. Someone who bursts into tears or screams with rage will be subject to criticism for this behavior (in the dimension of good/bad, usually, rather than right/wrong). But the norm violation may, despite the criticism, still be quite effective, either intrinsically or because of the additional shock effect of being a norm violation. Some people do violate the expressive norm precisely to profit from the ripples of norm-violation. This symbolic dimension of norm violation is important, and I will return to it later.
The social reality of norms thus has two faces: an individual behavioral component and a social component, with the latter constituted in part by the formal or informal institutions of labeling, (dis)approbation, and punishment. If the norm exists socially, and is understood to have wide (or universal) “jurisdiction”—that is, it is not a norm that applies only to a certain group, like a dress or speech code—then individual behavior will always be in the shadow of the norm, in the sense that behavior to which the norm could arguably apply can always be described as being in or out of compliance with the norm. Compliance is thus a weak notion: I may be in compliance with a norm just because I have no opportunity to violate it (if, for example, I would gladly drink and drive, but my host happens to have run out of alcohol). The stronger notion is that of norm-guidance, a concept that itself comes in stronger and weaker flavors. In the stronger flavor, an individual is norm-guided when the existence of the norm provides a reason for or against the act in question, the agent takes that reason into consideration, and assigns that reason significant, though not necessarily conclusive, weight. As an example, you are at a dinner party when the host makes a racist remark in passing. Norms of politeness might counsel (indeed require) you not to embarrass your host, while ethical norms might counsel (and indeed require) you to call him out. As you weigh the conflict of norms and social roles, you are guided by them, but at most one of the norms can be decisive.
In the weaker version, the norm has a psychological but not necessarily occurrent reality: it functions as what philosopher Michael Bratman has called a “filter,” screening certain deliberative possibilities from arising in the first place and thus preserving the coherence of our more complex plans.Footnote 7 Faced, for example, with a sensation of hunger, I do not typically weigh and then reject the possibility of simply grabbing food from a sidewalk market; instead, I calculate whether my hunger is worth the extra cost of buying food on the run, rather than waiting until I get home. While the norm against stealing food is not present to mind in my deliberations, it has a counterfactual reality: if I had not internalized the norm, I would presumably survey the possibility of simply grabbing food from an open container, irrespective of my legal entitlement. Whether or not a norm is deliberatively occurrent depends on a great range of factors, both individual and institutional, including how deeply (and by what process) it has been internalized.
The psychological dimension of norm-guidance requires some additional elaboration as well, given the variety of ways in which norms can enter into deliberative space. While I have said that one can be norm-guided even if the norm is not decisive, mere presence at mind cannot be sufficient to constitute guidance. As philosopher Bernard Williams remarked, a business person who discusses the assassination of a competitor, only to add immediately “but we can't do that, that would be wrong,” is not actually exhibiting norm-guidance.Footnote 8 The chief problem lies in distinguishing between being guided by the consequences of norm-violation (wanting to avoid a sanction), and being guided directly by the normative consideration. There is a further distinction, as well, between endorsing a norm, to oneself or in discussion, and actually following it. Indeed, one can find oneself guided by a norm while believing it to be irrational, as some find with family religious customs, or paying lip service to a norm that one finds ways to avoid. While it may be impossible for an outside observer to distinguish these cases,Footnote 9 I will say that behavior is minimally guided by a norm when the thought of violation occurs, and the benefits and costs of violation are weighed. Such weighing includes the reasons or values intrinsic to the norm itself (for instance, that it protects a right), and not merely the threat of an external sanction. Finally, a norm is weakly present in individual or collective deliberation when it is merely expressed as a possibly relevant consideration to the case at hand, even if only nominally.
To return to the example above, if I am very hungry and very broke, I may be disinhibited enough to consider theft as a solution to hunger. I will be weakly guided by the property norm if I weigh the putative wrongness of theft against the benefits to me (and perhaps an assessment of the actual harm I do the shopkeeper). And the norm will be weakly present just so long as I realize that I will be engaging in theft, but that fact has no intrinsic deliberative significance. Behavior can thus be norm-guided without being norm-compliant, such as when one weighs a norm but still violates it. And it can be norm-compliant without being norm-guided, as above, when norm-violation is not practical for independent reasons. Though it is a vexed semantic question whether behavior can be “guided” by a norm when one is only concerned with avoiding a related sanction, I will treat such cases of threat-based compliance as cases of compliance without guidance.Footnote 10
The metaphor of “norm death,” as I refer to it, suggests a reversal of the norm creation process: it is a waning process that moves from fully decisive filtering and guidance, to weighing, to what I have called weak presence—and potentially to the total irrelevance and invisibility of that norm.Footnote 11 In the domain of policy, norm death will be associated with certain distinctive transitions: (1) the emergence in discussion of policy options that were physically possible but were previously excluded from deliberation; (2) a shift from a discussion of norms couched in categorical terms to one couched in weighing terms; (3) the emergence of discussions in which the norm and its enforcement mechanisms figure centrally as obstacles to be minimized or avoided; and (4) the ultimate disappearance of even rhetorical evidence of the existence of a norm. To take an example with contemporary sting, in 1929 U.S. Secretary of State Henry Stimson closed the U.S. Cipher Bureau (the so-called “Black Chamber”), which was charged with (among other things) the task of deciphering foreign embassies' communications, with the famous pronouncement, “Gentlemen do not read each other's mail.” But Stimson's concern quickly became quaint, as the new institutional norm of respect for diplomatic cables decayed into a practice of simply not getting caught reading them.Footnote 12
I do not mean to suggest that the path to norm death is always uniform, nor that each step along the path is always taken, or always visible. But it can nonetheless provide a model for plotting institutional change over time. I turn now to discussing two examples of decay and (possibly) death.
The Emergence of Violence-Restraining Norms
When we speak of the institutionalization of a norm, we generally have in mind a point on a spectrum ranging from the weak presence of the norm as a nominal deliberative consideration, to intermediate internalization as a significant decisional factor, to its deep internalization as precluding countervailing considerations. The aim of norm entrepreneurs, according to recent (if still speculative) work on norm dynamics, is to introduce norms into both decisional space and public discussion along a sequence running from potential relevance, to guidance, to strong internalization. The process of moving individuals toward partial or full internalization usually relies on institutional authorities developing a system of external sanctions and rewards that function both to mark the importance of the new norm and to provide assurance that one's own compliance will not be unilateral. Norm entrepreneurs can also be relevant at this stage, both in providing a pressure point for state agents who may be less than enthusiastic about enforcing the norm and in publicizing the success of the norm among the broader population or in other communities.Footnote 13
Much of the literature on norms and international relations concerns the efforts of norm entrepreneurs to propagate new norms, often through pressure from other nations and from NGOs, and focuses on the dynamics mentioned briefly above. Such norms might guide domestic conduct, establishing a new mode of behavior. For example, entrepreneurs have had significant success in helping to propagate norms condemning violence against women into territories and cultures in which such violence is commonplace.Footnote 14 While the direct influence of these anti-violence norms on the behavior of potentially violent actors is still hard to discern, they have in fact influenced political and governmental actors in places that had heretofore tolerated violence, increasing internal and external pressure to prevent and prosecute attacks on women. The response to the recent and well-publicized gang rape of a woman riding a bus in India is just one sign of this effort.Footnote 15 Domestic and international norm entrepreneurs were able to seize upon and highlight the episode, demanding both an immediate prosecution of the individual wrongdoers and greater cultural awareness within India of the problem of violence against women.
There are, of course, stronger examples of norm change at the individual level, driven usually in coordination by state and private actors: the standard examples include the emergence of norms against public smoking, littering, drunk driving, and public urination. It is striking that these norms all have a reasonably common content, namely the treatment of public space, or the exposure of the body in public, and fit into a general neoliberal narrative of increasing personal responsibility. It is also unsurprising that such norms have grown fastest in the soils most fertile to neoliberal conceptions of individual relations to the public sphere, namely the United States and Western Europe. These success stories, heavily relied upon by legal scholars who study norms, are less a matter of transnational export than one of parallel (though obviously mutually self-aware) transnational developments.
We might usefully contrast these success cases with the deliberately transnational attempts by Western states (especially the United States) and economic actors to propagate norms concerning intellectual property protection in China. Content providers have met with significant success in establishing property norms and licensing systems in the United States, but have failed in their efforts in China.Footnote 16 The norm propagation efforts will be stymied unless and until there is both active support from the Chinese state and its political elites, and effective reception of these norms by individual consumers.Footnote 17 Both levels of actors need to accept the norm for its guidance to be effective.
By contrast, norms that constrain state conduct directly do not need to be mediated through low-level communicative and enforcement practices. The post–World War II period, and especially post-Nuremberg, can be described as one of sustained internalization of violence-restrictive norms among states. While the process has been halting, with occasional backslides, a norm of target discrimination in aerial bombing, embodied in the Geneva Conventions, has come to be deeply routinized in modern (predominantly NATO) militaries. Entrenchment of this norm has benefited from technological developments enabling more accurate targeting while reducing collateral casualties, the implementation of highly legalistic target review processes, and inculcation in officer training. Norms against desecration of dead enemy combatants have also become deeply internalized, in individual psychology as well as institutional practice.Footnote 18
Torture
Until recently, one of the most impressive successes of the post-war period was the widespread acceptance of the norm against interrogatory torture. (The norm against terroristic torture has been in place for much longer.) In the United States, at least since the 1980s, formal prohibitions of torture in both law enforcement and intelligence contexts have become fully embodied and internalized norms affecting individual behavior. Police station-house torture (the “third degree”) in the United States (and, I suspect, in many European states as well) went from being a relatively routine practice until the 1940s and 1950s to a source of massive scandal and liability.Footnote 19
Torture has, of course, been prohibited under the Geneva Conventions, the Universal Declaration of Human Rights, and the International Covenant on Civil and Political Rights, and the prohibition has arguably existed longer as a jus cogens customary norm.Footnote 20 And the practices constituting torture were already proscribed, as forms of battery, in both domestic criminal law and the military code of justice. Torture was therefore symbolically prohibited, but usually not subject to criminal enforcement. As Darius Rejali has argued, most democracies, including the United States, largely eliminated the traditional torture techniques of severe beatings, burns, and electrocutions after the 1960s, though the United States was a willing supervisor of forms of torture in Vietnam, substituting less messy, and hence more concealable, forms of the practice.Footnote 21 Such types of torture include forms of extreme physical pressure through, for example, “Palestinian hanging” by handcuffed arms, prolonged or intense subjection to cold, long-term sleep deprivation, shaking, mock executions, and waterboarding. While the occasional judicial opinion has distinguished some of these techniques as merely cruel and inhumane (and therefore forbidden) but not, semantically, torture, others on this list clearly qualify as such; and their infliction by American, British, and Israeli military and covert personnel makes clear that they constitute state-inflicted torture.Footnote 22
The major change in the normative environment surrounding the practice was the passage in 1984 of the UN Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, which required signatory states to agree to prosecute their own citizens who violate the treaty. Even the Reagan administration, which was generally hostile to international restrictions, became a signatory, albeit with a rider constraining U.S. obligations regarding cruel, inhuman, or degrading treatment to what is already prohibited by the U.S. Constitution. The Convention was ratified by the United States during the Clinton administration in 1994, and was implemented in part through the adoption of domestic criminal legislation (18 U.S.C. 2340A) prohibiting torture.Footnote 23 It is true that the Clinton administration extensively “rendered” captives with potentially valuable intelligence to allies willing to use torture methods. But the “extraordinary rendition” practice—and the desire of a series of presidential administrations to maintain a deniable distance through the fig leaf of diplomatic assurances that the ally would respect the dignity of captives—is itself evidence that the domestic anti-torture norm had stuck.
More generally, direct torture was expunged from intelligence operations, both military and CIA, on the basis of four factors: the threat of criminal prosecution seems to have been very effective; the military institutionalized its anti-torture rules through explicit training of interrogators, convincing them that criminal sanctions awaited if they acted coercively with prisoners; interrogators were led to take professional pride in eliciting information without torture; and both lawyers and staff seem to have adopted the belief that U.S. engagement in torture would put American prisoners of war directly at risk.Footnote 24 The result was a stable military anti-torture culture, and a comparable culture in the FBI and in police departments: a normative conviction that torture was wrong, that violating the norm bore penal consequences for the torturer, and that the norm was supported by a culture of general international adherence. The anti-torture norm thus held in the United States for at least fifteen years, and probably more, across several administrations. As of September 10, 2001, it was safe to predict that the United States would continue to apply increasing international pressure on its allies in Latin America, Africa, and the Middle East to refrain from torture. It was hard to imagine that the United States would shortly be opening torture sites in Afghanistan, Iraq, Poland, and Thailand.
Assassination
The emergence of an anti-assassination norm is a longer process, with ebbs and flows. According to Ward Thomas, who has provided the best historical and analytical account of the practice, assassination—understood in its broad sense as the intentional killing, on or off the battlefield, of specific enemy political and military leaders—was proscribed by the Romans, then widely adopted as a technique of realpolitik in the Middle Ages and Renaissance.Footnote 25 (Even then, Thomas notes, the knightly code of chivalry disdained stealthy forms of killing.) Thomas quotes Hans Morgenthau on the practice in Venice: “The Republic of Venice, from 1415 to 1525, planned or attempted about two hundred assassinations for purposes of its foreign policy.”Footnote 26 Indeed, throughout the sixteenth century, in England, Spain, and France, assassination of heads of state was the “great game” of its day, sanctioned by the Vatican. But the practice shifted dramatically in the seventeenth century; and, as Thomas plausibly argues, it shifted in synchrony with the emergence of Westphalian sovereignty.
By the early modern period writers had begun to remark on the distinction between treacherous assassination (that is, making use of those in the victim's trust to deliver a poison or dagger) and stealthy targeted killing. At the end of the sixteenth century Alberico Gentili had already laid the groundwork for the norm against assassination in his De Jure Belli, where he argued not only that the practice of political assassination was “shameful” but also that it threatened the international system with instability, because it was a practice that would inevitably be reciprocated.Footnote 27 Hugo Grotius likewise distinguished between deceitful killings, which typically rely upon a particular subject's betrayal of his ruler, and what today we might call targeted killing. Regarding the latter, he says: “For to kill an Enemy anywhere is allowed, both by the Law of Nature and of Nations (as I have said already), neither is it of any Concern, how many or how few they be who kill or are killed.”Footnote 28
Nonetheless, Grotius recognized strong reasons to prohibit even nontreacherous targeted killings of individuals. First and foremost was the reciprocity concern: that the practice would render each ruler more vulnerable if it was unconstrained.Footnote 29 As Thomas further notes, the Westphalian system of mass armies offered an equilibrium in favor of strong states. These states could take and hold territory through direct military force; assassination, as a weapon of the weak, played against their strengths. Since these were the states that could mint new international law norms, it is little wonder that the norm would reflect the balance of their strategic interests.
By the time military codes were drafted in the nineteenth century, the norm was fully crystallized and universalized. Notably, the 1863 Lieber Code, Article 148, states:
The law of war does not allow proclaiming either an individual belonging to the hostile army, or a citizen, or a subject of the hostile government, an outlaw, who may be slain without trial by any captor, any more than the modern law of peace allows such intentional outlawry; on the contrary, it abhors such outrage. The sternest retaliation should follow the murder committed in consequence of such proclamation, made by whatever authority. Civilized nations look with horror upon offers of rewards for the assassination of enemies as relapses into barbarism.Footnote 30
It is worth emphasizing that Article 148, titled “Assassination,” does not distinguish either among means of killing, treacherous or otherwise, or among targets, whether general or private, ruler or subject. It is a blanket prohibition on individualized killing.Footnote 31 The Lieber Code reflected American military and intelligence practice until the Vietnam War, with the notable exception of the Yamamoto assassination. The 1960s and early 1970s were exceptional, as activities during this time, led largely by the CIA, involved not only the nearly comic assassination attempts against Castro but also the much more lethal CIA-directed Phoenix program of targeted killing in Vietnam. However, following the Church Committee's congressional investigations into CIA practices, and President Gerald Ford's 1976 signing of Executive Order 11905Footnote 32 banning political assassination, the norm was restored to legal force in the United States. Indeed, it seems that the norm was internalized to such an extent that the Reagan administration wrestled with the question of how to present the targeting (by aerial bomber) of Muammar Qaddafi in 1986. The same was true for the Bush I administration's targeting of Saddam Hussein in 1991. These attacks were very plausibly legitimate targetings of enemy commanders-in-chief under Article 51 of the UN Charter, which recognizes the right of individual and collective self-defense, and yet the White House clearly regarded the depiction of the attacks as an extremely sensitive matter.
Perhaps the most dramatic evidence of the force of the anti-assassination norm in the United States is the U.S. response to the Israeli practice of targeted killing. By contrast to the United States (and in line with the older European practice), assassination has been a relatively overt tool of Israeli policy since before the founding of the state.Footnote 33 Moreover, Israel has made use of both deceptive and overt targeted killing techniques. For example, Israeli intelligence used mail bombs to kill Egyptian military leaders and German scientists in the 1950s; the 1970s saw the retaliatory killings (“Operation Wrath of God”) for the Black September PLO attack at the Munich Olympic Games; and the 1980s and 1990s brought assassination attempts (through bombs and poison) against various PLO leaders. While these assassinations were robustly criticized outside Israel, they met with general acceptance (when they were successful) within. But with the emergence of the Second Intifada in 2000, the scale of Israel's targeted killing program, and public discussion thereof, dramatically increased, and gradually changed from a system of stealthy assassination (or “liquidation”) to a more militarily overt selection of individual targets, with a different proposed nomenclature: “preventive killings.”Footnote 34 The catalytic event was the killing by sniper of Fatah activist Dr. Thabet Thabet in December 2000. An overt act, roundly criticized globally and by the Left in Israel (which saw in Thabet a potential partner for peace), it prompted the Israeli military and civilian leadership to create targeted assassination approval structures and to litigate the program in the public eye. The resulting program, through which targets are identified and discussed within the Israeli security services, became a mainstay of Israeli defense policy. According to the estimates of the Israeli human rights group B'Tselem, Israel killed 232 intended Palestinian targets between 2000 and the start of the 2008–2009 Gaza incursion, and another 39 between then and July 2014.Footnote 35
Israeli support for the targeted killing policy has been strikingly high, both among the public and policy elites, and there are no reports of significant change in this support over time.Footnote 36 Indeed, the generally pro–civil liberties Israeli Supreme Court interpreted legal sources to arrive at a relatively permissive standard for the targeted killing program.Footnote 37 By contrast, U.S. official and public attitudes to the Israeli practice were highly critical, consistent with a broad norm against assassination. Following the acceleration of Israel's program in July 2001, U.S. Ambassador to Israel Martin Indyk expressed harsh criticism of targeted killing on Israeli television, saying “The United States government is very clearly on the record as against targeted assassinations. They are extrajudicial killings, and we do not support that.”Footnote 38 An August 2001 poll showed American public disapproval of Israel's policy at 68 percent.Footnote 39 In 2002, State Department Spokesman Richard Boucher repeated the U.S. rejection of Israel's targeted killing policy in the course of distinguishing it from the United States' drone-based targeted killing in Yemen of Qaed Salim Sinan al-Harethi.Footnote 40 Most notably, the U.S. State Department's annual human rights reports on Israel and the occupied territories included targeted killings by Israel under the rubric of “serious human rights abuses by Israel” (at least until its 2004 report, when targeted killings were included gingerly under a longer list of Israeli and Palestinian cases of “Excessive Force and Violations of Humanitarian Law”).
To be sure, the anti-assassination norm was much less securely embedded than the torture norm within international law. As in Israel, U.S. military law writers have long argued for a sharp distinction between “political assassinations,” understood as killings for political purposes and prohibited by Executive Order, and permissible individualized killings of military leaders who held simultaneous political office.Footnote 41 Moreover, under the self-defense rubric of Article 51 of the UN Charter, participants in hostilities can generally be targeted and killed, and state leaders could arguably count as such, given their roles in directing military operations. While the International Covenant on Civil and Political Rights and other elements of human rights law and the law of armed conflict generally prohibit extrajudicial killings, they permit such killings where they are necessary to avert a deadly threat posed by a combatant (or civil threat in the law enforcement context) and are proportionate to that threat.Footnote 42 Furthermore, as a matter governed in the United States by executive order rather than criminal law (unlike torture), assassination is not generally subject to criminal prosecution.
When Everything Changed
On September 11, 2001, the Twin Towers fell. In very short order, the CIA was told, presumably by Vice President Cheney, to “take the gloves off,” and to “work the dark side” to prevent further attacks by al-Qaeda.Footnote 43 The CIA and the Department of Defense quickly began to engineer a torture program, euphemistically called “enhanced interrogation,” by charging the contract psychologists James Mitchell and Bruce Jessen with reverse engineering the Survival, Evasion, Resistance and Escape (SERE) program, in which they had worked training American soldiers to resist torture. Jessen and Mitchell developed the program of techniques mentioned above, of physical pressure and isolation, including waterboarding. These techniques were used by the CIA against so-called “high-value” prisoners in Afghanistan and Poland and propagated by the Department of Defense to Guantánamo, from which they then found their way (through the transfer of military intelligence officers) to Iraq and Abu Ghraib prison.
Of course, developing the physical techniques for torture was only one part of the revolution in norms. Because the anti-torture norm was embedded in the Uniform Code of Military Justice and federal criminal law, legal teams were put to work to develop legal space in which torture could occur with minimal risk of later criminal prosecution. Led by John Yoo, and under the direction of Vice President Cheney's counsel (and later chief of staff) David Addington, Justice Department lawyers created a set of memoranda that, however implausible their reasoning, could serve as a good-faith legal defense to CIA or military personnel.Footnote 44 (Since torture is a crime of specific intent, a good-faith defense would exonerate.) The torture program continued until the Bush administration shut it down in 2007, and it was formally closed by President Obama immediately after his inauguration.Footnote 45 During the period of 2002 to 2007 hundreds of detainees were subjected to a variety of forms of abuse, although only a handful were waterboarded. With the exception of a few low-level soldiers at Abu Ghraib prison, no one in the CIA or the military has faced criminal punishment for any acts of torture or other forms of illegal, cruel, inhuman, or degrading treatment of prisoners.
The anti-assassination norm has shifted more slowly, probably in large part because of the recent and rapidly evolving Predator drone technology. Initial targeted killing efforts were carried out by Special Forces units, and were a continuous part of counterinsurgency efforts in Iraq. The first reported drone attack on an al-Qaeda suspect was in Yemen in 2002.Footnote 46 But the pace of drone strikes began to rise quickly under President Bush in 2008, and drone strikes have become the anti-terrorist technique of choice for President Obama. The London-based Bureau of Investigative Journalism, a generally reputable source, estimates that there have been roughly 390 U.S. strikes in Pakistan, more than 1,000 in Afghanistan, and another 65 to 77 in Yemen. Targets include putative militants, drug lords intertwined with Taliban or al-Qaeda activities, and targets identified by their “signatures” alone—presumably groups of military-aged males meeting in remote locations.Footnote 47
From the perspective of the Obama administration, the new targeted killing policy represents a fully legitimate form of counterterrorist security policy. John Brennan, then President Obama's chief counterterrorism adviser (and later CIA director) and a principal architect of the drone program, gave a speech in 2012 in which he argued for the ethical value of drone strikes, as grounded in the international humanitarian law constraints of necessity (to use force only against imminent threats), proportionality (to ensure that collateral damage is not excessive), and humanity (to avoid causing unnecessary suffering).Footnote 48 One can, of course, dispute whether the drone program, subject as it is to the usual forms of policy creep, actually exhibits any of these characteristics. Brennan's account also fails to mention the collateral harms to psyche and liberty suffered by people living under drone surveillance and in dread of being in the vicinity of a drone attack.Footnote 49 But the principal point is that, from the point of view of the policy elite, the norm of broad prohibition has disappeared, replaced by a norm of broad permission. As of this writing, the American public stands largely behind that normative shift: polling conducted in 2014 put U.S. support of drone strikes abroad at 66 percent of all voters.Footnote 50
What the Norms against Torture and Assassination Have in Common
Let us now consider whether there are commonalities in the two stories of norm disintegration, notwithstanding the clear difference in the atmospheres in which the policies were developed. First, as Jane Mayer documents, the anti-torture norm was demolished in a mood of real panic by political elites, who not only feared the political costs should a new terror attack succeed but also personally feared being targeted by terrorists. By contrast, the assassination policy seems to have been crafted more coolly, as a way of using technology to serve security interests at substantially lower American political and human risk.Footnote 51 Second, whereas implementing the torture policy involved direct conflict with members of the military and counterterrorist officials who saw both personal and reciprocal value in the anti-torture regime, there have been few direct signs of military resistance to the drone policy. Third, the legal environment, both domestic and international, is considerably more plastic with regard to assassination, although international opinion on the matter appears to be nearly as harsh on the drone policy as on torture. Finally, there appear to be more clearly demonstrable gross benefits to security policy from the drone strikes than from the torture policy. (In both cases, however, any gross benefits may well be swamped in net terms by the blowback these policies can potentially cause.)
That said, I believe we can nonetheless identify a number of common factors that enabled the swift collapse of these stringent norms. One of these is conceptual; the other is organizational.Footnote 52 The conceptual point is this: Stringent norms rest on a moral psychology of right and wrong, not of weighing good and bad. The anti-torture norm is grounded in the first instance in a categorical inhibition that gets its motivational force from a conception of the dignity of the person being tortured, and in the second instance from the integrity of those complicit in his torture. The wrongness of torture lies in the total control by the torturer of the psyche of the tortured. Torture not only degrades the person being tortured by annihilating his autonomy but also degrades the torturer by enabling libidinal impulses of dominance to surface and override the civilizing restraints of moral codes.Footnote 53 Torture, in other words, is no way for a warrior to fight. When the anti-torture norm has psychological bite, it is because it connects to a deep and shared conception of dignity for both agent and recipient. While the pragmatic arguments are founded in considerations of reciprocity and force, they are parasitic on this more basic conception of the wrongness of torture.
What defeated this torture norm was thus not fear itself, since fear is a constant in war. Instead, I want to suggest the norm was defeated by the utilitarianism of fear. By this I mean that policymakers' deliberations shifted from choosing the best among a principle-restricted set of options to considering a full set of options, where each was weighed in terms of probable U.S. lives saved versus non-U.S. lives lost.Footnote 54 I call it a utilitarianism of fear because the choice to reintroduce torture to the intelligence armory reflects the reductionism of a panic reaction, in which nonsurvival values are seen as irrelevant to the decision at hand.Footnote 55 The extensive effort by the Office of Legal Counsel to create an environment of broad legal permission for torture reveals that what had been a stringent intrinsic constraint against torture became a morally irrelevant institutional obstacle to a policy objective of maximal information gathering. Attempts to cash out the normative constraint in the pragmatic coin of reciprocity concerns were doomed to failure, given that there was no reason to think al-Qaeda members taking U.S. hostages would respect an anti-torture norm in the first place.
In the immediate post-9/11 environment, where the threat of weapons of mass destruction was taken very seriously, utilitarian reasoning led to a kind of singularity: if the threat was catastrophic, no matter how improbable, then anything was permitted on the utilitarian calculus. That torture can be made to represent an ethical, and not simply a ruthless, policy choice is essential to its psychological success. It has also come to be well supported by the American public: with some ebbs and flows, public support for the occasional or routine torture of suspected terrorists stood at 53 percent as of 2011.Footnote 56 Since 2004 (the revelations of Abu Ghraib) no more than 32 percent of Americans have said they always oppose torture of suspects. By contrast, even in November 2001, just two months after the attacks on the United States, 66 percent of those polled said they could not envision a scenario in which they would support the torture of terrorism suspects.Footnote 57
The emergent drone policy represents the same conceptual dynamic. While Brennan's defense of the U.S. drone policy appears to rest on the ethical considerations he articulated in his 2012 speech noted above, we should recognize that these are constraining norms rather than legitimating norms. That is, the norms of international humanitarian law (IHL) do not themselves justify the infliction of violence, except in terms of the underlying utilitarian norm of necessity, whose content is, effectively, that a given act is necessary in the sense that its performance is the only route to a net reduction in the relevant costs (here, costs to U.S. strategic interests).Footnote 58 Indeed, the IHL norms are fully consistent with a utilitarian view that accords some weight to the utility of the third parties at risk; they merely operationalize the weighing of the costs to those third parties. The anti-assassination norm, by contrast, was rooted in at least some values that are difficult to defend in utilitarian terms, namely, the values associated with openness and a fair fight and the rejection of perfidy—values of an emergent sense of military honor.
The values of honor and dignity are fragile, especially in a context of fear. But the anti-assassination norm is weaker still. The reciprocity argument against targeted killing is, outside the context of perfidy, purely one of international stability. Such concerns have little force in the conflicts in Afghanistan or with al-Qaeda, both because stability is already gone and because senior U.S. figures face little real vulnerability to assassination in these conflicts. Thus, under even very light conceptual pressure, the anti-assassination norm will fold easily. The chief problem, as I see it, is that with the collapse of the anti-assassination norm goes a broader norm that restricts interstate violence to cases of gross threats or wrongs. A drone-based killing policy, especially, normalizes interstate violence as a response to relatively low-grade threats, and so it has the effect, typical of a utilitarian policy, of lowering the bar for when state interests justify war. The collapse of the anti-assassination norm thus rests on the collapse of a broader structure of anti-consequentialist thinking as well.
The organizational point parallels the conceptual point in the two cases. As I have mentioned, resistance to the collapse of the anti-torture norm could be found primarily among the cadres of professional interrogators, military and paramilitary, as well as among (many) military lawyers. These were the individuals with whom the considerations of honor had deep traction, and who saw themselves as subject to a pervasive sense of normative discipline. The moral and political transformation of the torture policy was created by civilian leaders, both policymakers and lawyers, who lacked any evident ethical mooring in a dignitarian or honor-centered conception of national security values. Civilian policymakers are, furthermore, directly or indirectly electorally responsible, and the electorate also does not share the deontological sentiments of the professional cadres. It was, therefore, retrospectively likely, if not inevitable, that a system of civilian control, in a context of directly perceived risk, would lead to the sidelining of the professional ethical concerns in favor of pragmatic, publicly visible values.
Similarly, the trend of relying on drones for a wide range of policy interests outside the “hot” battle zone is a direct function of a novel capacity of civilian leaders to control lethal military technology, rather than having to deploy that technology through the broader administration of a military general staff. The drone policy can accelerate much more quickly, and at much lower political cost or electoral threat, than a war mobilization into Waziristan, Libya, Mali, or Yemen, precisely because the decision-making can be easily centered in the Executive Branch and the operations relatively easily controlled (either via CIA or Air Force command centers). It is noteworthy that, as of this writing, despite suggestions by President Obama that he would be transferring even covert drone operations from the CIA to the Defense Department in order to increase “transparency,” there is no evidence of an actual shift in the process.Footnote 59
I recognize that this interpretation of the death of the torture and assassination norms is contentious in (at least) two ways.Footnote 60 First, it is hardly the only possible interpretation of the change in policies. One could argue instead not that a categorical norm prohibiting the conduct had decayed, but that the original norms at issue contained submerged conditional exceptions—in particular, that they only prohibited torture and assassination if the stakes were sufficiently low. The attacks of 9/11 changed the stakes and so triggered the exceptions, but the same norms are still in play today. Second, I have interpreted the counter-position to the restrictive norms as essentially utilitarian, a matter of reducing prohibitions to negative weights. But one might instead interpret the counterargument as one of coming to treat the nonutilitarian mandatory norm of a duty to protect one's citizens as trumping the normative constraints in force. I will address these two points together.
On the first point, the historical evidence here, as always, underdetermines the appropriate interpretation, and it is possible to see this history as a continued application of discontinuous (exception-sensitive) norms. It is, however, important to see that the first objection is not a challenge to the best ontological account of the norms at issue. Ontologically, there may be no reason to prefer an account of an exceptionless norm that is suspended exceptionally versus an exception-containing norm that is applied.Footnote 61 But what is at issue is the psychological reality of these norms in the minds of decision-makers: whether they understood the prohibitory norms as merely presumptive rather than categorical. And here there is evidence to suggest that we have witnessed a discontinuous process of decay. First, the legal renderings of the torture and assassination norms were clear and categorical, providing for no exceptions or affirmative defenses. While the actual deliberative norms need not be identical to the formal legal statements, there is no evidence that these norms were understood as having silent exceptions during the time of their full-strength guidance. Military and FBI training, for example, emphasized the absolute nature of the anti-torture norm. While there were doubtless policymakers who understood the norms to be suspendable in times of emergency, these views were not articulated publicly until after 9/11. In the case of assassination, it is true that the post–Church Committee norm appears to have been suspended for purposes of (unsuccessfully) targeting Qaddafi and Hussein, but serious efforts were made to show that these were exceptional cases, justified as such, and not part of a general policy of permitting such killings whenever certain conditions were met. Post-2002, the CIA and U.S. military are clearly operating under a very different, exception-based regime, wherein the possibility of targeted attacks on state leaders is openly discussed (for example, Qaddafi in 2011, Assad in 2013). I think there is no plausible way to understand post-2002 developments as anything other than the emergence of a new deliberative paradigm concerning targeted killing. For both norms, then, the journalistic evidence suggests a deliberate policy shift from general prohibition to managed permission, albeit with a further shift in the torture norm regime after 2009.
As to my claim that honor- and dignity-based norms have been superseded by a utilitarian logic, here again my argument is about deliberation, not ontology. It is true that a norm insisting on the priority of citizen defense can mimic a utilitarianism that places no weight on the interests of non-citizens. Such a self-defense norm will presumptively license using all manner of interrogation or interdiction that will reduce the probability of an attack on a state's citizens, or on other national interests. Put another way, the logic of nationalism (a sort of collective ethical egoism) coincides with the logic of realism. Operationally, a self-defense norm coincides with a utilitarian norm of maximizing U.S. lives saved (or of minimizing the risk of U.S. lives lost). And the political economy of democracy, according to which officials fear voter reactions to lives lost, is consistent with reinforcing both logics. Both are hostile to the ways in which dignity- and honor-based norms exclude otherwise productive options from the scene of deliberation. Under the instrumental pressure of either self-defense or utilitarian values, the values of dignity and honor will come to seem quaint or fetishistic.
Conclusion: Death . . . and Rebirth?
What I have tried to provide is a road map for the collapse of norms whose institutionalization was hard fought. The unthinkable became thinkable, was thought, and then was done. The irony of this story is that democratic politics, while generally friendly to human rights norms and treaties, are less hospitable to those norms when put to the test. While values of equality, fairness, and due process obviously do well in democratic regimes, counter-utilitarian and honor-based norms may do poorly in periods of stress.
The story here is illustrative of a general tension within democratic polities, between the values of relatively insulated, nonaccountable professional groups and the demands of public accountability. This is a tension that often plays out in the design of institutions to protect certain forms of decision-making (notably central banking) from direct political pressure. What is striking about the examples of torture and assassination is that the insulated groups—the military and the FBI in particular—were protecting values rather than expertise. The task of institutional design, therefore, might be seen as one of ensuring the continuing salience of those values: ensuring that the deliberative institutions in which they retain a grip continue to play a prominent role in security policy, so that their views are not easily sidelined through bureaucratic maneuvering.
There is an important further question of whether the newly thinkable can become again unthought at the public level. Opinion polling on torture reveals a mercurial public. Immediately following the revelations of the Abu Ghraib abuses in 2004, 32 percent of Americans polled said they thought torture was never justified, against 43 percent saying it could often or sometimes (as opposed to rarely) be justified. Five years later, and following emphatic defenses of torture by former Vice President Cheney, the number saying that torture was never justified had dropped to 25 percent, with 50 percent saying that it could often or sometimes be justified.Footnote 62 While these are not enormous shifts in public opinion, they are significant, and it is striking that public opinion seems to have moved in a direction opposite to official action post-2009.
If, like Vice President Cheney, you regard the norms against torture and/or targeted killing as obstacles to a rational policy of national security, then the values of nonaggregative honor and dignity are themselves the problem to be overcome. Indeed, one could then justify the substitution of relatively precise drone-based strikes for imprecise ground strikes as a serious advance for humanitarian values—as truly bringing properly democratic values to bear in areas of national security. But I suspect that even advocates for such a policy would recognize that democratic accountability comes with its own set of evaluative costs, costs that can build up to the point of destroying values central to democracy itself.