Nations that want prosperity and security must innovate.Footnote 1 Scholars across many disciplines study why specific organizations are innovative. There is broad agreement that openness, defined loosely as the practice of encouraging employees to share ideas internally with colleagues, spurs innovation.Footnote 2 Internally open organizations foster competition and collaboration between otherwise siloed divisions,Footnote 3 diffuse ideas,Footnote 4 and encourage a free flow of information.Footnote 5
One set of institutions consistently bucks this trend: secretive intelligence and national security organizations. Agencies like the US's Central Intelligence Agency (CIA) and Defense Advanced Research Projects Agency (DARPA), or the UK's MI6, often discourage internal sharing, yet have consistently produced radical innovations. These include the reconnaissance satellite,Footnote 6 autonomous robots,Footnote 7 and lithium-iodine batteries.Footnote 8 Failed projects—from turning cats into listening devices to psychic spying—also speak to their vision.Footnote 9
Are these organizations more innovative because of secrecy, or in spite of it? To answer this question, we study a principal–agent model of organizational innovation.Footnote 10 We adapt the payoffs to reflect the actors’ sensitivities to political costs and benefits.Footnote 11 We allow researchers secrecy during the conceptualization of innovation. We then contrast mechanisms for innovation in open versus secret public-sector institutions.Footnote 12
Secrecy allows actors to distribute the political costs of authorizing each phase of politically sensitive research. In open institutions, lower-level researchers cannot pursue pilot programs to determine whether a concept is viable without their manager knowing. When a manager learns of a novel but controversial idea, they will not even approve pilot research because they do not want to be responsible. Secrecy early in the innovation process gives an enterprising researcher cover to collect evidence (at a larger personal cost) that the novel idea is viable. If it shows promise, the researcher seeks manager approval.
Our mechanism generates two surprising results for international relations principal–agent theory.Footnote 13 First, the researcher turns to secrecy even if their preferences are perfectly aligned with the manager's. Second, the manager does not monitor the researcher even if monitoring is costless and perfect and the manager knows that the researcher is exploiting secrecy only to do something the manager would not allow them to do. These results follow from a don't-ask-don't-tell dynamic made possible because secrecy allows actors to distribute costs. Distributing costs alleviates preference asymmetry. The manager knows that if they monitor the researcher they will discover something unsavory and shut the research down. However, if they remain ignorant, they can incur a small share of the costs associated with highly controversial pilot research and still benefit when it shows promise.
We find that, on average, secretive national security institutions should produce different kinds of innovations from open institutions. All political organizations pursue ideas that in expectation serve the national interest and involve uncontroversial research practices. However, only secret organizations can pursue initial concepts that involve large risks and rewards. Before pilot research is carried out, these ideas are too controversial or novel for open public-sector organizations to pursue. With hindsight, they represent both the path-breaking innovations the intelligence community is known for, and some of its shameful failures.
We illustrate our theory using several cases: attempts to master mind control (MKULTRA) and innovations to facilitate overhead reconnaissance (via the CORONA satellite and, in the online supplement, the U-2 spy plane). We chose these cases because of historical features that provide inferential leverage. They are also important for, but overlooked by, international relations scholars. MKULTRA's exposure in the 1970s affected intelligence reforms and legislative oversight for decades afterward. The case is especially important given that, according to secondary accounts, the CIA and DARPA recently looked into brainwashing.Footnote 14 Existing studies examine the effects of reconnaissance technologies on conflict dynamics,Footnote 15 but overlook what it took to develop them. We show that these research programs might have been shelved but for internal secrecy.
We contribute to several debates. First, we provide a logic for the national security origins of radical innovations. For international security scholars, this clarifies selection into technology shocks that can causeFootnote 16 and offsetFootnote 17 conflict. It also helps explain the conditions under which state-led, highly productive innovations of interest to international political economy scholars occur.Footnote 18
Second, we refine theories of military innovation.Footnote 19 Conventional wisdom holds that while “militaries have strong incentives to innovate in order to succeed in war,” they are “slow to innovate” because of hierarchical structures and entrenched interests.Footnote 20 We connect agency-wide incentives to respond to international threats with individual-level incentives for innovation.Footnote 21 We highlight an unexplored bureaucratic feature: agencies that practice internal secrecy owing to fear of foreign rivals. Since the level of internal secrecy varies across organizations and time, we also explain unexplored variation in peacetime innovation. Moreover, we broaden this research to agencies beyond the military, and focus on technological rather than doctrinal innovation.
Third, we contribute to secrecy research in international relations. Most find that secrecy reduces welfareFootnote 22 because it creates uncertainty at the international levelFootnote 23 and facilitates domestic inefficiencies.Footnote 24 We expand the limited research on efficient secrecy by showing that it allows states to pursue welfare-enhancing projects, not only to avoid costs.Footnote 25 We also define internal and external secrecy and explain their connection.
Concepts
Our theory is closest to principal–agent models that examine rationalist, organizational innovation.Footnote 26 We adapt this model to fit public-sector employees and to incorporate secrecy. Others examine principal–agent problems in political institutions,Footnote 27 but they typically focus on policymaking or electoral accountability, not innovation.Footnote 28 Some examine principal–agent problems unique to international relations, but they emphasize interactions between two statesFootnote 29 or foreign militaries.Footnote 30 We focus on a handful of employees working within government agencies. We detail the differences in Appendix D (in the online supplement). Our theory shares a substantive focus with theories of national security innovation, but our approach is different. We review how we complement this literature in Appendix C. Here we further develop our two central concepts: innovation and internal secrecy.
Innovation
Innovation is the process of taking a novel idea and converting it into a working device or policy.Footnote 31 Innovation occurs only after (1) a novel idea, (2) pilot testing to validate and improve that insight, and (3) the decision to develop a product and deploy it in the field.Footnote 32 The last step is critical. It is not enough to conceive an idea. Innovation requires that it is developed into a working product.Footnote 33
Government agencies innovate to achieve their policy goals.Footnote 34 Consistent with others, we define an innovation's effects as whether the final product advances national goals such as military effectiveness, intelligence collection, and security and prosperity.Footnote 35 Many innovations have positive effects (that is, move the organization toward its goals). Others have no effect. And still others have negative effects (unintended consequences).Footnote 36 This could include escalating conflict, weakening defenses, or facilitating local rebellion.Footnote 37 While researchers hold expectations, they cannot be certain of the true outcome.
We distinguish between the effects of innovation and the costs of moving an idea through development phases.Footnote 38 Some development costs stem from the financial burden of research trials and prototype construction. But public-sector institutions are especially sensitive to political costs.Footnote 39 These can manifest at different stages of innovation. During pilot research, political costs can come from wasteful spending or from human subjects research without consent. During the deployment phase, they can follow from labor abuses during production or escalation with rivals.
Of course, not all research activates political costs.Footnote 40 But in many cases, national security employees do face costs, including from organizational cultures that perceive radical ideas as reckless.Footnote 41 Since they are dealing with public funds, risky spending (which is promoted in private firms) is often viewed as a violation of the federal code of conduct under the waste, fraud, and abuse standard. Many ideas also raise the risk of tragic accidents in which soldiers die or test satellites crash into foreign territory. When this happens, investigators scrutinize those who plan and approve these programs looking for mistakes. These anticipated costs can be large enough that researchers do not voice their ideas in the first place. This helps explain why militaries may fail to pursue novel ideas even though the problems are important and their budgets are large.
Later, these contextualizing details will help us interpret our theoretical findings. But in the end, our model is abstract. We assume only that different public-sector employees participate in the research process and derive benefits (positive or negative) depending on the effects of innovation. They also incur research and development costs as ideas go through the innovation process. The scope of these costs depends on how responsible they are for advancing an idea, and on their personal sensitivities.
Internal Secrecy
While democracies promote scrutiny of government agencies, national security agencies enjoy a special status. Specifically, they are allowed to keep secrets owing to fears of foreign threats.Footnote 42 We refer to this phenomenon as external secrecy. But to sustain external secrecy, they also often practice internal secrecy. That is, certain individuals and groups are exempt or discouraged from sharing information with colleagues, those who oversee them, or even their own superiors.Footnote 43
Internal secrecy is necessary to sustain external secrecy for two reasons. First, foreign threats may infiltrate national security agencies. Foreign penetration can be stemmed by limiting who knows important facts. Second, whistleblowers and leakers have historically revealed large document corpuses without fully understanding the potential for national security harm. By restricting access, agencies limit what they release publicly.Footnote 44
Some aspects of internal secrecy are institutionalized. In the United States, for example, program officers must alert contract officers to purchases, who then openly tender contracts. But “full and open competition need not be provided for when the disclosure of the agency's needs would compromise the national security.”Footnote 45 Other aspects of internal secrecy follow from a culture of need-to-know. Because of the “sensitive nature of their work, intelligence organizations have been reluctant to engage in bidirectional dialogue with decision makers and the larger public.”Footnote 46
Both monitoring and evaluation, and budget oversight, are mandatory for most public-sector agencies. But secretive intelligence agencies and certain parts of the military have access to unvouchered funds they can spend on research without explaining what it is for.Footnote 47 According to a senior Government Accountability Office official, “we have no access to certain CIA ‘unvouchered’ accounts and cannot compel our access to foreign intelligence and counterintelligence information … We have not actively audited the CIA since the early 1960s.”Footnote 48 A former CIA historian notes that “scrutiny of the [intelligence] budget ranged between ‘cursory and nonexistent’.”Footnote 49
One consequence of internal secrecy is that managers are partly forgiven for their ignorance when subordinates do things the manager did not expect.Footnote 50 When a scandal erupts in an open government organization, a manager cannot easily say they did not know what their staff was doing, because the public expects them to monitor employees. But national security employees are expected to maintain secrecy to guard against leaks and counterintelligence threats. This helps excuse managers who do not intrusively monitor their staff to learn about questionable choices. During the Iran-Contra affair, for instance, President Reagan avoided some of the worst costs by claiming that subordinates engineered the scheme without his knowledge.Footnote 51
We are interested in how internal secrecy affects innovation in national security agencies. In short, secrecy is most salient during the early phases of innovation: periods where researchers develop prototypes or run laboratory tests and simulations without a manager or compliance officer knowing about it. Secondary accounts of DARPA program managers "start[ing], continu[ing], or stop[ping] research projects with little outside intervention" are a prime example.Footnote 52 As projects progress, even secretive agencies may exploit open research practices to refine ideas by sharing information broadly across the national security community. But in the absence of small teams pursuing initial testing in relative secrecy, many innovations may never make it that far.
To be clear, there are other parts of government with some internal secrecy. For example, in parliamentary democracies, cabinet documents are sealed for decades so that elected leaders can brainstorm policy innovations.Footnote 53 But this secrecy is confined to top-level policy discussion and does not cover the design and testing of products. Pilot studies and focus group research to support policies formulated by the cabinet are not privileged.
In practice, actors can exploit secrecy at different levels of a secret organization. To keep things simple, we detail a two-level institution that involves one decision maker and one researcher. However, in many historical examples we see variation in who knows devilish details and who does not. At one extreme, a handful of scientists know the controversial research activities but even their immediate superiors are unaware. At the other extreme, the executive is fully aware of the devilish details but legislators are not. In the middle, directors of intelligence agencies may know exactly what their subordinates are doing but not inform the executive.Footnote 54 If we add layers of management to the institution, our basic predictions still bear out so long as there is secrecy at some level of the organizational hierarchy. There must be at least one partition between insiders, who can pursue research without explaining their practices outside the group and who share the costs of authorization if things go wrong, and outsiders, who can escape some costs by remaining ignorant about what subordinates are up to but cannot stop programs for long.
Model
Our analysis plan is as follows. First, we set up a basic institution. Second, we formally define secret innovation and contrast the process of innovation in secret versus open organizations to explain the core mechanism driving secret innovation. Third, we use comparative statics to explore the innovations uniquely pursued in secret organizations. Fourth, we introduce two distinct information, agency, and monitoring problems to flesh out the mechanism and connect the model to the principal–agent literature. Finally, we consider the rationale for allowing secrecy given that it can lead to perverse outcomes.
Setup
We study an institution that employs two actors: a researcher (R, she) and a manager (D for decider, he). Figure 1 visualizes the game tree and payoffs. The dashed box is the subgame in which R exploits secrecy. In it, she can conduct pilot research without her manager knowing. Later, we contrast secret and open institutions. Open institutions remove the secret subgame but are otherwise identical.
We model the true effect of unleashing a new innovation on the world as π ∈ ℝ. When π is positive (negative), it means the innovation ultimately moves the institution closer to (further from) its goals. Of course, actors cannot anticipate all the consequences of unleashing new devices ex ante. Thus, D's choice to innovate is based on an expectation of the consequences. Define p(π) as the density function over the effect of introducing the innovation. We assume both players know the density function p(), but not the true realization of π.
Along the way to innovation, actors can authorize pilot research, which has two effects. First, it improves the value of innovation by θ ≥ 0. Second, pilot research helps discover the true effect if innovation happens. We model this as a normally distributed signal m ~ N(π, σ) tied to the true consequences of innovation (π).Footnote 55
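The model leaves the prior p() general. As an illustration, if we assume a normal prior π ~ N(e_0, σ_0²)—a distributional choice of ours, not the model's—the posterior mean 𝔼[π|m] after observing a pilot signal m ~ N(π, σ) has a simple closed form: a precision-weighted average of the prior mean and the signal. A minimal sketch:

```python
# Posterior mean E[pi | m] when pi ~ N(e0, s0^2) (assumed prior) and the
# pilot signal is m ~ N(pi, s^2): a precision-weighted average.
def posterior_mean(m, e0, s0, s):
    w = s0**2 / (s0**2 + s**2)   # weight placed on the pilot signal
    return w * m + (1 - w) * e0

# A precise pilot moves beliefs a lot; a noisy one barely moves them.
print(posterior_mean(m=2.0, e0=0.0, s0=1.0, s=1.0))  # 1.0
print(posterior_mean(m=2.0, e0=0.0, s0=1.0, s=3.0))  # 0.2
```

The weight w is larger when there is little preexisting research (large σ_0), which matters later for why novel ideas make pilot evidence so persuasive.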
Actors pay political costs for participating in a controversial research process. We assume players pay one cost, k_i, i ∈ {R, D}, if the institution engages in pilot research.Footnote 56 They pay a second cost, c_i, if the project is deployed in the field. We assume that actors incur costs based on how responsible they are during the decision-making process. The total amount of cost to be apportioned is 1 + x. We distribute 1 unit of cost to the actor who chooses to take the costly action (conduct research, authorize innovation), and a smaller portion x ∈ (0, 1) to the other actor, who works at the same institution but did not directly take the costly action.Footnote 57
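As a concrete reading of the apportionment rule (the numeric values are illustrative, not from the model): the actor who takes the costly action bears a full unit of their own sensitivity, and the bystander bears the x fraction of theirs.

```python
# Apportion a political cost: the actor who takes the costly action bears
# a unit share of their sensitivity; the other actor bears the x share.
def cost_shares(acting_sensitivity, other_sensitivity, x):
    assert 0 < x < 1
    return acting_sensitivity * 1.0, other_sensitivity * x

# If R conducts pilot research with k_R = k_D = 4 and x = 0.25,
# R pays 4.0 and D pays 1.0.
r_pays, d_pays = cost_shares(4.0, 4.0, 0.25)
print(r_pays, d_pays)  # 4.0 1.0
```

This asymmetry—full cost for the actor, a fraction for the colleague—is what the cost-passing mechanism below exploits.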
Analysis: Secret Innovation and the Cost-Passing Mechanism
We solve for subgame perfect equilibria in the main model and extensions unless otherwise stated. We define secret innovation as follows.
Definition: Suppose a fixed set of parameter values under which, in the open institution, innovation cannot occur with positive probability on the path of play in any equilibrium. Then secret research facilitates innovation if innovation occurs with positive probability in some equilibrium of the secret institution with the same parameter values.
This definition highlights the counterfactual nature of our claim. Open institutions can innovate, but there are some ideas that only secret institutions will pursue.
Our first task is to identify the ideas open institutions will not pursue. There are two potential pathways. First, D can innovate absent research. Define e_0 as the actors' prior expected value of π. Second, D can research and then decide to innovate if the research shows sufficient promise. We define two expectations at the moment D must authorize research (or not). Define λ = pr(𝔼[π|m] > c_D − θ). Informally, this is D's pre-research belief that if research is conducted, he will observe a signal m that will lead to a posterior belief that the project is sufficiently likely to have benefits that outweigh the costs (𝔼[π|m] > c_D − θ). That is, it is D's pre-research belief that D will innovate after observing research. Define e_1 = 𝔼[𝔼[π|m] | 𝔼[π|m] > c_D − θ]. Informally, this is D's pre-research expected value of π, given that D will observe an m sufficiently large that D is willing to approve innovation. Appendix A.1 gives more technical information on these expectations.
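Both λ and e_1 are easy to approximate by simulation. A Monte Carlo sketch, again under an assumed normal prior N(e_0, σ_0²) for p() (the distributional form and all parameter values are our illustrative assumptions):

```python
import random

def research_expectations(e0, s0, s, cD, theta, n=200_000, seed=1):
    """Approximate lambda = Pr(E[pi|m] > cD - theta) and
    e1 = E[E[pi|m] | E[pi|m] > cD - theta] by Monte Carlo,
    assuming a normal prior pi ~ N(e0, s0^2) and signal m ~ N(pi, s^2)."""
    rng = random.Random(seed)
    w = s0**2 / (s0**2 + s**2)       # weight on the pilot signal
    cleared = []
    for _ in range(n):
        pi = rng.gauss(e0, s0)       # draw a true effect
        m = rng.gauss(pi, s)         # draw a pilot signal around it
        post = w * m + (1 - w) * e0  # D's posterior expectation E[pi|m]
        if post > cD - theta:
            cleared.append(post)
    lam = len(cleared) / n
    e1 = sum(cleared) / len(cleared) if cleared else float("nan")
    return lam, e1

lam, e1 = research_expectations(e0=0.0, s0=2.0, s=1.0, cD=1.0, theta=0.5)
print(round(lam, 2), round(e1, 2))
```

With these illustrative values, roughly 40 percent of pilot signals clear D's bar, and conditional on clearing it the expected effect sits well above c_D − θ.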
Lemma 1: Neither research nor innovation can happen in the open institution if

e_0 ≤ c_D (condition 1)

and

λ(e_1 + θ − c_D) ≤ k_D (condition 2)

are satisfied. In every subgame perfect equilibrium, player utilities are U_R = U_D = 0.
See Appendix A.2. If condition 2 were violated, D would conduct research to determine whether the project is viable. But two factors drive D to reject a request for pilot research. First, research involves political costs (k).Footnote 58 Second, at the point where D is asked to authorize controversial research, his expectation about that research is inextricably connected to his prior belief. When preexisting scientific research suggests the project is not promising, D expects future research to, on average, confirm that expectation.
We now turn to the secret institution. Since we are interested in the cases where secrecy facilitates innovation, we focus on the conditions where innovation cannot happen in the open institution.
Proposition 1: Secrecy facilitates innovation if conditions 1 and 2 and

k_R < λ(e_1 + θ − c_R x) (condition 3)

are satisfied. If they are, then in every subgame perfect equilibrium, R exploits secrecy to conduct pilot research, and D authorizes the project if and only if that research provides evidence the program will work. Off the path, if R attempts to pursue open research, D denies R's request and innovation does not happen.
See Appendix A.3. The result describes a condition where the researcher is willing to exploit secrecy to conduct research (condition 3 is satisfied), but her manager was unwilling to approve open research (condition 2 is satisfied). If research provides evidence the project is viable (m suggests π is higher than originally thought), the manager will approve the project, leading to innovation.
Notice that we can achieve secret research even if the manager's and the researcher's cost parameters are identical: k_R = k_D, c_R = c_D. This is surprising given what we know about principal–agent problems. In standard accounts, researchers exploit secrecy only when their preferences diverge from the manager's. Why is a researcher with the same incentives as the manager willing to conduct research when her manager is not? The answer comes down to cost passing. Secrecy gives the researcher discretion to conduct pilot research to try to convince the manager, who is unwilling to pay the research costs, to approve the project if it shows promise. If her secret pilot research shows promise (m is large), she can take the results to her manager for approval. Thus, the researcher is willing to assume the up-front cost and risk of research because she can convince her manager to bear the brunt of deployment costs.
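The cost-passing wedge can be made concrete using the two thresholds the monitoring section defines, λ[e_1 + θ − c_R x] for R and λ[e_1 + θ − c_D] for D. A minimal numeric sketch (all parameter values are illustrative, and we set c_R = c_D = c to mirror the identical-sensitivities case):

```python
# Thresholds from the monitoring section: D authorizes open research when
# the shared cost k is below k_lower; R will research in secret when k is
# below k_bar. Even with identical deployment sensitivities (c_R = c_D = c),
# a gap opens because R expects to pay only the x-share of deployment costs,
# while D, who authorizes deployment, pays the full unit.
lam, e1, theta, c, x = 0.4, 1.8, 0.5, 1.0, 0.25   # illustrative values
k_bar = lam * (e1 + theta - c * x)    # R's secret-research threshold
k_lower = lam * (e1 + theta - c)      # D's open-research threshold
print(round(k_lower, 2), round(k_bar, 2))  # 0.52 0.82

for k in (0.3, 0.7, 0.9):
    if k < k_lower:
        regime = "open research"
    elif k < k_bar:
        regime = "secret research only"
    else:
        regime = "no research"
    print(k, regime)
```

Any shared research cost strictly between the two thresholds (here, k = 0.7) is one the manager refuses to authorize but the researcher will pay in secret.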
Predictions About Ideas: Secrecy Drives Innovation When Initial Ideas Are High-Risk, High-Reward
What are the kinds of initial ideas researchers need secrecy to pursue? Using a comparative static analysis, we expose two ideal-type pathways to secret innovation that are made possible because the manager and researcher weigh certain trade-offs differently. We provide technical support for these pathways in Appendix A.4. We visualize the results in Table 1. These pathways can interact. However, the basic trade-offs we identify are always present. Thus, it is valuable to consider them as distinct.
Notes: Rows represent different initial expectations about an innovation's effects, p(π). Superscripts 1 and 2 identify innovation pathways. Pathway 1 treats row 1 as the baseline and raises the variance, σ. Pathway 2 considers a shift from low to high k_D across columns.
The first pathway turns on the actors' initial expectations about whether an idea will provide a benefit (p(π)). In real life, a researcher uses publicly available research on related problems to predict what will happen if her idea is developed. Column 1 of Table 1 plots the initial expected consequences of four different concepts institutions could pursue. Row 1 is the baseline. The other three panels represent different ways initial beliefs can vary.
First, they vary in their average expected effects, e_0. As e_0 increases (row 2), the institution initially sees the idea as increasingly likely to yield a net benefit if it is developed and deployed in the field. The second way initial ideas vary is in the standard deviation of p(), which we notate σ. Substantively, a high standard deviation could represent two things. At the individual level (row 3), it represents an idea that is so novel there is little else to compare it to. In these cases, researchers do not know what to expect but accept that unleashing the idea on the world could have many unanticipated consequences. At the group level (row 4), σ represents disagreement about the potential consequences of innovation. The debate surrounding autonomous weapons is instructive. Proponents emphasize greater speed and stealth with fewer casualties. Critics point out that they might create greater instability and more crises.Footnote 59 Before these systems are deployed, it is hard to know whether they will provide benefit or cause harm.
The following expectation summarizes one pathway to research under the assumption that k_D is low (column 3).
Pathway 1: Deep uncertainty. If the political cost associated with research is low, then secret research facilitates innovation if
• R is unsure whether the innovation will yield benefits or costs once deployed (e_0 ≈ 0); if she were confident that it would yield benefits (e_0 ≫ 0), she would pursue open research; and
• the improvement value (θ) is not too large, and there is little preexisting scientific research, so the researcher is not confident in her initial expectation (σ is high). If she were more confident that she understood the idea's effects (σ were lower), she would scrap the idea.
Why does secrecy facilitate research when researchers are deeply uncertain about the project's effects? The logic relies on two steps. In Lemma 1 we showed that the manager pursues research only if his expected benefits from success are sufficiently high. Deep uncertainty means that an idea could generate large positive, or negative, effects. When D weighs these different outcomes, his expectation of benefits is near zero. This is what we observe in rows (a), (c), and (d) of Table 1. Of course, D could use research to learn more about whether the idea is viable. However, research is costly, and D's expectation that pilot research will show promise is tied to his initial expectation of the innovation's effects (approximately zero).
In Proposition 1 we showed that the researcher is also sensitive to expected benefits but is willing to pursue research under more conditions because she can distribute the costs. As a result, when the costs and expected benefits are both low, the researcher is willing to pursue secret research so long as she believes her research will convince the manager to approve her idea—that is, when σ is high.
There are two reasons for this. First, when there is little preexisting research, the researcher's pilot research carries a larger weight in the manager's overall expectation of success. Second, when projects are likely to have either extreme positive or negative consequences, pilot testing indicates which direction the program will go. If results are positive, D is confident the project will have major benefits and can accrue those by authorizing the project.
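The comparative static behind Pathway 1 can be checked numerically: holding e_0 = 0, the expected surplus from research—the chance the pilot clears the bar times the expected margin by which it clears—grows with the prior spread. A sketch under the same assumed normal prior (all values illustrative):

```python
import random

def expected_gain(e0, s0, s, cD, theta, n=200_000, seed=7):
    """lambda * E[posterior surplus | pilot clears the bar]: the expected
    gross gain from pilot research, under an assumed normal prior
    pi ~ N(e0, s0^2) and signal m ~ N(pi, s^2)."""
    rng = random.Random(seed)
    w = s0**2 / (s0**2 + s**2)
    bar = cD - theta
    total = 0.0
    for _ in range(n):
        pi = rng.gauss(e0, s0)       # true effect
        m = rng.gauss(pi, s)         # pilot signal
        post = w * m + (1 - w) * e0  # posterior expectation
        if post > bar:
            total += post - bar
    return total / n

# With e0 fixed at zero, deeper uncertainty (larger s0) raises the gain.
for s0 in (0.5, 1.0, 2.0, 4.0):
    print(s0, round(expected_gain(0.0, s0, 1.0, 1.0, 0.5), 3))
```

The monotone increase reflects both forces in the text: a high-σ prior gives the pilot more weight, and extreme possible outcomes make a favorable signal worth more.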
The second pathway to secret innovation relies on a trade-off between the political costs of research (k_D) and the expected consequences of deploying a new innovation (e_0). Substantively, k_D captures how sensitive the manager (and the institution at large) is to the expected moral and political costs associated with research when they authorize it.
Pathway 2: High stakes. Secret research facilitates innovation when
• the expected benefit of innovation is high (e_0 is high); and
• the manager's sensitivity to research costs is also high (k_D is high), but either the researcher's sensitivity is lower (k_R ≪ k_D) or cost sharing is moderately calibrated to support Proposition 1.
If the manager's political costs of research and production were lower, we would observe open research.
The logic for the basic trade-off is simple. There are initial ideas that show enormous promise. However, the research required to pursue these ideas involves political costs. Secrecy facilitates innovation when the manager is unwilling to bear the large costs of research and the costs of approval on his own. But once the research is complete, the manager will happily approve the project. In cases like this, the researcher may bear the unit share of research costs knowing that the manager will bear the approval costs once research is complete.
Predictions About Patterns of Innovation: Secret Institutions Generate Important Innovations that Open Institutions Do Not
In terms of aggregate patterns of innovation, what are the features of research projects and innovations we expect from secret versus open institutions? We find that secrecy allows organizations to consider ideas that seem bizarre, morally repugnant, or likely to fail when first conceived.Footnote 60 This leads to a straightforward expectation.
Expectation 1: A larger proportion of ideas are rejected after secret research than after open research.
We might intuit from this that secret innovation damages a nation's security in the aggregate. However, secret organizations are willing to pursue these ideas only because the potential upside is high. The initial idea must have a large enough chance of making a positive impact for a researcher to pursue it. If research confirms that the idea is harmful, the institution scraps it early on. In the rare cases when research suggests that an idea will provide benefits, these ideas are converted into innovations that change the world. This leads to a second prediction:
Expectation 2: Secret research leads to radical innovation. Consider two comparable cases: a baseline case where R pursues open research because e_0 and e_1 are sufficiently large, and a counterfactual case where R pursues secret research because the counterfactual values e_0^α and e_1^α are smaller. Then, so long as the true effect of innovation (π) is large, increasing the true effect of innovation further increases the chance of innovation in the counterfactual case more than it does in the baseline case.
There are two parts to this reasoning. First, in cases where managers approve open research, they are basically sold on the concept. Thus, even if the pilot shows only moderate success, they will approve innovation. By contrast, in cases where researchers opt for secrecy, the manager starts out skeptical. Thus, the result of the pilot tests must be very strong to convince the manager to approve innovation. Second, the pilot test is correlated with the true effect. Thus, increasing the true effect has a greater impact on whether the pilot's result will induce secret research. This has an interesting empirical analog. For every handful of bizarre and shameful failed projects, such as bionic cat robots or nuclear-induced tsunamis,Footnote 61 secret institutions provide a radical success—the reconnaissance satellite, for example. With foresight, these innovations all sounded risky. With hindsight, some are radical innovations that shaped the industrial and digital revolution and medical sciences.
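A small simulation illustrates this reasoning: when the manager's effective approval bar is higher—the secret route, where he starts out skeptical—the probability of innovation is more responsive to increases in the true effect π. The bar values and prior parameters are our illustrative assumptions:

```python
import random

def p_innovate(pi, e0, s0, s, bar, n=100_000, seed=3):
    """Pr(the pilot pushes D's posterior past his bar | true effect pi),
    under an assumed normal prior pi ~ N(e0, s0^2) used for the weights."""
    rng = random.Random(seed)
    w = s0**2 / (s0**2 + s**2)
    wins = 0
    for _ in range(n):
        m = rng.gauss(pi, s)                 # pilot signal around the true effect
        if w * m + (1 - w) * e0 > bar:       # posterior clears D's bar
            wins += 1
    return wins / n

# Raising the true effect from 2.5 to 3.5 barely moves the probability when
# the bar is low (open route) but moves it a lot when the bar is high
# (secret route).
for bar in (0.5, 3.0):
    gain = p_innovate(3.5, 0.0, 2.0, 1.0, bar) - p_innovate(2.5, 0.0, 2.0, 1.0, bar)
    print(bar, round(gain, 3))
```

Because the pilot signal is correlated with π, a higher bar means the marginal improvement in the true effect does more work in determining whether innovation occurs.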
Connecting the Mechanism to the Principal–Agent Literature
The basic model identified how secret innovation allowed actors to distribute costs at different stages of the innovation process so they could pursue a wider range of novel ideas. However, the model did not fully explore the perverse incentives that arise given uncertainty and principal–agent situations. We now introduce these into the model. We show that our basic logic survives, and we derive additional implications about how researchers and managers collaborate to exploit secrecy in national security institutions.
Monitoring
We assumed that if researchers exploit secrecy, the manager is forced to take on a cost k_D x when the program comes to light. In practice, managers can monitor subordinates' activities by asking for project details. Given that the researcher's actions can impose costs on the manager, why doesn't the manager monitor her activities?
This is at the heart of the principal–agent literature. Managers want to stop subordinates from taking actions they would not approve of. In this literature, there is agency loss because monitoring is difficult and expensive. However, if monitoring were costless and perfect, D would always monitor, and R would always behave.Footnote 62 This concern is relevant for our theory because R uses secrecy only because D will not approve.
We adjust the baseline model to capture monitoring as it is commonly studied in the principal–agent framework. First, we introduce uncertainty over research costs. We start with a simplifying assumption, $k = k_R = k_D$. We then add a step at the beginning of the game where Nature selects the cost associated with research, $k \sim f(\cdot)$, where $f(\cdot)$ is supported on the nonnegative real numbers. Second, we assume that if the manager does not observe open research he has the opportunity to monitor the researcher's activities. If the manager chooses to monitor and discovers that the researcher started a secret research program, he has two options: to allow it to continue or to shut it down. If he allows it to continue, the game reverts to open research (and associated payoffs). If he shuts it down, he avoids research costs entirely, the researcher incurs costs $k_R$, and the research has no effect (θ and m are not realized).
We explicitly assume that D pays no cost to monitor, and that if he does monitor, he perfectly observes R's behavior. Indeed, this is the exact condition the principal–agent literature suggests should drive complete monitoring. Define $\bar{k} = \lambda[e_1 + \theta - c_R x]$ and $\underline{k} = \lambda[e_1 + \theta - c_D]$. Assume $0 < \underline{k} < \bar{k}$. Further define $\mathbb{E}[k \mid sr, nor]$: the expected cost k that D will incur if he fails to monitor, at the moment he must decide whether to monitor, given his expectation that secret research (sr) has happened and that he did not observe research (nor).
Proposition 2: Don't-ask-don't-tell equilibrium. Suppose conditions 1, 2, and 3 can be satisfied for some $k = k_R = k_D$. Then in the model where D can perfectly monitor R, if
then the following pure strategies are a perfect Bayesian equilibrium.
• D does not monitor if research is unobserved. D approves open research if $k < \underline{k}$. Regardless of how research occurs, D approves post-research innovation if $\mathbb{E}[\pi \mid m] > c_D - \theta$. Off path, if D decides to monitor, D shuts down research with a cost profile $k \ge \underline{k}$ and then does not approve innovation. Also off path, D rejects innovation absent research.
• R scraps the project if $k > \bar{k}$; conducts open research if $k < \underline{k}$; and conducts secret research otherwise.
Secrecy facilitates innovation if $k\in [ \underline{k} , \;\bar{k}] $.
See Appendix A.6. This result is surprising. After all, the only reason the researcher does not ask for permission is that she knows the manager will not approve. Thus, when the manager observes the researcher hiding her activities, he should suspect something bad is happening and engage in monitoring. From the researcher's perspective, this is indeed what is going on: she is exploiting secrecy because she knows her manager will not approve her controversial research program. And yet, the manager elects not to monitor. Why? The logic follows a don't-ask-don't-tell dynamic made possible by cost passing. The manager knows that if he monitors he will learn the devilish details of what is happening and be forced to shut down the project, rendering a payoff of zero. However, if he does not monitor, he can reduce his costs through plausible deniability.
In this equilibrium, there are research protocols that are so controversial the manager does worse by allowing research to continue even though he incurs only a share x of the cost. Despite this extreme preference asymmetry, the equilibrium holds because the manager expects the researcher's protocol is too controversial to approve but not so controversial that the manager does not want the researcher to pursue it in secret. This has the following empirical implications.
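The cutoff structure of this equilibrium can be transcribed directly. The thresholds below are those defined above ($\bar{k} = \lambda[e_1 + \theta - c_R x]$ and $\underline{k} = \lambda[e_1 + \theta - c_D]$); the parameter values are illustrative only and are not calibrated to anything in the paper.

```python
def thresholds(lam, e1, theta, c_R, c_D, x):
    """Cutoffs from Proposition 2: k_bar = lam*(e1 + theta - c_R*x),
    k_under = lam*(e1 + theta - c_D)."""
    k_bar = lam * (e1 + theta - c_R * x)
    k_under = lam * (e1 + theta - c_D)
    assert 0 < k_under < k_bar, "parameters must satisfy 0 < k_under < k_bar"
    return k_under, k_bar

def researcher_action(k, k_under, k_bar):
    # R's equilibrium strategy: open research below the manager's approval
    # cutoff, secret research in the intermediate band, scrap above it.
    if k < k_under:
        return "open"
    if k <= k_bar:
        return "secret"
    return "scrap"

# Illustrative parameters only.
k_under, k_bar = thresholds(lam=0.5, e1=4.0, theta=1.0, c_R=2.0, c_D=3.0, x=0.5)
print(researcher_action(1.5, k_under, k_bar))  # secret: the band where secrecy matters
```

The intermediate band $[\underline{k}, \bar{k}]$ is exactly the set of cost draws where secrecy facilitates innovation that open research would not deliver.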
Expectation 3: Don't-ask-don't-tell. When managers are alerted that a researcher is secretly researching and does not want to share the details, they elect not to monitor because they suspect the program is controversial. Managers allow secret research to progress so they can retain plausible deniability.
Expectation 4: Telling implies shutdown. If managers observe controversial details of a research program that a researcher secretly pursued, they shut down the parts of the program they observe.
Trust When the Researcher Can Fabricate Her Report
The preceding analysis emphasized that secrecy has positive effects because it provides researchers with autonomy; managers, cover from political costs; and both actors, the capacity to distribute costs between them. In practice, secrecy also creates opportunities for the researcher to fabricate reports or cherry-pick results. In theory, this possibility could cause the entire secret research program to unravel. Secret research works only if the manager can trust the researcher's description of pilot results.
We adjust the baseline model to understand whether the manager can assign a researcher to a project who will pursue controversial pilot research if it is necessary, and credibly reveal the results of that pilot. First, we assume that if research is conducted in secret, only the researcher observes m. Second, we assume the researcher can write any (costless) report she likes: $m_R \in \mathbb{R}$.Footnote 63 When research happens in secret, the manager observes only the report $m_R$. We say the research report is honest if $m_R = m$ and dishonest otherwise. Third, we allow D to set the researcher's cost profile ($c_R$, $k_R$), which represents a manager's ability to assign projects to staff. In short, we want to know whether managers can find a researcher (1) who is willing to conduct secret research; (2) who is willing to write an honest report no matter the outcome of her pilot; and (3) who the manager will believe. Finally, we want to know whether the manager would like to employ a researcher who pursues secret research.
Lemma 2: If conditions 1–3 and
are satisfied, then the manager employs a researcher who is honest, trustworthy, and willing to conduct secret research.
See Appendix A.7 for a technical statement of lemma 2 and proof. Lemma 2 explains that it is possible to find a researcher who can facilitate secret innovation. But what does this researcher look like? We put the answer in terms of expectations.
Expectation 5: Secret research works only if the institution employs unscrupulous patriots. The researcher who takes on a secret research program and will report her results credibly and honestly must be
• insensitive to the political and moral issues associated with research ($k_R \to 0$), but
• highly sensitive to the foreign policy costs associated with deploying a project ($c_R = c_D/x$).
The first bullet point summarizes the condition under which the researcher is willing to pay the cost to conduct controversial pilot research even if the manager is not. The second summarizes what it takes for the researcher to honestly report pilot research. To be clear, the condition on $c_R$ for complete revelation is a strict equality that aligns R's and D's preferences at the point where D must decide whether to approve innovation. However, we can still support the credible revelation of information (honest in some cases, dishonest in others) given some cost asymmetry. For example, there are cases where R is less sensitive to the costs of deployment ($c_R < c_D/x$) in which D is still persuaded by R's research report and innovates if R's report is positive. In this case, it is possible that R wants to innovate following pilot research, but D would not innovate if he knew the truth. In these cases, R fabricates the report. Had R sent an honest report, D would have rejected it. D is aware of this risk but trusts R anyway, because the pilot results that generate incentives for dishonesty are unlikely relative to the pilot results where both actors would proceed.
In short, D will trust R even if their preferences are not perfectly aligned because R is sufficiently sensitive to the foreign policy costs associated with innovation that R does not want projects approved that are likely to fail in most cases. This result has a secondary implication about how a researcher who has selected into secret research will behave following the outcome of pilot research.
Expectation 6: Suppose a researcher is willing to take on a secret research project. Then, if pilot research suggests that a project will fail, the researcher will terminate the research and argue against developing the project.
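The alignment logic behind Expectations 5 and 6 can be sketched in a few lines. This is our stylization, not the paper's formal statement: D approves innovation iff 𝔼[π|m] > c_D − θ (as in Proposition 2), and we assume R's effective deployment cost is c_R·x, so R wants innovation iff 𝔼[π|m] > c_R·x − θ. R fabricates exactly when she wants a project approved that D would reject; at c_R = c_D/x the two cutoffs coincide and reports are always honest.

```python
def report_behavior(post_mean, c_R, c_D, x, theta=0.0):
    # D approves innovation iff E[pi | m] > c_D - theta.
    # Stylized assumption: R bears effective deployment cost c_R * x,
    # so R wants innovation iff E[pi | m] > c_R * x - theta.
    d_wants = post_mean > c_D - theta
    r_wants = post_mean > c_R * x - theta
    return "fabricate" if (r_wants and not d_wants) else "honest"

# With c_R < c_D / x there is a band of pilot results where R fabricates...
print(report_behavior(1.8, c_R=3.0, c_D=2.0, x=0.5))  # fabricate
# ...but at c_R = c_D / x the cutoffs coincide and R is always honest.
print(report_behavior(1.8, c_R=4.0, c_D=2.0, x=0.5))  # honest
```

The fabrication band is the interval (c_R·x − θ, c_D − θ]; the closer c_R is to c_D/x, the narrower it becomes, which is why D can trust an unscrupulous patriot.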
External Ambiguity and Calibrating Cost Passing
Because secrecy makes oversight hard, managers could sustain plausible deniability of the devilish details even if the researcher briefed them informally. This would facilitate oversight while limiting the manager's exposure to the increased costs that formal authorization carries should the controversial aspects ever come to light. However, even if managers learn informally and approve passively, they are still more exposed to costs in expectation than if they learned nothing. For example, if a controversial experiment comes to light, an investigator may piece together the manager's knowledge from unusually long meetings with the research team, coded messages, or depositions of subordinates. Thus, at the time the manager is informally briefed, the decision to approve research must still factor in the cost of professional disgrace and criminal liability from involvement ($k_D$), and the expectation of incurring these costs given the informal briefing (call it $x + z < 1$). Here, the expectation is lower than if the manager had written a memo authorizing the experiments but higher than if they were truly ignorant (x).
In Appendix A.8, we extend the model to account for these issues. We set up the model as a tough test for internal secrecy, because the researcher faces strong incentives to brief informally to pass on at least some costs, and we assert that if the researcher does so it does not meet our definition of internal secrecy.Footnote 64 And yet, we still find that researchers will exploit internal secrecy (that is, not brief the manager at all) rather than provide informal briefings when the underlying cost parameters ($k_i$, $c_i$) are high. What is more, we show that the option to brief informally raises the chance that research occurs beyond the baseline model. This illustrates how modeling other loose reporting requirements that internal secrecy facilitates expands the conditions under which innovation occurs.
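The exposure ordering in this extension, ignorance (share x), informal briefing (share x + z), written authorization (the full cost), can be written out directly. This is a sketch of the ordering only; the numbers are illustrative and not drawn from the appendix.

```python
def expected_exposure(k_D, x, z=0.0, formal=False):
    # Share of the political cost k_D the manager expects to bear:
    # x if truly ignorant, x + z after an informal briefing (x + z < 1),
    # and the full cost if he signed a written authorization.
    share = 1.0 if formal else x + z
    assert 0.0 <= share <= 1.0
    return share * k_D

K_D = 8.0  # illustrative political cost of the controversial program
ignorant = expected_exposure(K_D, x=0.25)
briefed = expected_exposure(K_D, x=0.25, z=0.25)
authorized = expected_exposure(K_D, x=0.25, formal=True)
print(ignorant < briefed < authorized)  # True: briefing sits between the extremes
```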
Institutional Design
In theory, even the president, the most senior member of the executive, answers to Congress (and the public). If abuse is possible, why does Congress tolerate the institutional arrangement described here? Why doesn't Congress design institutions that hold executives accountable even when they do not learn the details? It is hard to address this question empirically because internal secrecy is an enduring feature of national security institutions. In the US context, the National Security Act of 1947 handed the executive and national security agencies enormous power to sustain internal secrecy.Footnote 65 And this legislative framework survived reform debates that followed intelligence failures and executive abuse.
One possibility is that reform is hard and institutions are sticky. But in Appendix A.9, we adapt the monitoring model to provide a strategic explanation for Congressional inaction. We introduce a higher-order principal (Congress) who first sets x ∈ [0, 1] (the level of internal secrecy), and then the game unfolds following the monitoring model analyzed in Proposition 2, given that x. Our setup closely reflects two features of Congress's abilities and incentives expressed in historical debates over reform. First, Congress's main power to influence national security employees is by passing ex ante, rather than scandal-specific, laws about appropriate conduct for all future cases of secret research. This includes when managers are supposed to monitor their subordinates, when subordinates must report their activities, and so on. Then national security employees are confronted with specific scenarios (for example, the decision to pursue a particular idea) knowing the laws that govern their actions. Second, Congress is aware that internal secrecy is necessary to sustain external secrecy. Others have shown that greater oversight, or even greater sharing within the national security community, runs the risk that foreign agents will learn about sensitive operations.Footnote 66 Thus Congress knows that the higher it sets x, the more likely it is US rivals will discover secrets and exploit them.
We focus on conditions where, as shown in Proposition 2, if x is sufficiently low Congress induces the researcher and manager to engage in the behaviors described in the don't-ask-don't-tell equilibrium. But if Congress sets x higher, they induce the researcher to never engage in secret research, and we observe only uncontroversial research the manager directly approves.
We identify two strategic explanations for why Congress would tolerate secrecy and the possibility of abuse (set x low). The first closely reflects the don't-ask-don't-tell mechanism. Congress also desires welfare-enhancing innovations, and knows innovation is less likely if x is high. When the costs or risk of abuse are low relative to the foreign policy stakes, Congress prefers to tolerate a risk of abuse for the same reason the manager prefers not to monitor. Second, when the trade-off between internal and external secrecy is severe, Congress prefers to tolerate the risk of abuse to prevent foreign agents from discovering secrets. This second mechanism potentially explains the unique amount of internal secrecy in national security agencies. For example, there is little cost of leaking innovations in education policy, because they will not be exploited by rivals. Thus Congress has no incentive to write laws maximizing internal secrecy. Concerns over national security leaks can cause Congress to tolerate the risk of abuse from internal secrecy in national security institutions. We show that radical innovation is a convenient byproduct.
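Congress's trade-off can be caricatured in a few lines. This is entirely our stylization (the formal treatment is in Appendix A.9): below some threshold x*, the don't-ask-don't-tell equilibrium operates, so Congress gets the innovation benefit but bears the abuse risk, while leak risk grows with x because greater internal sharing makes external leaks more likely.

```python
def congress_payoff(x, innovation_value, abuse_cost, leak_cost, x_star=0.5):
    # Below x_star, secret research occurs (innovation benefit plus abuse risk);
    # above it, research is shut down. Leak risk rises linearly in x.
    secret_research = x < x_star
    benefit = innovation_value if secret_research else 0.0
    abuse = abuse_cost if secret_research else 0.0
    return benefit - abuse - leak_cost * x

# When the foreign policy stakes dwarf the abuse risk, Congress prefers low x.
low = congress_payoff(0.1, innovation_value=10.0, abuse_cost=2.0, leak_cost=1.0)
high = congress_payoff(0.9, innovation_value=10.0, abuse_cost=2.0, leak_cost=1.0)
print(low > high)  # True
```

Both mechanisms in the text appear here: raising `innovation_value` or `leak_cost` pushes Congress toward tolerating secrecy, while raising `abuse_cost` pushes it toward oversight.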
Testing the Argument
We trace the logic of secret innovation in two cases: the search for mind control (MKULTRA) and the first reconnaissance satellite (CORONA). Table 2 summarizes the case parameters, which we substantiate in later sections. As a reminder, our theory identifies two pathways to secret research. MKULTRA fits our high-risk, high-reward pathway. Its moral repugnance generated enormous political costs during the research phase, but the promise of mind control was seen as very large. CORONA fits our lower-cost, high-variance pathway. The political costs of CORONA were smaller because they stemmed mainly from perceptions of wasteful spending. But so little was known about the atmosphere and satellite telemetry that researchers found it hard to predict its chance of success.
Mind Control
In the late 1940s and early 1950s, US policymakers became convinced that the Soviet Union and the People's Republic of China had mastered mind control.Footnote 67 According to Richard Helms, a longtime CIA official who would go on to become director of the agency, “There was deep concern over the issue of brainwashing … We felt that it was our responsibility not to lag behind the Russians or the Chinese in this field.”Footnote 68
Policymakers were hopeful they could unlock the mysteries for themselves.Footnote 69 They believed mind control was “of the utmost importance … [and] could mean the difference between the survival and extinction of the United States.”Footnote 70 A declassified memo from the early 1950s lists core aims: “A. Can accurate information be obtained from willing or unwilling individuals. B. Can Agency personnel … be conditioned to prevent any outside power from obtaining information from them by any known means? C. Can we obtain control of the future activities (physical and mental) of any given individual … ?”Footnote 71
In 1950, the CIA conducted some ad hoc experiments code-named BLUEBIRD and ARTICHOKE.Footnote 72 Even these initial projects were handled outside normal oversight channels. A memo to the CIA director stated: “In view of the extreme sensitivity of this project and its covert nature, it is deemed advisable to submit this project directly to you, rather than through the channel of the Projects Review Committee. Knowledge of this project should be restricted to the absolute minimum number of persons.”Footnote 73
Within a few years, CIA director Allen Dulles decided to “intensify and systematize” the CIA's efforts, and in April 1953 he authorized Sidney Gottlieb to establish MKULTRA.Footnote 74 Gottlieb was allowed to conduct experiments with virtually no oversight. Years of controversial experiments followed. Consistent with our assumptions, CIA managers granted this level of secrecy to researchers partly because of external threats. The Technical Services Division was awarded “exclusive control of the administration, records, and financial accountings of the program” owing to fear that “public disclosure of some aspects of MKULTRA activity could … stimulate offensive and defensive action in this field on the part of foreign intelligence services.”Footnote 75
While Dulles gave the research team broad authority to conduct experiments involving “chemical and biological materials capable of producing human behavioral and psychological changes,”Footnote 76 he and other managersFootnote 77 were not privy to the controversial details of how this research was performed.Footnote 78 Gottlieb secretly tested the effects of LSD on unwitting, nonvolunteer subjects. Under Operation Midnight Climax, sex workers lured unsuspecting American citizens to a safe house in San Francisco where CIA staff secretly dosed them with LSD and monitored them.Footnote 79 MKULTRA also involved experiments on prisoners overseas.Footnote 80 When the Church Committee reviewed MKULTRA years later, it was these research practices that caused them to conclude that “the nature of the tests, their scale, and the fact that they were continued for years after the danger of surreptitious administration of LSD to unwitting individuals was known, demonstrate a fundamental disregard for the value of human life.”Footnote 81
As we will see, this firewall between managers and researchers meant that the latter, who oversaw the experiments, were at greatest risk for potential criminal prosecution and professional disgrace. CIA managers, who were ignorant of the most controversial aspects of MKULTRA, suffered fewer costs.
In summary, several features of this case fit our high-stakes pathway for secret innovation. It involves two primary actors: the MKULTRA research team (with Gottlieb at the center), and CIA management (the most senior, for the majority of the time, was Dulles). At the outset, Dulles knew that if MKULTRA succeeded, it would generate large benefits ($e_0$ was high). However, he also knew the necessary research would be controversial ($c_i$ was high).Footnote 82 Starting from this position, three facts about this case match the choices our model predicts. First, the CIA hand-selected Gottlieb to oversee MKULTRA. Second, Gottlieb judged that highly controversial human subjects research was necessary for MKULTRA. He could have discussed these research plans with managers but chose to keep these details secret. Third, Dulles had several opportunities to learn what Gottlieb was up to but never asked.
Why Was Gottlieb Chosen?
Gottlieb was not an obvious pick to lead MKULTRA. Although he had experience in government laboratories as a chemist, he did not have an intelligence background. Why was an intelligence outsider selected to lead a high-stakes and intensely secret project? In the extension used to characterize Lemma 2, we argued that when researchers conduct scientific tests in secret, it is easy for them to give managers the mistaken impression that their novel idea is more effective than the research suggests. Anticipating this problem, the manager must carefully select an unscrupulous patriot: a researcher who is insensitive to whatever controversy it takes to complete a research program, but who shares the manager's desire to field only projects that will advance national interests.
This is exactly how CIA managers saw Gottlieb and others on the Technical Services staff. The agency needed “a character steely enough to direct experiments that might challenge the conscience of other scientists, and a willingness to ignore legal niceties in the service of national security.”Footnote 83 The problem in Dulles's view was that certain parts of the CIA “had shown no stomach for further work on humans.” As Thomas notes, however, “the Agency's Office of Technical Services Staff (TSS) had no such qualms … They would have no reservations about testing ideas on unsuspecting subjects, especially in such a vitally important and urgent area as brainwashing.”Footnote 84
Accounts of Gottlieb's personality in particular are telling. According to Kinzer, “Like many Americans of his generation, he had been shaped by the trauma of World War II [which] left him with a store of pent-up patriotic fervor. His focused energy fit well with the compulsive activism and ethical elasticity that shaped the officers of the early CIA.”Footnote 85 When he later testified before a Senate subcommittee about MKULTRA, Gottlieb used language we would expect from unscrupulous patriots: “I would like this committee to know that I considered all this work … to be extremely unpleasant, extremely difficult, extremely sensitive, but above all to be very urgent and important … There was a real possibility that potential enemies … possessed capabilities in this field that we knew nothing about, and the possession of those capabilities … combined with our own ignorance about it, seemed to us to pose a threat of the magnitude of national survival.”Footnote 86
Of course, Gottlieb had incentives to cast himself as patriotic during an inquiry into his conduct. However, his behavior in the final years of MKULTRA also fit this personality profile. We show the patriotic researcher pursues her project only because she believes the science is viable. If she learns her research will fail to advance national security interests, she will quit, even if no one is stopping her. Consistent with this logic, one reason key parts of MKULTRA ended after nearly a decade of experimentation was Gottlieb's realization that “on the scientific side, it has become very clear that these materials and techniques are too unpredictable in their effect on individual human beings, under specific circumstances, to be operationally useful.”Footnote 87 It would be odd for a researcher motivated by pride to publicly declare their work a failure.
Why Did Researchers Opt for Internal Secrecy?
If our theory is correct, Gottlieb and his team exploited internal secrecy because they knew CIA managers would refuse to let them continue the most controversial experiments if they figured out what they were up to. Unfortunately, Gottlieb never explicitly articulated why he kept the most controversial details of experiments from his managers. But the context surrounding his actions is consistent with our logic in three ways.
First, the experiments he was engaged in, particularly the parts having to do with surreptitious testing of unwitting subjects, were extraordinarily controversial. According to the inspector general's report of 1963, “Research in the manipulation of human behavior is considered by many authorities in medicine and related fields to be professionally unethical, therefore the reputations of professional participants in the MKULTRA program are on occasion in jeopardy.” It also states that “some MKULTRA activities raise questions of legality implicit in the original charter.”Footnote 88 A memo from the late 1950s titled “Influencing Human Behavior” similarly notes that “some of the activities are considered to be professionally unethical and in some instances border on the illegal.”Footnote 89 Because of this, “CIA officers felt it necessary to keep details of the project” extremely tightly guarded.Footnote 90
Second, several CIA managers later stated they would have stopped MKULTRA if they had known its full extent. Dulles was reportedly interested in trying “everything the Communists could have done” but knew that “the risks for him and the Agency were enormous. If it ever became known that the United States government had funded what would be unprecedented clinical trials—ones beyond all ethical acceptability—it would most certainly lead to the sudden end of his remarkable and brilliant career.”Footnote 91 This is likely why, as we will see in the next section, he was cut out of the loop of the precise details of MKULTRA. One senior CIA official who was “excluded from regular reviews of the project” was strongly opposed to MKULTRA—when he learned about it. According to one account, “it is possible that the project would have been terminated in 1957 if it had been called to his attention when he then served as Inspector General.”Footnote 92
Although less directly relevant given the timing, Stansfield Turner, who served as CIA director in the late 1970s, expressed similar reservations: “It is totally abhorrent to me to think of using a human being as a guinea pig … I am not here to pass judgment on my predecessors, but I can assure you that this is totally beyond the pale of my contemplation of activities that the CIA or any other of our intelligence agencies should undertake.”Footnote 93
A final piece of evidence that internal secrecy facilitated Gottlieb's experiments is that once Congress got wind of MKULTRA and asked to review the program files, Gottlieb destroyed them on "the verbal order of then DCI Helms" rather than handing them over.Footnote 94 This impeded subsequent investigations into what had transpired.Footnote 95 Gottlieb and Helms purportedly felt that the experiments "might be 'misunderstood'," leading them to order "that every scrap of paper relating to the brainwashing experiments be incinerated."Footnote 96
Managers Built the System So They Would Be in the Dark
Our theory suggests that managers will embrace ignorance because they know that if they do not investigate they will incur a small cost as an ignorant bystander, but they may accrue a large gain from a successful innovation. On the other hand, if they investigate, they are faced with the choice of incurring a large cost or shutting down the program altogether. Three case features support this logic.
First, Dulles minimized his exposure to MKULTRA's details from the outset.Footnote 97 When he initially authorized the project in 1953, the $300,000 he set aside was "not subject to financial controls," and researchers had "permission to launch research and conduct experiments at will."Footnote 98 Dulles's 1953 memo states that "the nature of the research and the security considerations involved preclude handling the projects by means of the usual contractual arrangements."Footnote 99 According to one account, "Dulles ordered the Agency's book-keepers to pay the costs blindly on the signatures of Sid Gottlieb and Willis Gibbons, a former US Rubber executive who headed TSS."Footnote 100 Helms, who was one of the few senior officials to have reasonable insight into MKULTRA, "avoid[ed] oversight even by the CIA's director, because he 'felt it necessary to keep the details of the project restricted to an absolute minimum number of people'."Footnote 101 Robert Lashbrook, one of the senior scientists alongside Gottlieb, purportedly stated at one point that "what was actually signed-off on was not the same as the actual proposal, or actual detailed project."Footnote 102
Second, CIA managers went to great lengths to avoid looking into MKULTRA. The most extreme example involved a civilian employee of the Army, Frank Olson, who was unwittingly given LSD and purportedly jumped out of a hotel window to his death in the weeks afterward. The internal investigation that followed accused the TSS of “fail[ing] to observe normal and reasonable precautions.” In response, Dulles wrote a letter to Gottlieb “criticizing him for ‘poor judgment … in authorizing the use of this drug on such an unwitting basis and without proximate medical safeguards’.”Footnote 103 Ultimately, however, these were not formal reprimands, had no effect on advancement, and did not lead to a termination of the experiments.Footnote 104 Surprisingly, but consistent with our theory, even after investigators uncovered wrongdoing in the narrow experiments related to Olson, they did not expand their audit to MKULTRA broadly. One senior CIA official cautioned that a formal reprimand “would hinder ‘the spirit of initiative and enthusiasm so necessary in our work’.”Footnote 105
Third, when MKULTRA was eventually made public, the costs were distributed in accordance with our theory. As the most senior scientist who knew the complete details, Gottlieb was hauled before Congress to testify. Years later, he was implicated in a variety of lawsuits by families of victims of MKULTRA. Most important for our purposes, “since Richard Helms was not alleged to have been directly involved in the drugging, he could not be prosecuted—but … the case against Gottlieb could proceed.”Footnote 106
Overhead Reconnaissance
Our second case examines the origins of the first US reconnaissance satellite, CORONA. We chose it for three reasons. First, it verifies that our argument extends beyond morally repugnant programs like MKULTRA to the costs and risks faced by many technical innovations. Second, reconnaissance satellites are a tough technological test of our theory because they are hard to keep secret, and because the research's need for cutting-edge experts across many scientific areas made openness attractive. Finally, there are historical quirks that provide a quasi-counterfactual test. CORONA occurred in a unique period in which the CIA was not widely known to be in the business of technical intelligence. Because of this, we know what would have happened if an open organization—that is, the Air Force, where it was originally pitched—were the only avenue for authorizing this bold innovation. There, it was rejected.
The Open Origins of CORONA
Monitoring the Soviet Union was a pressing issue for policymakers in the early Cold War.Footnote 107 As the Soviets’ ability to thwart US reconnaissance tools advanced, concerns about the continued viability of the U-2 spy plane grew. US policymakers wanted a more reliable option.Footnote 108 Thus, some in the Air Force conceived of Weapons System 117L (the antecedent to CORONA).Footnote 109 Responsibility for it was placed in the Western Development Division, which was managing ballistic missile development. According to a declassified history, “WDD had been established with handpicked military personnel and with special reporting channels for expediting program decisions.”Footnote 110 They initially solicited design bids from cleared government contractors. Lockheed won a contract, but funding challenges loomed.Footnote 111
The institutional structure surrounding WS-117L was internally open. The secretary of the Air Force, Donald Quarles, “responded to news of the [Lockheed] contract award by ruling that neither mockups nor experimental vehicles should be built without his specific prior approval.”Footnote 112 In other words, the research team could not pursue pilot testing without alerting their manager. Moreover, although WS-117L was technically a classified project, presumably to keep information from the Soviet Union, “program details were reported to, and approved by, Congress.”Footnote 113
From the perspective of Air Force managers, approving research on WS-117L presented low (but nonzero) political costs, but uncertain expected benefits. There was deep uncertainty about whether satellites were viable despite their enormous potential. According to a declassified history, “the technology to be embodied in the WS117L satellite was largely unproven; no satellite had even been orbited, and little was known of problems that might arise in a weightless, airless environment.” It also notes that Quarles “was not actively hostile to the satellite program as such, but had developed strong views about reliability and using low-risk technology.”Footnote 114 On the costs side, there was concern about unanticipated escalation. Eisenhower was also promoting the “space for peace” initiative, which “became a credo of US policy in 1955.”Footnote 115 Decision makers worried that if they authorized WS-117L they would be perceived as acting contrary to such commitments. Further, WS-117L was so novel that research into it could be perceived as wasteful. Quarles understood “the administration's commitment to eliminate ‘noncritical’ defense expenditures.” Weighing these costs and benefits, and despite the desire of the WS-117L research team, he “found ample justification for his stubborn refusal to approve the start of a meaningful development program.”Footnote 116
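The manager's calculus described above can be sketched in a few lines. This is a minimal illustration of the approval logic, not the paper's formal model: all numbers are hypothetical, and the function `approves` simply encodes the assumption that a manager funds a phase only when the expected political benefit outweighs the political cost.

```python
def approves(p_viable: float, benefit: float, cost: float) -> bool:
    """Manager approves a research phase only if the expected political
    benefit (probability the concept is viable times its payoff) exceeds
    the political cost of authorizing it. Illustrative only."""
    return p_viable * benefit > cost


# Before any pilot evidence, deep uncertainty (low p_viable) leads to
# rejection, as with Quarles and WS-117L. Parameters are hypothetical.
print(approves(p_viable=0.05, benefit=10.0, cost=1.0))  # False

# A secret pilot lets the researcher gather evidence that raises the
# manager's belief in viability; the same manager then approves.
print(approves(p_viable=0.40, benefit=10.0, cost=1.0))  # True
```

The sketch captures why secrecy matters early: only the researcher's pilot work, conducted without the manager's knowledge, moves `p_viable` high enough to clear the approval threshold.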
After it became clear that Air Force management would not adequately fund WS-117L, a plan was hatched to pursue it secretly. The project, conceived by Colonel Oder, was known as Second Story.Footnote 117 It had two prongs. First, it would be announced that WS-117L was being canceled and replaced with a scientific satellite overseen by the Air Force. This was a cover story. At the same time, the project would be covertly restarted and accelerated under the auspices of the CIA.Footnote 118 As noted, the CIA was just getting into the business of technical intelligence and thus was not an obvious choice to handle the project. This is likely why it did not originate there. Interestingly, however, a handful of the individuals involved with WS-117L were familiar with the Office of Science and Technology after working on the highly classified U-2 project.Footnote 119 Thus, the very fact that they proposed this option, which was outside the “‘normal’ development cycle,” is highly suggestive that internal secrecy was viewed, at least by the research team, as a way to advance a bold and risky innovation.Footnote 120
Sputnik's success in October 1957 took policymakers by surprise. While their earlier behavior was obviously not conditioned by an event that had not yet taken place, the Soviet Union's success in space altered their thinking, including on the importance and feasibility of this technology.Footnote 121 As such, the post-Sputnik period is effectively a separate case and beyond our current scope. Moreover, policymakers’ emphasis on limiting many discussions to oral briefings “owing to the extreme sensitivity” of the project means that “there are few official records in the project files bearing dates between 5 December 1957 and 28 February 1958.”Footnote 122 Nevertheless, our theory points to several key elements of this period that are worth highlighting.
First, the strong desire for external secrecy—in this case, concealing CORONA from the Soviets—meant that the CIA's ability to “maintain effective secrecy” was of paramount importance.Footnote 123 Second, efforts to preserve external secrecy resulted in deep internal secrecy, as evidenced by Eisenhower's admonition that “only a handful of people should know anything at all about it.”Footnote 124 The fact that the CIA director was “the only US Government employee authorized to spend money without substantiating vouchers” is also notable in that it almost certainly helped prevent higher-order principals like Congress from interfering.Footnote 125 Eisenhower's apparent decision to approve CORONA via “a handwritten note on the back of an envelope,” combined with the heavy emphasis on oral briefings, is also consistent with our mechanism focused on plausible deniability.Footnote 126
Conclusion
We have argued that secretive national security institutions are more innovative because they are secret. Secrecy is not equally valuable at every stage of innovation. Rather, it allows an enterprising researcher to pursue initial ideas that are so bizarre, morally controversial, or unlikely to work ex ante that their manager would refuse to fund the initial concept. But if pilot research confirms the researcher's intuition, she can convert it into an innovation. Such ideas underlie some of the most important innovations of the last century. The model explains how this mechanism theoretically drives different patterns of innovation in national security agencies versus other public-sector agencies.
While we emphasized the welfare-enhancing effects of internal secrecy, our framework is general. Future researchers should explore both the promise and the peril of secrecy for innovation. They could consider how other institutional features could maximize innovation while reducing the risk of waste and abuse. They could also examine diversity in institutional design to harness the late-stage advantages of open organizations and the early-stage advantages of secrecy.
These insights also have significant policy implications, particularly with respect to the return of great power competition generally and competition between the US and China specifically. On the one hand, innovation is viewed as a key pillar of this dynamic. Mike Rogers and Glenn Nye, two former US representatives on opposite ends of the political spectrum, argued in an op-ed that “the race to take leadership in advanced technologies such as artificial intelligence, quantum computing, and 5G networks will determine the future balance of geopolitical power.”Footnote 127 On the other hand, officials have emphasized political and ideological factors as relevant to great power competition. The Biden administration's National Security Strategy makes frequent mention of transparency and openness as being integral to competing with opaque, closed states like China and Russia.Footnote 128
Our framework and findings suggest that there is a potential tension between these two impulses. In particular, internal secrecy—which sits uncomfortably alongside calls for greater openness domestically and internationally—has facilitated some of the most radical innovations of the last century. Ultimately, the best course of action may be to maintain a diversity of institutions.
Supplementary Material
Supplementary material for this article is available at <https://doi.org/10.1017/S0020818324000250>.
Acknowledgments
We thank Erik Gartzke, Ron Hassner, Federica Izzo, Kendrick Kuo, Matthew Malis, Rachel Metz, Michael Miller, Erik Sand, Jane Vaynman, the Strategic Multilayer Assessment Speaker Session, and the participants in the Center of Peace and Security Studies Workshop at UC San Diego for helpful feedback. Thanks to the men and women of the national security community for helpful conversations. We also thank Peter Rosendorff, Ashley Leeds, and the rest of the editorial team at IO as well as the substantive and technical reviewers for feedback that greatly improved the manuscript. The views expressed are the authors’ own and do not represent those of the US Naval War College, the Department of the Navy, or any other organization of the US government.