
Inductive Risk and the Legitimacy of Non-Majoritarian Institutions

Published online by Cambridge University Press:  06 July 2023

Trym Nohr Fjørtoft
Affiliation: ARENA Centre for European Studies, University of Oslo, Norway
Corresponding author: Trym Nohr Fjørtoft; Email: [email protected]

Abstract

In political discourse, it is common to claim that non-majoritarian institutions are legitimate because they are technical and value-free. Even though most analysts disagree, many arguments for non-majoritarian legitimacy rest on claims that work best if institutions are, in fact, value-free. This paper develops a novel standard for non-majoritarian legitimacy. It builds on the rich debate over the value-free ideal in philosophy of science, which has not, so far, been applied systematically to the political theory literature on non-majoritarian institutions. This paper suggests that the argument from inductive risk, a strong argument against the value-free ideal, (1) shows why a naive claim to value freedom is a poor general foundation for non-majoritarian legitimacy; (2) provides a device for assessing the degree of democratic value input required for an institution to be legitimate; and (3) shows the conditions under which a claim to technical legitimacy might still be normatively acceptable.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

What is required for non-majoritarian expert institutions to be legitimate? This central question for modern democracies is growing in importance as ever more policy tasks are delegated to such institutions. Many debates over the legitimacy of non-majoritarian institutions revolve around the idea that these institutions are value-free. Proponents claim that non-majoritarian institutions are neutral bodies tasked with finding the objectively best way to pursue the goals set by politicians. Sceptics claim that facts and values are difficult or even impossible to disentangle and that appeals to value freedom are bound to fail.

Despite value freedom holding such a central place, the political theory literature on the legitimacy of non-majoritarian institutions engages surprisingly little with the lively debate over the value-free ideal in philosophy of science. In the latter literature, the value-free ideal is commonly defined as the ideal that ‘social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values’ (Douglas 2009, 1). The ideal has enjoyed wide historical support, but recent arguments have seriously challenged its empirical tractability and normative desirability. One of the strongest arguments against the value-free ideal in contemporary philosophy of science is the argument from inductive risk (Douglas 2009; Hempel 1965; Rudner 1953). It says that any scientific decision entails a risk of error – uncertainty. When making a decision in the face of uncertainty, scientists are forced to choose which type of error they are more willing to accept – false negatives or false positives. What is worse: claiming that a substance causes cancer when it does not, or claiming that it is safe when it causes cancer? Such choices are value-based. They cannot be determined by data or observation alone. The strong argument from inductive risk says that when there is a risk of epistemic error, and there are non-epistemic consequences of that error, values are not only warranted but required in the internal stages of science.[1]

This paper has two goals. The first is to introduce the argument from inductive risk to the debate over the legitimacy of non-majoritarian institutions. I suggest that the argument from inductive risk can explain why a naive claim to value freedom is a poor general foundation for non-majoritarian institutions' legitimacy. The second, more ambitious goal is to use inductive risk to determine the legitimacy demands that non-majoritarian institutions face. The argument allows us to go further than simply saying that facts and values are often intertwined. Risk is not binary; it is a scale. We can therefore use inductive risk to measure the amount of value input required for a non-majoritarian institution to be legitimate. In brief, I will defend the following thesis: the degree of democratic value input required for an institution to be legitimate increases with the institution's degree of inductive risk.

This paper proceeds as follows. Section 2 surveys the major debates over the legitimacy of non-majoritarian institutions and argues that many strands of the literature may be unified under the term technical legitimacy, of which value freedom is a key component. Section 3 presents the value-free ideal and the argument from inductive risk against it and demonstrates how the argument applies to the legitimacy of non-majoritarian institutions. Section 4 operationalizes the two dimensions that make up inductive risk: uncertainty and consequences. Together, they make up a measure of the legitimacy demands facing different non-majoritarian institutions. Section 5 demonstrates the measure's utility through a study of an archetypal case of non-majoritarian delegation – independent central banks. Section 6 concludes.

The Legitimacy of Non-Majoritarian Institutions

Legitimacy is, notoriously, used both as an empirical and a normative term. Empirical legitimacy tracks whether subjects believe they have substantive moral reasons to comply with an institution's directives. Normative legitimacy tracks whether subjects are right in these beliefs. An institution is normatively legitimate when acceptance of its directives would be expected from a rational person or a rational deliberation process (Buchanan 2018; Eriksen 2009, 27–8). There is a presumed link between normative and empirical legitimacy. For example, a complete lack of social acceptance would count against an institution's normative legitimacy. Note that empirical legitimacy tracks people's moral reasons to comply, not reasons based on fear, self-interest, and so on. The argument presented in this paper is normative, but empirical observations are admissible and informative parts of that argument.

The delegation of power to non-majoritarian institutions raises a basic democratic puzzle. Non-majoritarian institutions are unelected; they are not accountable to citizens in the traditional sense through a chain of delegation (Maggetti 2010). How can they then be legitimate?

A Technical Theory of Legitimacy

A large class of answers to the problem of non-majoritarian legitimacy says that these institutions are legitimate precisely because of their removal from electoral politics. This premise is shared by analysts in the regulatory state literature (Majone 1996) and proponents of output legitimacy (Scharpf 1999), and it is echoed in two more recent volumes on unelected power (Tucker 2018; Vibert 2007). In empirical research, the bureaucratic reputation literature's idea of ‘technical reputation’ reflects the same idea (Carpenter 2010). Finally, theorists of the political uses of expert knowledge say that an organization may enhance its legitimacy by drawing on neutral expert knowledge – or by being seen as doing so (Boswell 2009; Sabatier 1978; Weiss 1979). In this section, I will unpack the underlying idea of technical legitimacy that all these approaches have in common and demonstrate how they depend on a claim to value freedom.

Two main justifications exist for accepting, or even requiring, certain institutions' independence from majoritarian politics. The first is that non-majoritarian institutions ensure credible commitments to policies that promote the common good. Some decisions must be shielded from politicians seeking short-term gain (Majone 1996; Jacobs 2016; Tucker 2018). Politicians may, for instance, be incentivized to manipulate interest rates for short-term political gain, even when doing so generates inflation that leads to long-term economic loss. When politicians tie their hands by delegating the power to set interest rates to an independent central bank, they credibly commit to a certain inflation target (Kydland and Prescott 1977). The argument extends to all instances of regulation with a time-inconsistency problem. An independent regulator would be more credible in these cases because of its independence from democratic politics (Jacobs 2016; Maggetti 2010, 3).

The second justification says that a political system has a democratic obligation to ensure the epistemic quality of decisions. Modern democracies are complex, and specialized agencies possess expert knowledge that other parts of the political system lack. The best way to safeguard the epistemic quality of certain decisions is therefore to delegate them to experts. Here, credibility is not the main concern. Instead, independence ensures that expert evaluations are free from political distortions. The epistemic justification is a large part of the idea of output legitimacy and can also be found in, for instance, republican theories of democracy (see Holst and Molander 2019; Pettit 2004; Scharpf 1999; Steffek 2015).

These two justifications often operate in concert. For example, in a 2002 communication on the use of independent EU agencies, the European Commission writes: ‘The main advantage of using the agencies is that their decisions are based on purely technical evaluations of very high quality and are not influenced by political or contingent considerations’ (Commission of the European Communities 2002, 5). We find here an explicit appeal to the (epistemic) quality of decisions and to their removal from political considerations.

In summary, credible commitments and the epistemic quality of decisions are often said to justify an institution's independence from majoritarian politics. I will refer to arguments of this sort as appeals to technical legitimacy.

The Elements of Technical Legitimacy

The two justifications above are not, on their own, complete foundations for legitimacy. For technical legitimacy to be defensible, it must ensure the robust satisfaction of the reasons that grant an institution its right to rule (Sandven and Scherz 2022, 7). Three factors seem especially important in ensuring such robustness.

First, technical legitimacy only makes sense given the existence of a unitary and identifiable common good. For instance, Bickerton and Invernizzi Accetti describe technocracy as advancing an ‘unmediated conception of the common good’: there is an objective political truth, or a science of the common good, to which technocrats (and, by extension, experts) have privileged access (Bickerton and Accetti 2021, 3; Bellamy 2010; see also Caramani 2017; Gaus, Landwehr, and Schmalz-Bruns 2020; Urbinati 2014). A weaker version of the argument says only that experts are more likely to make conscientious and informed decisions about certain policies because they are free from the distorting incentives of majority rule (Bellamy 2010). Even the weak version, however, presupposes the existence of a political truth or unitary common good (see Friedman 2019) that is separate from the whims and wishes of the political majority.

Second, non-majoritarian institutions need to be able to access this ‘political truth’ and to know when they have done so. Technical legitimacy therefore appeals to expertise (Holst and Molander 2017). This will often be scientific expertise but could also include risk analysis and other analytical techniques. For instance, we delegate power to a central bank not only because it is independent but also because we believe it possesses economic expertise that will deliver the price stability we entrust it to maintain.

Third, technical legitimacy relies on an allegiance to the value-free ideal. I will return to a detailed exposition below; for now, the ideal entails that technical evaluations should be kept separate from (political) values. The ideal is made explicit in the above quote from the European Commission, and it is empirically observed as a central part of agencies' own legitimation and reputation-management strategies (Busuioc and Rimkutė 2020; Carpenter 2010; Fjørtoft 2022; Fjørtoft and Michailidou 2021). Value freedom implies a division of labour: politicians make value choices and set goals for an agency, and the agency finds the means to reach those goals – guided by ‘purely technical evaluations’ free from political interference (see Christiano 2012; Vibert 2007). This division of labour, modelled on the Weberian division of labour between the bureaucracy and political leadership, has been described as the ‘dominant twentieth-century solution to the problem of expertise’ (Pamuk 2021, 8).

In summary, the elements presented here make up a theory of technical legitimacy. Non-majoritarian institutions require independence from majoritarian politics to ensure credible commitments to, or the epistemic quality of, decisions. This is justified because they (1) are set up to promote an identifiable and incontestable common good, (2) hold the expertise that equips them to bring it about, and (3) engage in conduct that is value-free, that is, restricted to technical matters. An argument claiming technical legitimacy seems more normatively defensible and, plausibly, more empirically acceptable where these conditions are in place. However, the conditions may be realized to different degrees in different institutions. In this paper, I will take aim at the third element: value freedom.

Critics of Technical Legitimacy

Critics of what I call technical legitimacy have pointed out that, as an empirical matter, facts and values are almost always entangled in political decision-making (Eriksen 2021; Føllesdal and Hix 2006). This objection is often phrased in practical or empirical terms: value freedom is impractical, impossible, or rare. Take Richard Bellamy's objection as a typical example: ‘most “purely” technical decisions raise normative issues and are often less clear-cut empirically than is claimed’ (2010, 9). He continues that scientific arguments leave open normative questions about the solutions to problems, that expert judgements involve discretion, and that different economic theories might disagree about interest rate increases or decreases (2010, 9). A slightly different charge is that agencies often perform political tasks masked as technical operations (Boswell 2009; cf. Eriksen 2021, 785). And scholars in the tradition of science and technology studies (STS) have argued that facts and values are particularly intertwined in regulatory science – for instance, in the regulation of medicines or toxic substances – due to its place between ordinary research and policy-making (Jasanoff 2011).

Some critics go further than questioning value freedom on empirical terms. They challenge the conceptual possibility of a fact-value distinction altogether. This view is found, for instance, in the ‘strong programme’ in STS and in certain post-positivist approaches to policy analysis (see, for example, Fischer 2009; Latour and Woolgar 1986; cf. Goldman and O'Connor 2021). Such claims, however, challenge more than is necessary for the argument of this paper. To make my case maximally acceptable, I argue instead that the value-free ideal fails even when most premises of mainstream positivist science are upheld – including the fact-value distinction, the idea of hypothesis testing, and the premise that science is, and can be, truth-seeking. As such, mine is a critique from within. Indeed, the concept of epistemic error, a crucial component of the argument from inductive risk, only makes sense if the fact-value distinction holds.

In summary, the theory of technical legitimacy has been thoroughly challenged, yet it thrives as a normative ideal (see, for example, Christiano 2012; Vibert 2007) and as an empirically observable legitimizing strategy (Boswell 2009; Fjørtoft 2022; Maor 2007; Paul 2017; Rimkutė 2020). I believe part of the reason for this resilience is that technical legitimacy is, in some cases, an appropriate standard. Many proponents of technical legitimacy are clearly too optimistic about the potential for neutral facts to guide policy decisions directly. But if democracy has an obligation towards the epistemic quality of decisions alongside its majoritarian or representative obligation, some decisions might legitimately be shielded from majoritarian democracy. This is not inherently undemocratic; it is a premise shared by many plausible theories of democracy (see, for example, Pettit 2004; Steffek 2015). We need a measure of legitimacy that is open to non-majoritarian delegation but that, at the same time, ensures technical legitimacy does not overstep its boundaries. I suggest that the argument from inductive risk provides such a measure.

The Value-Free Ideal and the Argument From Inductive Risk

In this section, I will briefly introduce the value-free ideal and show how the argument from inductive risk has been used in philosophy of science to refute it. I will then unpack the argument's constituent parts – uncertainty and consequences – and move on to show how it may apply to debates over non-majoritarian institutions' legitimacy.

The Value-Free Ideal

The value-free ideal for science is the ideal that ‘social, ethical and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values’ (Douglas 2009, 1). I follow Heather Douglas's (2007, 2009) specification of the ideal: the internal stages of science should be free from non-epistemic values. There is a widely accepted distinction in the philosophy of science between epistemic and non-epistemic values, and between the internal and external stages of science. Epistemic values are those constitutive of the pursuit of knowledge itself – for instance, accuracy, internal coherence, and external consistency (McMullin 1982). Non-epistemic values are those that fall outside this demarcation – for instance, personal, social, or cultural values. The external stages of science comprise everything that is conceptually outside the conduct of research: the choice of research topic, ethical limitations on methodology (for instance, on the use of human subjects), the application of technologies emanating from the research, and policy implications derived from it. The internal stage is the research process itself, including the collection, analysis, and interpretation of data.

The use of epistemic values in science, whether external or internal, is widely accepted as necessary and desirable in modern philosophy of science. So is the use of non-epistemic values in the external stages of science (Douglas 2007, 121). For instance, we accept that some projects need approval by an institutional review board or an ethics committee before data collection can begin, and it is permissible to let the choice of research subject be guided by moral convictions (see, for example, King, Keohane, and Verba 1994, 12).

According to the value-free ideal, what is problematic is the influence of non-epistemic values on the internal stages of science. The ideal protects science's epistemic integrity against wishful thinking, political motivations, economic interests, and so on (De Melo-Martín and Intemann 2016). This is intuitively appealing: non-epistemic values in the internal stages of science seem to threaten the objectivity of, and trust in, scientific findings. According to the argument from inductive risk, however, the value-free ideal is not a good defence against these worries.

The Argument from Inductive Risk

The argument from inductive risk is often attributed to Rudner (1953) and was given its name by Hempel (1965). In contemporary philosophy of science, it has been developed most prominently by Douglas (2000, 2009). It begins from the observation that no evidence can guarantee the truth of a hypothesis. The decision to accept or reject a hypothesis is therefore associated with risk, since accepting a false hypothesis or rejecting a true one can have serious social or political consequences (Contessa 2021, 354; Douglas 2000, 561). When deciding whether the evidence is strong enough to justify accepting a hypothesis, scientists must make a value judgement about the ethical consequences of being wrong (Gundersen 2021, 163). Whenever such errors have non-epistemic consequences, non-epistemic values are thus not only permitted but required in science (Douglas 2000, 559). Scientists are morally responsible for their conduct as scientists, including the consequences of being wrong.

Take the example of null-hypothesis significance testing. Whether one should place the threshold for statistical significance (that is, for hypothesis acceptance) at a p-value of, for instance, 0.1, 0.05, or 0.01 is a choice that cannot be based on data, observations, or epistemic values alone. Even where a certain threshold, say 0.05, is the de facto convention in a scientific community, it is not based on epistemic values or empirical observation. Any choice of threshold entails accepting a higher risk of either false positives or false negatives. One cannot reduce both types of error at once; one can only make trade-offs between them (Douglas 2000, 566). The choice of threshold therefore comes with inductive risk and should be informed by the consequences of error.

A team of analysts in a tech company might, for instance, decide on a relatively low (permissive) threshold for hypothesis acceptance when running an A/B test of whether a red button on a website gives more clicks than a blue one. This is because they believe the consequences of being wrong are small. Conversely, when evaluating whether a certain food additive is safe for humans, a team of scientists might set a high (restrictive) threshold for accepting the hypothesis that a substance is safe. Again, this is because they would rather conclude that a safe substance is unsafe than the other way around. Such decisions are based on a normative judgement of the consequences of error in each case, which comes conceptually before any evidence assessment.
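To make the trade-off concrete, consider a minimal simulation (a sketch in Python; the effect size, sample size, and thresholds are illustrative choices of mine, not drawn from the paper or any study). It repeatedly tests a true null and a false null at different significance thresholds and counts the two error types.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_SIMS, N_OBS, TRUE_EFFECT = 5_000, 100, 0.3  # illustrative values

def error_rates(alpha: float) -> tuple[float, float]:
    """Estimate false-positive and false-negative rates at threshold alpha."""
    fp = fn = 0
    for _ in range(N_SIMS):
        # Null is true (mean 0): a 'significant' result is a false positive.
        null_sample = rng.normal(0.0, 1.0, N_OBS)
        if stats.ttest_1samp(null_sample, 0.0).pvalue < alpha:
            fp += 1
        # Null is false (mean 0.3): a 'non-significant' result is a false negative.
        effect_sample = rng.normal(TRUE_EFFECT, 1.0, N_OBS)
        if stats.ttest_1samp(effect_sample, 0.0).pvalue >= alpha:
            fn += 1
    return fp / N_SIMS, fn / N_SIMS

for alpha in (0.10, 0.05, 0.01):
    fp, fn = error_rates(alpha)
    print(f"alpha={alpha:.2f}  false positives ~{fp:.3f}  false negatives ~{fn:.3f}")
```

Tightening the threshold lowers the false-positive rate only at the cost of a higher false-negative rate; no threshold dominates the others. Which error matters more – clicks lost on a website or an unsafe additive approved – is exactly the value judgement the argument from inductive risk points to.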

A classical defence of the value-free ideal says that scientists can avoid the problem of inductive risk by refraining from accepting and rejecting hypotheses. They should instead assess the probabilities of hypotheses and communicate the relevant uncertainties to decision-makers. Decision-makers can then make the final decision, taking all the risks associated with different options into account (Jeffrey 1956). Thus, the internal stages of science remain value-free. But uncertainties are not always easily quantifiable and communicable. There is ‘second-order’ uncertainty in the assessment of uncertainty itself, which in turn requires value judgements (Steel 2015, 2016). Furthermore, inductive risk permeates all stages of the research process and cannot be neatly circumscribed to the final stage (Contessa 2021, 355). According to Douglas (2000, 565), ‘A chosen methodology assumed to be reliable may not be. A piece of data accepted as sound may be the product of error. An interpretation may rely on a selected background assumption that is erroneous.’ In all these cases, if there are non-epistemic consequences of being wrong, the researcher should consider non-epistemic values when making choices.

Applying the Argument to Non-Majoritarian Institutions

The argument from inductive risk is most often construed as an argument against value freedom in science. But, as we have seen, value freedom is an ideal for non-majoritarian institutions too, and it is part of the technical argument for their legitimacy. Moreover, like scientists, non-majoritarian institutions make choices about the interpretation of data, thresholds for hypothesis acceptance, methodological approaches, how much credence to give existing research, and so on. The argument therefore bears on the value freedom of non-majoritarian institutions.

Moving from science to non-majoritarian institutions requires some clarification. There are two stages at which inductive risk might come into play. The first is external, in the decision to delegate. In conventional principal-agent terminology, the principal runs an inductive risk whenever it decides to delegate to an agent. For instance, principals might be wrong in their predictions about agents' future conduct, and they might be wrong about the predicted costs and benefits (broadly understood) of delegation. The principals facing inductive risk here are typically elected officials. While I believe the concept of inductive risk can be fruitfully applied to this class of decisions, it is not the main target of my argument. Some of the relevant dilemmas are already captured by the delegation and accountability literature's concepts of agency capture and agency drift (see, for example, Schillemans and Busuioc 2015).

The other stage at which inductive risk might come into play is internal, in the day-to-day work of unelected bodies. Agency experts face inductive risk in their decisions in a way that resembles what scientists face in their work. Their actions also have non-epistemic consequences, since such bodies are, by design, set up to exercise some form of public authority. This internal stage is the target of my argument here. It is not, however, completely decoupled from the act of delegation: it seems reasonable that principals should consider the inductive risk that agency experts will face (to the extent that they can predict it) when deciding whether to delegate to an independent body.

If there is inductive risk in non-majoritarian bodies' reasoning and decisions, technical legitimacy is not automatically an appropriate source of legitimacy. Values are unavoidable parts of the knowledge claims these institutions make. This undercuts the clear-cut division of labour presupposed by (naive) technical legitimacy.[2]

On this point, the inductive-risk approach lands on the same broad scepticism towards technical legitimacy as many existing critiques. But it does so by another device, and its assessment of concrete cases sometimes differs. Take one example. Analysts in the regulatory state tradition hold that an institution that generates Pareto-efficient outcomes with no distributive consequences is legitimate despite (or because of) its lack of majoritarian democratic input. This argument has been central in discussions about the EU, independent regulatory agencies, and independent central banks (Maggetti 2010; Majone 1996; Tucker 2018). Critics have rejected that notion at the empirical level, showing that truly Pareto-efficient decisions are rare or that specific decisions have redistributive consequences (Dietsch 2020; Føllesdal and Hix 2006). If a decision has redistributive consequences, the argument goes, it belongs in the political domain; non-majoritarian institutions should not be allowed to decide who wins and who loses. Neither proponents nor critics, however, attach any probability to their claims about outcomes. Instead, they treat the expected outcomes of decisions as relatively fixed and certain.

If we allow for uncertainty about the expected outcomes of decisions, Pareto efficiency is not enough to legitimize non-majoritarian institutions. At this point, I depart from, for instance, Føllesdal and Hix's influential critique of Majone (Føllesdal and Hix 2006). They hold that Pareto-efficient decisions are empirically rarer than Majone supposes, but they do not challenge the conceptual point that Pareto efficiency can be a source of legitimacy. They draw a continuum from purely redistributive to purely efficient decisions, with consumer product standards and safety protection at the ‘efficient’ extreme, and argue that such decisions ‘might best be isolated from political interferences once the laws and other standards are identified’ (Føllesdal and Hix 2006, 542). Yet, as recent spats over the regulation of the herbicide glyphosate have shown, different regulatory agencies may land on opposite assessments of the same substance (see Busuioc and Rimkutė 2020, 7). This indicates that epistemic uncertainty is pervasive even at the purportedly efficient extreme of the spectrum. Due to inductive risk, value judgements are required even here, and Pareto efficiency is an insufficient basis for legitimacy.

Summing up, inductive risk provides an argument against a clear-cut division of labour between technical experts and value-laden politicians. This is not to deny that there might be good reasons, all things considered, to delegate tasks like consumer product standards and safety protection to an independent agency. But the legitimacy of such arrangements should not be evaluated by an underdetermined notion of Pareto efficiency or a naive reference to institutions' value freedom. Instead, it should be informed by the institution's inductive risk.

Note that inductive risk is unlikely to be the only salient factor in determining the legitimacy of an institution. My aim here is not to offer a total, encompassing theory of legitimacy. A full assessment of an institution's democratic legitimacy is likely to involve a wider set of normative considerations – for instance, institutions must at least fulfil a ‘minimal moral acceptability’ criterion of respecting basic human rights (Buchanan 2018, 59). Moreover, my account is compatible with broader democratic theories that find grounds for legitimate non-majoritarian power in the quality of reasoning or deliberation of agencies and their delegating procedures (Downey 2021; Eriksen 2021; Holst and Molander 2017; van't Klooster 2020). Finally, it expands on existing accounts by offering a concrete device with which to assess claims to technical legitimacy. By conceptualizing value input as a matter of degree, it can balance scepticism towards naive technical legitimacy against modern democracies' need to delegate certain decisions to unelected expert bodies.

A Two-Dimensional Concept of Legitimacy

The previous section addressed the paper's first goal by providing an inductive risk-based argument against grounding legitimacy in a naive claim to value freedom via, for instance, Pareto efficiency. But the argument from inductive risk allows us to go further than simply saying that facts and values are often intertwined. Risk is not binary; it is a scale. We can use inductive risk as a measure of the amount of value input required for a non-majoritarian institution to be legitimate. Where inductive risk is low, technical legitimacy runs into fewer problems than where it is high.

When applying the framework of inductive risk to the legitimacy of non-majoritarian institutions, it is more fruitful to speak of an institution's average level of inductive risk rather than that of individual decisions. This shift requires the operationalization of the concept's two dimensions – epistemic uncertainty and consequences of errors.

Let me first clarify what I mean by inductive risk as a measure of legitimacy. It may be objected that the assessment of an institution's inductive risk is itself a decision that carries inductive risk, such that an inductive risk-based approach to legitimacy merely moves the problematic division of labour one step further out.[3] Against this, note that my account is a device for the normative assessment of institutions' legitimacy. It is not intended as a guide for political decision-makers to pick up and apply directly. To be sure, my argument has institutional consequences, to which I return briefly below. But questions of institutional design and implementation are conceptually distinct from the question of normative analysis.

Second, what do I mean by democratic value input? I do not mean direct decision-making by majoritarian means. Epistemic decisions should not be made by plebiscite. The argument from inductive risk does not say that value judgements should play a direct role in making scientific decisions; they play an indirect role in evaluating the consequences of accepting or rejecting a claim. ‘Values weigh the importance of uncertainty, but not the claim itself’ (Douglas 2009, 103). Making epistemic decisions by direct majoritarian means would give value judgements a direct role, undermining institutions' obligation to safeguard the epistemic quality of their decisions. Instead, institutions need democratic value input in an indirect role.

Third, where should the values that inform institutions' decisions come from? Heather Douglas suggests that scientists should use their own personal values in scientific decisions. This seems too contingent when applied to non-majoritarian institutions (see Pamuk 2021, 16, for a critique). Instead, I take a cue from Andrew Schroeder's notion of democratic values. His contribution is concerned with trust in science but translates well to non-majoritarian institutions. Democratic values, which he defines as the values held by the public and its representatives, are what non-majoritarian institutions should appeal to when value judgements are called for (Schroeder 2021, 553). According to Schroeder, empirically informed political philosophy can tell us how to determine the public's values. This may involve some procedure like a deliberative forum, citizen science initiative, referendum, or opinion survey. The resulting values may be ‘filtered’ and ‘laundered’ to remove obviously illegitimate values (like racism) and to clean up values based on, for instance, false empirical beliefs. While Schroeder acknowledges that it might sometimes prove difficult to determine what those democratic values are, he maintains that, in many cases, it would not be especially difficult to at least approximate a democratic values approach in science (Schroeder 2021, 559). While a full development lies beyond the present paper, it seems plausible that his approach would work equally well in the context presented here.

Epistemic Uncertainty

There is epistemic uncertainty – that is, a chance of being factually wrong – associated with the epistemic choices that non-majoritarian institutions make. The more uncertainty, the higher the associated inductive risk. Conversely, where uncertainty is very low – where we believe there is almost no chance of being wrong – there is little to be gained by considering the consequences of being wrong. This is no different from how we think about risk in everyday life: we do not go around considering the consequences of being wrong in our prediction that the sun will rise tomorrow, because the chance of being wrong is so small (see also Douglas 2000, 577).

Many analyses of non-majoritarian institutions implicitly or explicitly appeal to epistemic uncertainty in their explanatory or evaluative typologies. For example, Radaelli (1999) explicitly theorizes uncertainty as one of two axes along which expertise use in the EU varies. Likewise, using different terms and definitions, Gormley (1986), Rimkutė (2015), Schrefler (2010), and Fjørtoft and Michailidou (2021) all employ concepts that can be restated as epistemic uncertainty. I suggest that institutions may, as a heuristic, be sorted by their expertise basis, where different expertise bases feature different average levels of epistemic uncertainty. For example, it is widely agreed that certain natural sciences, like physics, are characterized by less epistemic uncertainty – measured, for instance, by predictive accuracy or degree of scientific consensus – than the ‘softer’ sciences (Fanelli and Glänzel 2013; Smith et al. 2000).

Consequences of Error

The other component of inductive risk is the consequences of error. Given the same error rate, if the consequences are serious in one case and trivial in the other, we expect decisions to differ. And again, determining the ‘seriousness’ of a consequence is a value-based decision. In science, there might be some areas where making a wrong choice has no impact on anything outside the research project itself. In those cases, non-epistemic values do not come into play (Douglas 2000, 577). For example, it might be relatively harmless to make a mistake in certain esoteric areas of theoretical physics (but see Staley 2017 for a counterargument), while errors in nuclear science or in the evaluation of a large-scale policy intervention might have far-reaching consequences. This line of thought applies just as much to non-majoritarian institutions.

The consequences of a single decision can be captured quite straightforwardly: in Rudner's terms, we should assess how serious, ‘in the typically ethical sense’ (Rudner 1953, 3), the consequences of (for instance) mistakenly accepting or rejecting a given hypothesis would be. The average potential consequences of error in a non-majoritarian institution's decisions increase both with the seriousness of the domain in which it holds power and with the degree of power it holds over that domain. In the typically ethical sense, life-or-death issues are more serious than commissioning artworks for public buildings. And an institution with direct decision-making power can do more damage than one with only advisory power (for example, Scherz 2021).

It may be objected that the potentially adverse consequences of an institution's decisions should, in fact, count in favour of its value freedom – in other words, that more serious consequences warrant less value input. The argument would run something like this: it would be catastrophic to determine decisions over, for instance, nuclear policy by plebiscite. Such decisions should be left to unelected experts precisely because of the potentially adverse consequences of error. Value input would increase the probability of error and is, therefore, a mistake.

This objection fails to appreciate the distinction between independence – removing decisions from majoritarian democratic control – and value freedom. Value input does not entail granting majoritarian control over an epistemic procedure. The inductive risk-based approach only calls for the indirect use of values in weighing the consequences of error, not the direct use of values in making an epistemic claim itself. Epistemic claims should not be made by plebiscite at all. In my framework, there is therefore no trade-off between value input and epistemic accuracy.

Granted, the objection does show that adverse consequences could be one reason to support an institution's independence from majoritarian politics. But it does not refute the type of value input I am arguing for in this paper. We might, in many cases, want independent expert bodies deciding over technically complex matters with potentially serious consequences – precisely because we want to get things right. But due to inductive risk, experts in such institutions make all kinds of judgements and choices that are underdetermined by evidence alone. When their concern for epistemic accuracy cannot take them any further, experts should look to values when making such choices. These values gain more weight when consequences are severe than when they are trivial. And they should be democratic values, as outlined above.

An Inductive-Risk-Based Measure of Legitimacy Demands

Combining the two dimensions, we have an operative understanding of inductive risk and its link to legitimacy. Inductive risk is a function of the probability of error and the consequences of that error. Figure 1 is a graphical representation of the two-dimensional scheme. Inductive risk increases as you move up, to the right, or both.

Figure 1. A two-dimensional scheme of inductive risk.
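To illustrate how the scheme might be operationalized, the sketch below scores hypothetical institutions on the two dimensions and maps the combination onto a demanded level of value input. The scales, the multiplicative functional form, the thresholds, and the placements are all stipulations of mine for illustration, not measurements or prescriptions from this paper.

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    uncertainty: float   # epistemic uncertainty, scaled 0 (low) to 1 (high)
    consequences: float  # seriousness of domain x degree of power, scaled 0 to 1

def inductive_risk(inst: Institution) -> float:
    # A product mirrors the intuition that near-zero uncertainty makes the
    # consequences of error moot, and vice versa; any function increasing
    # in both dimensions would fit the two-dimensional scheme equally well.
    return inst.uncertainty * inst.consequences

def demanded_value_input(inst: Institution) -> str:
    risk = inductive_risk(inst)
    if risk < 0.10:  # thresholds are illustrative, not derived from the paper
        return "low: transparency about value judgements may suffice"
    if risk < 0.40:
        return "medium: some procedures for democratic value input"
    return "high: robust mechanisms for democratic value input"

# Stylized placements echoing the three scenarios discussed below (guesses).
for inst in (
    Institution("hard-science EU agency, narrow domain", 0.2, 0.2),
    Institution("independent central bank", 0.5, 0.7),
    Institution("Frontex risk analysis", 0.8, 0.8),
):
    print(f"{inst.name}: {demanded_value_input(inst)}")
```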

What does the measure imply in practice when assessing concrete institutions? I will discuss three scenarios characterized by low, medium, and high inductive risk.

There might be good reasons to give experts much power, with little political interference, over certain domains. The inductive risk-based approach easily allows this where inductive risk is low, whether due to low epistemic uncertainty or to limited potential consequences. For instance, EU agencies based on hard-science expertise, deciding over issues that are relatively restricted or specialized, might be legitimate even without a strong set of procedures for democratic value input. Importantly, such agencies are not legitimate because of their ‘purely technical evaluations’ or Pareto-efficient outcomes. They are legitimate because their inductive risk is below some threshold for acceptable risk.[4] Accountability procedures, public consultations, and other procedures for value input are costly and might not be worth the effort. Note that even these agencies cannot claim absolute value freedom: however limited, a basic check on their use of values is warranted. A simple transparency criterion might be enough – agencies should be transparent about their value judgements and open them up to public scrutiny.

In cases of medium inductive risk, more value input is usually required. I discuss such a case in the case study below (Section 5). There, I show that independent central banks are often conceived of as cases of low inductive risk, whose high independence is obviously warranted. I argue that their degree of inductive risk is higher and probably belongs in the medium category. Central bank independence therefore cannot be justified by reference to pure technical legitimacy; these institutions would benefit from integrating at least some degree of democratic values.

A third class of cases is where inductive risk is very high, due to high epistemic uncertainty, serious consequences of error, or both. For these institutions, appeals to technical legitimacy are insufficient, and stronger mechanisms for value input are required for legitimacy. Take one example. EU agencies exist not only in the classic areas of food safety and medicines regulation but also in fields further removed from ‘hard’ science. For instance, the European Border and Coast Guard Agency, Frontex, has received new powers and a steadily increasing budget over the past decade, making it a powerful agency with executive powers. Nevertheless, it is still set up as an EU agency, which makes it formally independent from elected politicians. Crucially, the agency appeals to its independent expertise and its neutral, objective basis for operations – technical legitimacy – when legitimizing itself (Fjørtoft 2022; Paul 2017).

The agency surely wields significant expertise in its risk analysis and vulnerability assessment units. But this expertise is characterized by high epistemic uncertainty: migration risks are notoriously difficult to assess. Even in the agency's own documents, we find warnings against conveying a ‘false sense of precision’ to decision-makers (Fjørtoft 2022, 10). Moreover, risk analysis and vulnerability assessments are linked to action via the so-called ‘right to intervene’, meaning that any errors in the assessments may have clear non-epistemic consequences – including direct effects on (prospective) migrants who are subject to the operations of Frontex's Standing Corps. Therefore, by the standards of inductive risk, the agency's claim to technical legitimacy seems to fall short.

This does not mean that Frontex should be stripped of its powers, but its legitimacy cannot rest on independence and technical neutrality. Instead, it would have to depend on a robust mechanism for involving democratic values in the agency's decisions – not only in its operational branch but also in its analytical work. The concrete institutional setup of such an arrangement is beyond the scope of this paper, but a robust mechanism would likely require reform both of the agency itself and of other parts of the border regime (see Fjørtoft and Sandven 2022 for an extended argument).

Inductive Risk in Central Banks: Uncertainty and Consequences

I have argued that epistemic uncertainty and consequences make up a two-dimensional measure of the degree of democratic value input necessary for an institution to be legitimate. One of the most clear-cut examples of non-majoritarian power in modern democracies is independent central banks' power over monetary policy. Some have gone so far as to label central bank independence a ‘free lunch’ (Grilli et al. 1991, 375). But considerations of inductive risk have rarely figured in debates over central banks' legitimacy. If we take an inductive risk-based approach, we see that the lunch might come at a cost after all.

In classical economic theory, the power to set interest rates is delegated to independent central banks because it solves a time-inconsistency problem. Politicians may have incentives to manipulate interest rates for short-term political gain, to the detriment of the economy in the long run. Delegation is necessary for politicians' commitments to price stability to be credible and is, therefore, in the public interest. A central bank bases its decisions on macroeconomic theory. What happens, then, if there is inductive risk in macroeconomic theory?

Contessa (2021) argues that there is. In the mid-twentieth century, the prevailing view in macroeconomic theory was that there was a trade-off between inflation and unemployment. This relationship – the so-called Phillips curve – was supported by empirical observations. It suggested a dilemma for policymakers: they could try to reduce inflation or unemployment, but not both. This picture started to show cracks in the 1960s. Milton Friedman and Edmund Phelps argued that the Phillips curve failed to take into account the inflationary expectations of economic agents (Contessa 2021, 356). Instead of a trade-off, there is a natural rate of unemployment corresponding to the equilibrium in the markets for labour and goods. Attempts by politicians or central banks to drive down unemployment through expansionary monetary policy will work in the short term. But in the long run, unemployment bounces back to its ‘natural’ level while inflation remains high. This is often called natural rate theory. The theory implies that one should not try to control unemployment through monetary policy and that doing so might have detrimental effects. Furthermore, there is little to be lost by setting very low inflation targets, since unemployment is unaffected. If this theory is true, it clearly justifies delegation. Central bank independence, on this view, is a ‘free lunch’: there are benefits without apparent costs in terms of long-term macroeconomic performance (Grilli et al. 1991, 375).
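The contrast between the two views can be stated compactly. In a standard textbook formulation (my gloss, not Contessa's notation), with inflation \pi_t, expected inflation \pi^e_t, unemployment u_t, and natural rate u_n:

```latex
% Original Phillips curve: a stable, exploitable trade-off
\pi_t = -\alpha\,(u_t - u_n), \qquad \alpha > 0
% Expectations-augmented (Friedman--Phelps) Phillips curve
\pi_t = \pi^{e}_t - \alpha\,(u_t - u_n)
% In the long run expectations catch up, \pi_t = \pi^{e}_t, which forces
% u_t = u_n: unemployment returns to its 'natural' rate at any inflation
% rate, so expansionary policy buys no lasting reduction in unemployment.
```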

But consider the following inductive risk: what if natural rate theory is not true? What if there is instead a long-term trade-off between unemployment and inflation? A zero-inflation target would, in that case, be poor economic policy. Akerlof and Shiller estimate the costs of pursuing a zero-inflation target in such a scenario: ‘The calculated increase in the unemployment rate of 1.5 per cent would render jobless 2.3 million people [in the US] … [and] entail a loss of GDP of more than $400 billion per year’ (Akerlof and Shiller, cited in Contessa 2021, 360).
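As a rough plausibility check (my own back-of-the-envelope arithmetic, using approximate late-2000s US figures rather than numbers from Contessa or from Akerlof and Shiller), the two quantities are mutually consistent:

```latex
% Jobless count: 1.5 percentage points of a roughly 154-million labour force
0.015 \times 154 \text{ million} \approx 2.3 \text{ million people}
% Output loss via Okun's law (about 2\% of GDP per point of unemployment),
% applied to a GDP of roughly \$14 trillion:
1.5 \times 0.02 \times \$14 \text{ trillion} \approx \$420 \text{ billion per year}
```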

Contessa goes on to present a case study of the Canadian central bank in the 1990s. The bank interpreted its mandate narrowly, as price stability above all else, believing this would lead to ‘a healthy economy’ (Contessa 2021, 363). Yet the policy led to (or exacerbated) a recession in Canada. Pursuing a strict price stability mandate was, in this case, not merely an error by some external benchmark. It was self-defeating by the bank's own standards: it failed to bring about the bank's own goal of economic stability.

Other contributions support the argument that more inductive risk is associated with central bank independence than has commonly been assumed. For instance, Hansen (2021) shows empirically that banking crises produce larger unemployment shocks when the level of central bank independence is high – but only when banks have a strict inflation-centric mandate. If these arguments are correct, a narrow focus on inflation fails, at least in certain cases, to bring about the outcomes it is theorized to produce. This is a clear case of non-epistemic consequences flowing from epistemic uncertainty in macroeconomic theory – in other words, a clear case of inductive risk.

Contessa is, to my knowledge, the only author who has explicitly linked central banks' use of macroeconomic theory to inductive risk, but he does not discuss the implications for central banks' legitimacy. Other recent contributions have been more explicitly concerned with central bank legitimacy and the ethics of delegation but have not used the framework of inductive risk (see, for example, Dietsch 2020; Downey 2021; van't Klooster 2020).

For instance, van't Klooster (2020) observes that central banks started taking on new tasks after the 2008 financial crisis. Even if we accept the premise that a narrow price stability mandate was adequate before the crisis, central banks now have ‘many more instruments to use in pursuit of a much less clearly defined set of goals’ (van't Klooster 2020, 596). Central banks must now consider a wider range of public interests, which requires a rethinking of their mandates. In other words, their power has increased, and with it the potential consequences of their errors. The pre-crisis justification of central bank independence was adequate only because the consequences of errors were (setting aside, for now, the counterarguments discussed above) relatively limited. With greater potential consequences come greater legitimacy demands.

Taken together, these considerations show how the example of central banks illustrates the two dimensions of an inductive risk-based account of non-majoritarian legitimacy. Strong central bank independence bound to a narrow price stability mandate is normatively legitimate only if epistemic uncertainty is low or if the potential consequences of error are relatively limited. Both premises may be challenged. The inductive risk associated with central bank independence and its macroeconomic assumptions may be so large as to undermine the classical argument for central bank independence. An immediate implication is that democratic values must, to a greater extent, be taken into account in central bank deliberations.

Note that this argument does not invalidate the concept of central bank independence altogether. I agree with van't Klooster (2020, 587) that ‘it should in principle be permissible for governments to delegate political choices to unelected experts. … What matters is whether the government has an adequate justification for its decision to delegate.’ On my argument, that justification must include a way for central bankers to appeal to democratic values instead of requiring them to ‘cloak their arguments in terms of the price stability mandate’ (van't Klooster 2020, 597).

Conclusion

This paper has argued that the amount of democratic value input required for a non-majoritarian institution to be legitimate depends on its inductive risk. Every choice in a truth-seeking procedure comes with uncertainty. There is a chance of being wrong. When there are non-epistemic consequences of being wrong, values are required in deciding which types of error, and which consequences, we are more willing to accept. While the argument from inductive risk originated in the philosophy of science, the paper has shown how it applies to non-majoritarian institutions.

The inductive risk-based conception of legitimacy offers an argument against a naive claim to technical legitimacy. It is insufficient to justify the power of an institution by reference to its neutrality and value freedom alone. Instead, such a claim must be made in consideration of the institution's inductive risk. If we should ever accept a claim to pure technical legitimacy, it is when that risk is low. When the inductive risk is higher, we should expect institutions to appeal more explicitly to democratic values and include a procedure to determine those values.

In practice, this means that an institution with limited power over a restricted domain, and whose decisions carry low epistemic uncertainty, might be relatively free of procedures for democratic value input. But most real-world independent agencies carry some inductive risk, and some carry a lot. Wherever high-inductive-risk institutions are designed as highly independent bodies with a purely technical mandate, they should therefore be reformed.

Some crucial questions are left for future analysis. Notably, a full account of the precise mechanisms for democratic value input is beyond the scope of this paper. In contrast to the dominant division-of-labour model, on my account institutions' own experts must make value judgements themselves. It is not sufficient to defer democratic control and accountability to a separate stage after the knowledge claims are sorted out. We need mechanisms that ensure the experts are attuned to the values of their political society.[5] A range of mechanisms might be needed, depending on the level of value input required and on the characteristics of the institution in question.

Finally, my argument has implications for an issue left untouched by this paper, namely the decision to delegate. Delegation is often unavoidable (at the very least because there is a limited number of elected officials) and often desirable, because it might bring the benefits that technical legitimacy promises. A possible implication of an inductive risk-based account is that decisions to delegate power to independent expert bodies should be informed by a society's tolerance for risk in the relevant field. When the benefits of delegating certain decisions to experts outweigh the (inductive) risks, delegation should – all else being equal – be permitted (see also Buchanan 2018). If this line of thinking holds, two questions must be sorted out, the answers to which are themselves characterized by epistemic uncertainty: How should we determine the expected benefits of delegation? And how should we weigh benefits against risks? These questions merit further substantive analysis.

Acknowledgements

I wish to thank Erik O. Eriksen, Eilev Hegstad, Cathrine Holst, Asimina Michailidou, Regine Paul, Claudio Radaelli, Hallvard Sandven, Jens Steffek, Jovana Todorović, and participants at the GOODPOL Final Conference in Oslo, May 2022, for comments on earlier drafts. Thanks also to three anonymous reviewers for this journal, whose comments much improved the article.

Financial support

Support for this research was provided by the Research Council of Norway project ‘Democracy and Expert Rule: The Quest for Reflexive Legitimacy (REFLEX)’ (Project Number: 250436).

Competing interests

None.

Footnotes

1 The argument also comes in a weak variant, which says values are warranted, whereas the strong variant says they are also required (see Gundersen 2021). The distinction makes no difference to the argument presented here.

2 Traces of inductive-risk-based arguments, although not explicitly developed, are found in the literature on independent agencies. For instance, Madalina Busuioc implicitly makes an inductive-risk-based argument when she claims that ‘value judgments on the acceptability of risk are integral parts of scientific decisions and of the decisions of the [European Medicines Agency]’ (Busuioc 2013, 217). Furthermore, she echoes Douglas's critique of the value-free ideal when she continues: ‘Such decisions are being taken exclusively by experts, under the guise of a formal, yet in this case de facto meaningless, separation between risk assessment and risk management’ (218).

3 I am grateful to an anonymous reviewer for this objection.

4 The important question of how to set that threshold is outside the scope of this paper. For the normative assessment of legitimacy, the threshold might be determined by substantive normative argument. Another option is to treat the threshold empirically as a given society's risk tolerance.

5 Eriksen (2021) presents one possible model, which requires agencies to ground their value judgements in a publicly accessible framework of reasoning, like their mandate. Pamuk (2021) presents a more radical model of an adversarial ‘science court,’ an idea which might be transferred to the specific context of non-majoritarian institutions.

References

Bellamy, R (2010) Democracy without democracy? Can the EU's democratic ‘outputs’ be separated from the democratic ‘inputs’ provided by competitive parties and majority rule? Journal of European Public Policy 17(1), 2–19.
Bickerton, C and Accetti, CI (2021) Technopopulism: The New Logic of Democratic Politics. Oxford: Oxford University Press.
Boswell, C (2009) The Political Uses of Expert Knowledge: Immigration Policy and Social Research. Cambridge: Cambridge University Press.
Buchanan, A (2018) Institutional legitimacy. In Sobel, D, Vallentyne, P and Wall, S (eds), Oxford Studies in Political Philosophy Volume 4. Oxford: Oxford University Press, pp. 53–78.
Busuioc, M (2013) European Agencies: Law and Practices of Accountability. Oxford Studies in European Law. Oxford: Oxford University Press.
Busuioc, M and Rimkutė, D (2020) The promise of bureaucratic reputation approaches for the EU regulatory state. Journal of European Public Policy 27(8), 1256–69.
Caramani, D (2017) Will vs. reason: The populist and technocratic forms of political representation and their critique to party government. American Political Science Review 111(1), 54–67.
Carpenter, DP (2010) Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA. Princeton, NJ: Princeton University Press.
Christiano, T (2012) Rational deliberation among experts and citizens. In Parkinson, J and Mansbridge, J (eds), Deliberative Systems: Deliberative Democracy at the Large Scale. Cambridge: Cambridge University Press, pp. 27–51.
Commission of the European Communities (2002) Communication from the Commission on the Operating Framework for the European Regulatory Agencies. COM(2002) 718 Final.
Contessa, G (2021) Inductive risk in macroeconomics: Natural rate theory, monetary policy, and the great Canadian slump. Economics and Philosophy 37(3), 353–75.
De Melo-Martín, I and Intemann, K (2016) The risk of using inductive risk to challenge the value-free ideal. Philosophy of Science 83(4), 500–20.
Dietsch, P (2020) Independent agencies, distribution, and legitimacy: The case of central banks. American Political Science Review 114(2), 591–95.
Douglas, H (2000) Inductive risk and values in science. Philosophy of Science 67(4), 559–79.
Douglas, H (2007) Rejecting the ideal of value-free science. In Kincaid, H, Dupré, J and Wylie, A (eds), Value-Free Science? Ideals and Illusions. Oxford: Oxford University Press, pp. 120–41.
Douglas, H (2009) Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Downey, L (2021) Delegation in democracy: A temporal analysis. Journal of Political Philosophy 29(3), 1–25.
Eriksen, EO (2009) The Unfinished Democratization of Europe. Oxford: Oxford University Press.
Eriksen, A (2021) Political values in independent agencies. Regulation & Governance 15(3), 785–99.
Fanelli, D and Glänzel, W (2013) Bibliometric evidence for a hierarchy of the sciences. PLoS ONE 8(6), 1–11.
Fischer, F (2009) Democracy and Expertise: Reorienting Policy Inquiry. Oxford: Oxford University Press.
Fjørtoft, TN (2022) More power, more control: The legitimizing role of expertise in Frontex after the refugee crisis. Regulation & Governance 16(2), 557–71.
Fjørtoft, TN and Michailidou, A (2021) Beyond expertise: The public construction of legitimacy for EU agencies. Political Research Exchange 3(1), 1–26.
Fjørtoft, TN and Sandven, H (2022) Symmetry in the delegation of power as a legitimacy criterion. Journal of Common Market Studies, Early View, 1–17.
Føllesdal, A and Hix, S (2006) Why there is a democratic deficit in the EU: A response to Majone and Moravcsik. Journal of Common Market Studies 44(3), 533–62.
Friedman, J (2019) Power Without Knowledge: A Critique of Technocracy. Oxford: Oxford University Press.
Gaus, D, Landwehr, C and Schmalz-Bruns, R (2020) Defending democracy against technocracy and populism: Deliberative democracy's strengths and challenges. Constellations 27(3), 335–47.
Goldman, A and O'Connor, C (2021) Social epistemology. In Zalta, EN (ed.), The Stanford Encyclopedia of Philosophy, Winter 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/epistemology-social/.
Gormley, WT Jr (1986) Regulatory issue networks in a federal system. Polity 18(4), 595–620.
Grilli, V et al. (1991) Political and monetary institutions and public financial policies in the industrial countries. Economic Policy 6(13), 341–92.
Gundersen, T (2021) Values in expert reasoning. In The Accountability of Expertise: Making the Un-Elected Safe for Democracy. Milton Park, Abingdon, Oxon: Routledge, pp. 155–72.
Hansen, D (2021) The economic consequences of banking crises: The role of central banks and optimal independence. American Political Science Review 116(2), 453–69.
Hempel, CG (1965) Science and human values. In Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: The Free Press, pp. 81–96.
Holst, C and Molander, A (2017) Public deliberation and the fact of expertise: Making experts accountable. Social Epistemology 31(3), 235–50.
Holst, C and Molander, A (2019) Epistemic democracy and the role of experts. Contemporary Political Theory 18(4), 541–61.
Jacobs, AM (2016) Policy making for the long term in advanced democracies. Annual Review of Political Science 19(1), 433–54.
Jasanoff, S (2011) The practices of objectivity in regulatory science. In Camic, C, Gross, N and Lamont, M (eds), Social Knowledge in the Making. Chicago: University of Chicago Press, pp. 307–38.
Jeffrey, RC (1956) Valuation and acceptance of scientific hypotheses. Philosophy of Science 23(3), 237–46.
King, G, Keohane, RO and Verba, S (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
Kydland, FE and Prescott, EC (1977) Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy 85(3), 473–91.
Latour, B and Woolgar, S (1986) Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press.
Maggetti, M (2010) Legitimacy and accountability of independent regulatory agencies: A critical review. Living Reviews in Democracy, 1–9.
Majone, G (1996) Regulating Europe. London, New York: Routledge.
Maor, M (2007) A scientific standard and an agency's legal independence: Which of these reputation protection mechanisms is less susceptible to political moves? Public Administration 85(4), 961–78.
McMullin, E (1982) Values in science. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1982, 3–28.
Pamuk, Z (2021) Politics and Expertise: How to Use Science in a Democratic Society. Princeton, NJ: Princeton University Press.
Paul, R (2017) Harmonisation by risk analysis? Frontex and the risk-based governance of European border control. Journal of European Integration 39(6), 689–706.
Pettit, P (2004) Depoliticizing democracy. Ratio Juris 17(1), 52–65.
Radaelli, CM (1999) The public policy of the European Union: Whither politics of expertise? Journal of European Public Policy 6(5), 757–74.
Rimkutė, D (2015) Explaining differences in scientific expertise use: The politics of pesticides. Politics and Governance 3(1), 114–27.
Rimkutė, D (2020) Building organizational reputation in the European regulatory state: An analysis of EU agencies' communications. Governance 33(2), 385–406.
Rudner, R (1953) The scientist qua scientist makes value judgments. Philosophy of Science 20(1), 1–6.
Sabatier, P (1978) The acquisition and utilization of technical information by administrative agencies. Administrative Science Quarterly 23(3), 396–417.
Sandven, H and Scherz, A (2022) Rescue missions in the Mediterranean and the legitimacy of the EU's border regime. Res Publica 28, 673–92.
Scharpf, FW (1999) Governing in Europe: Effective and Democratic? Oxford: Oxford University Press.
Scherz, A (2021) Tying legitimacy to political power: Graded legitimacy standards for international institutions. European Journal of Political Theory 20(4), 631–53.
Schillemans, T and Busuioc, M (2015) Predicting public sector accountability: From agency drift to forum drift. Journal of Public Administration Research and Theory 25(1), 191–215.
Schrefler, L (2010) The usage of scientific knowledge by independent regulatory agencies. Governance 23(2), 309–30.
Schroeder, SA (2021) Democratic values: A better foundation for public trust in science. British Journal for the Philosophy of Science 72(2), 545–62.
Smith, LD et al. (2000) Scientific graphs and the hierarchy of the sciences: A Latourian survey of inscription practices. Social Studies of Science 30(1), 73–94.
Staley, KW (2017) Decisions, decisions: Inductive risk and the Higgs Boson. In Exploring Inductive Risk: Case Studies of Values in Science. Oxford: Oxford University Press, pp. 37–56.
Steel, D (2015) Acceptance, values, and probability. Studies in History and Philosophy of Science 53, 81–88.
Steel, D (2016) Climate change and second-order uncertainty: Defending a generalized, normative, and structural argument from inductive risk. Perspectives on Science 24(6), 696–721.
Steffek, J (2015) The output legitimacy of international organizations and the global public interest. International Theory 7(2), 263–93.
Tucker, P (2018) Unelected Power: The Quest for Legitimacy in Central Banking and the Regulatory State. Princeton, NJ: Princeton University Press.
Urbinati, N (2014) Democracy Disfigured: Opinion, Truth, and the People. Cambridge, MA: Harvard University Press.
van't Klooster, J (2020) The ethics of delegating monetary policy. Journal of Politics 82(2), 587–99.
Vibert, F (2007) The Rise of the Unelected: Democracy and the New Separation of Powers. Cambridge: Cambridge University Press.
Weiss, CH (1979) The many meanings of research utilization. Public Administration Review 39(5), 426–31.
[Figure 1. A two-dimensional scheme of inductive risk.]