Introduction
The uncodifiability thesis asserts that there is no way to fully delineate the relationship between moral and nonmoral properties. As such, the uncodifiability thesis holds that there is no way to finitely or reasonably detail the exceptions to the moral principles and rules that pluralistic generalists support.Footnote 1 From this brief description, it should be apparent that, if the uncodifiability thesis is true, monistic theories about the good and the right face serious challenges, because such theories hold that the relationship between moral and nonmoral properties is strictly reducible to one common denominator, namely, whatever their respective monisms are predicated upon. However, not all generalists are monists, and pluralist ethical theories that support non-absolute generalities do not obviously succumb to such a critique. In this article, I explore the uncodifiability thesis in greater detail and critically examine its implications for principlism as a pluralistic and non-absolute generalist ethical theory. In particular, I focus on the form of principlism developed by Tom Beauchamp and James Childress because of the significant influence their work has had on the field of bioethics.
Ross, Intuitionism, and Ethical Pluralism
Historically, W.D. Ross’ intuitionism has often been classified as a form of commonsense ethics. As such, Ross believed that one of the most important goals of ethical theory is to account for the actual ethical beliefs and practices held by ordinary people. In addition, he placed a high priority on the role of good judgment or practical wisdom in ethics, both in determining how to resolve conflicts of duties and in ascertaining what duties are relevant to a given case. Like a number of other commonsense ethicists, Ross believed that this conception of a complex and pluralistic commonsense morality can best be captured by general, but defeasible, ethical principles. Because of this focus on principles, I will refer to Ross and similar theorists as “principlists” and to their theories as “principlism.”Footnote 2
Principlists argue that the principles that they support are both foundational and the locus of moral certainty in ethics. Such principles differ from the principles of traditional ethical theories by being prima facie, rather than absolute, and by endorsing a variety of independent and irreducible ethical goods. Monistic ethical theories can (and often do) espouse so-called mid-level exceptionable principles that are often identical to the principles of principlists, but such mid-level principles are reducible to other theoretical commitments and thus are neither foundational nor as emphasized as the exceptionable principles of principlists. Ross’ theory has six main duties, each of which can be formulated into a defeasible principle:
(1) Duties that result from one’s own past actions, which can be further divided into two subtypes:
    (A) Duties of fidelity (i.e., I promised)
    (B) Duties of reparation (i.e., I did some wrong)
(2) Duties from the previous acts of others
(3) Duties of justice
(4) Duties of beneficence
(5) Duties of self-improvement
(6) Duties of nonmaleficenceFootnote 3
Duties such as these are viewed as being fundamental and foundational because they are not derived from other duties or from other theoretical commitments. In addition, disagreements about the number and type of duties can be resolved by ascertaining whether the duties in question are wholly underived from other duties. Depending on the result of such inquiries, the number of fundamental duties can conceivably be expanded or reduced. Ross himself at times shortens or lengthens the above list by, for example, subsuming the fifth duty under the fourth. From the short list of fundamental duties, one can then develop a longer and more specific list of secondary, derived duties. Furthermore, Ross acknowledges that both fundamental and derived duties are often found intertwined and at times in conflict. This interaction might occur in a relatively innocuous form, such that one might have an obligation to perform a specific action that arises from several of the above fundamental duties. For example, helping a parent may arise from duties (1), (2), and (4).
A more troublesome interaction occurs when duties conflict, especially fundamental duties. This potential for conflict among fundamental duties and their respective principles gives rise to one of the most important claims of principlism, namely, that such principles (and duties) are not absolute. Ross formulates his principles with the clear understanding that exceptions can be made to them. However, the only allowable exceptions are those that arise when two principles conflict. For example, at times duties that arise from promises can conflict with duties that arise from justice or beneficence. If one’s duties are absolute, an irresolvable impasse is reached in such situations, and rational and moral action becomes impossible. Unfortunately, such conflicts, while not obviously common, do occur with some frequency, and any moral theory that allows them to lead to impasses appears to be deficient and impractical.
In particular, advocates for Ross’ non-absolute principles claim that they are able to avoid this breakdown of moral rationality by allowing for pertinent exceptions. When such principles do conflict, one “balances” them against each other or otherwise evaluates them to decide which principle carries the most “weight” and thus should be followed. Because Ross’ principles are defeasible, they have traditionally been called prima facie principles, but it has been suggested by Brad Hooker that they more accurately should be called pro tanto principles.Footnote 4 Referring to such principles as being prima facie suggests that when they are instantiated in cases they appear to be reasons at first glance but, upon closer examination, either the first impression was mistaken or the reason disappears. In contrast, pro tanto means “as far as this goes,” and this terminology more accurately suggests that the reason, while it may be overridden by other principles, still remains a relevant reason, arguably even with all of its original force.
Using particularist terminology, this understanding of pro tanto principles suggests that moral properties do not change either the direction or strength of their valence when they are overturned in ethical conflicts. Rather, they are simply overwhelmed by other considerations at those times. According to this understanding, if an act is just or kind or truthful, etc., it is always a right-making feature of such acts. On this account, not all just acts are right, for there can be other and stronger moral considerations that apply to specific acts, but justice always counts in favor of an act. Thus, the pro tanto principle “one ought to keep one’s promises” entails that promise-keeping is always good-making, but it is only obligatory so long as it does not conflict with another moral principle. To briefly summarize, Ross’ pro tanto principles claim that a duty, if present, always counts for or against an action. If only one duty is present, it decides the action. At times, two or more duties will apply to the same action, and if they conflict, one chooses to follow the more important one. The duty that is not followed still retains its inherent good- or bad-making essence even though it does not decide the moral outcome of the case.
One problem that has historically been raised regarding Ross’ ethical theory and principlism in general is that of balancing principles against each other. This issue arises when two or more principles are relevant to a case and not all of them can be followed. There are two ways to approach such balancing. First, one can a priori rank the six (or so) fundamental principles. Second, one can balance or weigh principles against each other after they are instantiated in specific cases.Footnote 5 Traditionally, many philosophers have taken Ross to follow the second option of claiming that such balancing is not possible aside from particular instances in cases. However, David McNaughton claims that Ross does suggest that an a priori ranking, in this first sense, is possible; that is, some duties, such as nonmaleficence, are simply more “stringent” than others, such as beneficence.Footnote 6 McNaughton goes on to claim that this attempt by Ross to order his principles ultimately fails, but still suggests that, through discernment or good judgment, one can decide which principle should be given priority on a case-by-case basis. Jonathan Dancy interestingly suggests that this move to the second option is inevitable, for he argues that ethical pluralism necessarily drives one to a particularist epistemology because only through such an epistemology is one able to solve the problem of ordering a variety of fundamental properties or principles.Footnote 7
Finally, Ross’ ethical theory relies on intuitive induction to both understand and ground his six foundational principles. Intuitive induction is the process of learning general truths by examining a small number of specific examples of such truths. For example, one might recognize in certain cases that an action is right because it is just and, after seeing this in various cases, come to realize that justice is universally right and thus can be formulated as a pro tanto principle. This process is inductive because it relies on extrapolation from a small number of case examples, but it is intuitive in that it relies on a leap of understanding that mere induction cannot justify. This intuitive leap is typically argued as being justified because the truth being apprehended is self-evident and because the relationship between the moral and nonmoral properties that are being understood is a necessary one. However, this notion of self-evidence is widely controversial.
In this regard, Ross’ theory has traditionally been discounted because of his strong and explicit reliance on intuition as being foundationally justificatory. It was this issue more so than any other that caused his theory to fall out of theoretical favor. Additionally, this use of intuition as a justificatory foundation is connected to the prior problem of balancing. If intuition is the means to decide what principles one ought to follow, it also seems likely that it should be used to determine that one principle has more importance than another in a specific case (or overall). However, as Henry Richardson points out, “the problem with intuitive balancing [of principles] is not its unattainability but its arbitrariness and lack of rational grounding.”Footnote 8 While Ross’ principle-based ethical intuitionism is one of the more historically important defenses of principles, its reliance on intuition and problems with balancing ultimately brought it and similar theories into philosophical disrepute.Footnote 9
Contemporary Principlism
Although Ross’ theory is still often discussed, this dismissal of principlism by moral philosophers has continued more or less consistently to the present day. Interestingly enough, though, in the latter part of the twentieth century, a strong revival of principlism occurred in biomedical ethics. In fact, over the last thirty to forty years, principlism has become arguably the most important and influential theory in the field. This revival is due in large part to the work of Beauchamp and Childress and their influential textbook Principles of Biomedical Ethics.Footnote 10 Their four-principle theory emphasizing autonomy, beneficence, nonmaleficence, and justice is currently the most widely followed form of bioethical principlism.Footnote 11
Contemporary principlists follow Ross by claiming that morality is both pluralistic and complex and that this can best be understood by arguing that general principles informed by the various components of morality are foundational. Contemporary principlists tend to differ from Ross both in the number of principles that they support (for instance, Beauchamp and Childress support four, and others support either more or fewer) and in their understanding of the justification of such principles. Contemporary principlists’ understanding of how the principles they support are justified is varied and interesting,Footnote 12 but too complex to give more than a brief overview here. One approach is that taken by Ross, namely, to argue that principles are foundational and justified by intuition or by their self-evidence.
Another approach, and perhaps the most popular one, is to argue that the selected principles are not foundational, but rather mid-level and universally accepted. This can be understood in various ways, and contemporary principlists are often vague about which way one should take their claim. First, the claim could be that such principles are found or taken from commonsense morality and thus accepted by all normal or moral human beings. This claim hints that common morality is itself foundational, but it also leaves open the possibility that a traditional or other specific ethical theory is correct and explicatory of much or all of what we commonly believe.Footnote 13 Second, the claim could be that all important ethical theories, both traditional and contemporary, can and do commonly support general but not foundational principles. These principles are called mid-level because they lack the foundational quality that is typical of high-level principles, but they are still too general and theoretical to be lower-level (i.e., more specific and practical) claims. This claim suggests that ethical theories ultimately provide foundational support, but it makes no declaration as to which specific theory is correct. A third approach, which is the least popular, is to argue that a specific, typically traditional, ethical theory such as virtue ethics, utilitarianism, deontology, or natural law theory is correct and that its insights can be best applied to specific practical and especially biomedical cases using the selected principles.Footnote 14
Most contemporary principlists respond to the question of the grounding of principles by claiming that, since nearly everyone accepts them and they are useful for explaining and resolving ethical problems, we can take them at face value and apply them without worry, leaving the more theoretical and difficult work of providing their philosophical grounding to those with such inclinations or interests.Footnote 15 At times, they also argue that the four most popular principles of autonomy, nonmaleficence, beneficence, and justice can be individually derived from more developed ethical theories. For instance, justice can be derived from a Rawlsian social contract theory, autonomy can be derived from Kantian theories, beneficence is utilitarian in nature, and nonmaleficence might be drawn from virtue ethics or natural law theory. This suggestion illustrates contemporary principlists’ important assumption that commonsense morality is plural, but it is difficult to understand what it signifies for the problem of grounding principles.
Like Ross, contemporary principlists at times struggle to determine which principle should be followed when principles conflict in specific cases. In this regard, Beauchamp and Childress follow a balancing approach in the first few editions of their Principles of Biomedical Ethics, but they later switch to a model of making their principles more specific in order to avoid most conflict, while still relying on balancing principles in specific cases when conflict is unavoidable. However, contemporary principlists tend to focus more on a priori lexical ordering than Ross did. The most common way to perform lexical ordering is to assign each principle a priority and then, when applying them, attempt to completely satisfy the highest-ranked principle before others can be evaluated.Footnote 16 For example, John Rawls lexically orders the two principles of justice derived from the original position;Footnote 17 Tristram Engelhardt lexically orders autonomy over beneficence;Footnote 18 Bernard Gert, Charles Culver, and K. Danner Clouser partially lexically rank their principles, with nonmaleficence being ranked over their other principles;Footnote 19 and Robert Veatch proposes a partial lexical ranking, with non-consequentialist principles being lexically ranked over consequentialist principles.Footnote 20
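To make the mechanics of lexical ordering concrete, the following minimal sketch models it computationally. The ranking, the principle names, and the numerical satisfaction scores are purely hypothetical illustrations of the general idea, not a reconstruction of any of the ordering schemes just cited.

```python
# Illustrative sketch only: a toy model of lexical ordering of principles.
# The ranking and the scores are hypothetical, not drawn from Rawls,
# Engelhardt, Gert and colleagues, or Veatch.

# Principles listed from highest to lowest priority.
PRIORITY = ["nonmaleficence", "autonomy", "beneficence", "justice"]

def lexically_preferred(option_a, option_b):
    """Return True if option_a is at least as good as option_b lexically.

    Each option maps principle names to a degree of satisfaction (0.0 to 1.0).
    A higher-ranked principle settles the comparison before any lower-ranked
    principle is consulted at all.
    """
    for principle in PRIORITY:
        a, b = option_a.get(principle, 0.0), option_b.get(principle, 0.0)
        if a != b:
            return a > b  # decided here; lower-ranked principles never matter
    return True           # tied on every principle

treat = {"nonmaleficence": 0.9, "autonomy": 0.2, "beneficence": 0.9}
refuse = {"nonmaleficence": 0.9, "autonomy": 1.0, "beneficence": 0.1}

# The options tie on nonmaleficence, so autonomy decides the comparison, and
# the large difference in beneficence is never weighed against it.
print(lexically_preferred(refuse, treat))  # True
```

The sketch makes the characteristic feature of lexical ordering visible: no amount of satisfaction of a lower-ranked principle can compensate for even a small shortfall on a higher-ranked one.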
Beauchamp and the Nature of Principles
At this point, I would like to turn to a specific example of contemporary principlism and critically examine how such principles are both formulated and grounded. As one of the foremost proponents of principlism in bioethics, Beauchamp will serve as a useful model for this task. Beauchamp defines an ethical principle as “a fundamental standard of conduct on which many other moral standards and judgments depend.”Footnote 21 He goes on to claim that “a principle is a norm in a system of thought or belief, forming a basis of moral reasoning in that system.”Footnote 22 Significantly, Beauchamp argues that the concept of a principle involved in his work is different from that of ethical principles that have been used in the past. Historically speaking, a principle was viewed as being:
(1) General.
(2) Normative.
(3) Substantive.
(4A) Unexceptionable.
(5A) Foundational.
(6) Theory-summarizing.Footnote 23
In this regard, Beauchamp still accepts conditions (1)–(3), but argues that conditions (4A)–(6) do not apply to his new conception of principles. In addition, he rejects conditions (4A)–(6) for many of the same reasons that casuists would, namely that they immerse one in many of the problems of so-called deductivist ethical theories.Footnote 24 Beauchamp’s new conception of principles is taken from Ross’ formulation of pro tanto principles and claims that principles are as follows:
(1) General.
(2) Normative.
(3) Substantive.
(4B) Exceptionable (prima facie).
(5B) Nonfoundational.
At first glance, this understanding of principles would seem to do much to assuage several of the traditional concerns regarding principlism. In response, I would argue that there are two problems with this account of principles. First, I think that Beauchamp, like many contemporary casuists such as Albert Jonsen and Stephen Toulmin, is unclear about the grounding of his principles.Footnote 25 On the one hand, at times he suggests that his principlism can either be theory-free or rely on extant ethical theories for its support. I find both of these responses, however, to be questionable. On the other hand, he almost always refers to and uses his principles as if they were foundational. That is, although he claims that he is not taking a stance on moral theory, he also hints or explicitly claims that his four general principles are indefeasible, universally agreed upon, and the general locus of moral certainty. Thus, he seems to be providing the basic components of a moral theory and using such components to support the rest of his claims, but he is still unwilling to accept the burden of either developing the theory or providing theoretical support for it.
This brings me to my next concern, which is that important questions still remain regarding the nature of justification in principlism. Beauchamp answers this in part when he claims that principles receive support from Rawlsian considered judgments, which are “justified without argumentative support and are the proper starting points for moral thinking.”Footnote 26 Beauchamp explains that considered judgments have four necessary conditions: (1) A moral judgment occurs; (2) impartiality is maintained; (3) the person making the judgment is competent to make it; and (4) the judgment is generalizable to apply to all cases relevantly similar to those originally judged.Footnote 27 Moreover, Beauchamp claims that one needs a form of coherence theory as a background for considered judgments, to ascertain that all such judgments are compatible with one another. In particular, Beauchamp embraces a form of Rawlsian reflective equilibrium by claiming that “a proper theoretical ideal is to make principles and the relevant features of considered judgments coincide, perhaps through a process of mutual adjustment.”Footnote 28
Unfortunately, the two sets of conditions for considered judgments and principles appear to be conflicting, or at least unconnected. For instance, what are these considered judgments about? They could be judgments of cases, of principles, or of both. Beauchamp originally hints that such judgments are in reference to cases (see the fourth of his conditions above), but he later suggests that they apply to either cases or principles: “the considered judgments with which we begin in constructing an ethical theory themselves can be at any level of generality and may be expressed as principles, rules, maxims, ideals, models, and even as normative judgments about cases.”Footnote 29 Beauchamp goes on to claim that this system allows for a top-down or bottom-up approach: “if these considered judgments occur at a lower level of generality than principles, they support principles bottom up, rather than being supported by principles top down.”Footnote 30
However, Beauchamp’s foundational support of his principles still does not answer the question that it was supposed to answer, namely why his four principles are supported (or correct) as opposed to other conceptions of morality. For example, Beauchamp’s considered judgments do not rule out the traditional ethical theories that he opposes. For instance, Kant’s categorical imperative or utilitarians’ claim about happiness readily fulfills all four of Beauchamp’s rules regarding considered judgments. If Kantian or utilitarian claims are supported by considered judgments—which are apparently the only support for Beauchamp’s own principles—why can they not also function as rules or principles? This points toward a disconnect between Beauchamp’s understanding of acceptable moral principles and the judgments that ground them. The condition of exceptionability (non-universalizability) appears to be merely an ad hoc restriction to prevent the support of traditional monistic theories.
At this point, one might be tempted to resolve this issue by relying on the fourth of Beauchamp’s conditions for considered judgments, namely that “the judgment is generalizable to apply to all cases relevantly similar to those originally judged.”Footnote 31 At first glance, that condition seems reasonable and appears to rule out monist theories. However, upon closer inspection it appears to be little more than a platitude. Of course, one can only generalize to relevantly similar cases—if the case is not relevantly similar, then there is no basis for a generalization. The real conceptual difficulty is ascertaining the definition of “relevant similarity.” For example, Kant could make the impartial, competent, moral judgment that the categorical imperative is generalizable to all cases of human interaction, because all such cases are relevantly similar to each other in that they involve rational human beings. To a certain extent, monistic theories have an easier time explaining relevant similarity than pluralistic theories, for they can readily claim that their monist foundation is the criterion of similarity that classifies all ethical cases. How does the pluralist tell if a case is relevantly ethically similar to another? Beauchamp gives no answer to this, and indeed, principlism tends to overlook the question. Because of casuistry’s case-based structure, this question arises more frequently in discussions of casuistry, and many casuists argue that it can be answered by the method of analogy to paradigmatic cases.Footnote 32
Saving Principles with Specification
Turning back to principlism generally, the broad complaint raised against general principles is that they are often too abstract and indeterminate to be applicable to specific cases. For instance, the principles “do good,” “be just,” or “do the right thing” provide little or no practical guidance about what this means when one is deciding how to act or who one should aspire to be. The more specifically particularist complaint against principles is that they are simply wrong about the holism of moral reasons. If holism is correct, then most principles are inaccurate, for there are occasions, perhaps many occasions, on which the specific context of an action can change both the strength and direction of a property’s valence.Footnote 33 As Dancy notes, “the leading thought behind particularism is the thought that the behavior of a reason (or of a consideration that serves as a reason) in a new case cannot be predicted from its behavior elsewhere. The way in which the consideration functions here either will or at least may be affected by other considerations here present. So there is no ground for the hope that we can find out here how that consideration functions in general … nor for the hope that we can move in any smooth way to how it will function in a different case.”Footnote 34 This complaint again illustrates the fact that abstract general ethical principles are often too vague to provide meaningful action guidance. There are two obvious and popular ways to respond to this critique.
First, one can make principles more detailed so that they incorporate situation-specific information about their application, exceptions, and range. Second, one can incorporate some broad clause about exceptions that might refer to “all relevant properties” or “lack of other defeating conditions.” The desire to defeat particularist arguments by making one’s principles more detailed is widespread among generalists. The most common form of this move among principlists in the realm of bioethics is specification. In this regard, Jonsen defines specification as “the process of giving greater determinacy to indeterminate moral norms by adding to them qualifying clauses that both respect the intent of the original norm and also bring it closer to concrete cases.”Footnote 35
Specification was first named and detailed by RichardsonFootnote 36 and expanded and specifically applied to medical ethics by David DeGrazia.Footnote 37 After that point, other principlists in bioethics, such as Beauchamp and Childress, soon made use of the concept and terminology in later editions of Principles of Biomedical Ethics. As Richardson defines the concept, one norm specifies another if (1) everything that satisfies the former’s absolute counterpart will satisfy the latter’s absolute counterpart; (2) the former adds substantive qualifying clauses to the latter rather than simply shifting around its logical form or creating an exception; and (3) these clauses are relevant to the norm being specified rather than being extraneous riders.Footnote 38 In his earlier work, Richardson further elaborates on criterion (2) by claiming that p qualifies q by substantive means (and not just by converting universal quantifiers to existential ones) when it adds clauses indicating what, where, when, why, how, by what means, by whom, or to whom the action is to be, is not to be, or may be done, or how the action is to be described, or how the end is to be pursued or conceived.Footnote 39 Thus, specification is a formal method of making general ethical principles more detailed, while still incorporating and supporting the substance of their original claim. For example, the norm regarding respect for persons or respect for autonomy can be further specified to “respect the autonomy of competent patients by following their advance directives when they become incompetent.”Footnote 40
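To make Richardson’s criteria a bit more tangible, here is a minimal sketch that treats each norm’s absolute counterpart as a simple true-or-false predicate on a case. The case fields and the particular clauses are hypothetical illustrations of clause-adding specification, not Richardson’s or Beauchamp and Childress’ own formalism.

```python
# Illustrative sketch only: specification modeled as adding qualifying clauses
# to a norm, so that whatever satisfies the specified norm also satisfies the
# original norm (Richardson's extensional-narrowing condition). The case
# fields and clauses are hypothetical.

def respects_autonomy(case):
    """The general norm (absolute counterpart): the act respects the patient's wishes."""
    return case["act_respects_patient_wishes"]

def respects_autonomy_specified(case):
    """A specification: respect the autonomy of competent patients by following
    their advance directives when they become incompetent."""
    return (case["act_respects_patient_wishes"]
            and case["directive_made_while_competent"]
            and case["patient_now_incompetent"]
            and case["act_follows_advance_directive"])

case = {
    "act_respects_patient_wishes": True,
    "directive_made_while_competent": True,
    "patient_now_incompetent": True,
    "act_follows_advance_directive": True,
}

# Because the specification only adds conjunctive clauses, satisfying it
# entails satisfying the general norm; the converse need not hold.
assert respects_autonomy_specified(case) and respects_autonomy(case)
```

On this toy rendering, criterion (1) holds automatically because the added clauses are conjoined to the original condition, while criteria (2) and (3) correspond to the requirement that the added clauses be substantive and germane rather than mere logical reshuffling.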
Specifications arose from two perceived flaws in traditional principles. First, one must determine how to resolve conflicts between principles. Second, one must know how to make general principles relevant to specific cases. The traditional ways of dealing with these problems have been, respectively, to balance principles and to apply them deductively. As previously discussed, balancing principles appears to be largely intuitive in nature and thus presumptively irrational or difficult to justify to others. Deductively applying abstract principles is generally thought to be both difficult to perform in many circumstances and an inaccurate conception of how moral reasoning occurs.Footnote 41 Specification claims to resolve both the problem of conflict between principles and the problem of making such generalities usefully applicable to specific cases. In regard to the first concern, Richardson claims that specification can often, but not always, resolve conflicts between general principles by simply being more specific about how such principles apply to detailed situations. For example, if the principles of autonomy and beneficence conflict in a specific case—e.g., one where a patient autonomously refuses a beneficial medical treatment—both principles could be refined with, respectively, clauses about competence and whether the potential benefit is certain or unlikely, extreme or moderate, and so forth. If the specification is successful in this case, the substantive content of the more general principles will be seen as compatible.Footnote 42
Regarding the problem of application, Richardson claims that “once our norms are adequately specified for a given context, it will be sufficiently obvious what ought to be done” and goes on to state that “without further deliberative work, simple inspection of the specified norms will often indicate which option should be chosen.”Footnote 43 That is, if the specification is performed thoroughly and accurately, the resulting principles will often be detailed enough so that, once one understands or knows them, the situations in which they are relevant should be readily apparent.Footnote 44 Furthermore, the applicability of specified principles is predicated upon the assumption that their more general formulations are not absolute, and this prima facie quality arguably extends to specified principles as well.
Specification is an intriguing approach, so much so that even some critics have succumbed to its appeal. For example, Jonsen claims that when maxims, such as “do no harm” or “informed consent is obligatory,” are invoked, they represent, as it were, short-hand versions of the major principles relevant to the topic, such as beneficence and autonomy, “cut down to fit the nature of the topic and the kinds of circumstances that pertain to it,”Footnote 45 and he later states that “specification and casuistic analysis need each other to get close to the case.”Footnote 46 Carson Strong, another casuist critic of principlism, makes similar conciliatory moves, although he claims that specification relies on casuistry to assign priorities to principles, especially conflicting principles.Footnote 47
Additionally, similar to principlists in bioethics, generalists in the broader field of ethical theory also tend to take the specification approach, although they do not explicitly refer to it as such. For example, Martha Nussbaum suggests that it is the generality of rules, not their universality, that is problematic.Footnote 48 If rules could be made specific enough, many or most of the problems arising from them would vanish. Walter Sinnott-Armstrong argues that particularism only rules out simple generalities, not detailed or even very complex ones.Footnote 49 Kasper Lippert-Rasmussen also suggests that making principles more detailed is one way to avoid many of their traditional problems.Footnote 50 Most such generalists claim that although explanatory generalities have not yet been specified or detailed, they can be and eventually will be.
Turning back to the particularist holism argument, the specification move, if performed correctly, can accommodate some of its insights about the functioning of moral reasons. Even if moral reasons do rely on background, supporting, and defeating conditions, such conditions can presumably be built into specified principles. There is nothing in holism itself that suggests that this is impossible, for, although many moral reasons do act holistically, in practice this may make a difference only infrequently. If moral particularists are correct that any property can be important as a supporting or undermining condition, then the specification of such principles will be more problematic, and such specifications will be very complex, but still theoretically possible.
Specification and the Nature of Rules
I will shortly argue that, although specification may be a partially effective response against the holism argument, it is still problematic as a whole. However, before I turn to the particularists’ main argument against specification, there are first a few non-particularist problems with specification that I wish to briefly raise. On a broad level, one could argue that the trend toward specification illustrates a misunderstanding of the very nature of rules. H.L.A. Hart raises a similar point regarding the trend toward specification in terms of how rules work in the legal system.Footnote 51 He argues that, because of the nature of human language, rules will always be somewhat open-textured, with interpretation being needed to understand both the rules and how to apply them to particular circumstances. Because of this inherent open texture, rules cannot simply be deductively applied; rather, their use requires good judgment and discretion. The uncertainty arising from the open texture of rules often leads people to believe that rules ought to be formulated more strictly to resolve conflict and minimize the need for difficult choices. This leads to what Hart calls “rule formalism,” which holds that correct rules will be explicit enough to be applied without this uncertainty. In this regard, specification might be seen as part of this larger trend toward rule formalism that occurs in both law and ethics and, as Hart would argue, is based on a misunderstanding of how rules are able to function.Footnote 52
Aside from this general worry about the purpose of specification and the rule formalism it invites, other non-particularist critiques can be made. For example, specification apparently depends on prior theoretical decisions about the priorities of conflicting principles.Footnote 53 This can be taken in several ways. In one sense, this critique suggests that specification is only possible after balancing has occurred to ascertain what role principles should play in the specification process to avoid conflict. On this account, certain principles will be affected or changed more than others in this process, and there needs to be some reason why this occurs to some principles and not others. As Veatch points out, “the claim of those who specify seems to imply that within limited domains, principles can be lexically ranked,”Footnote 54 and yet such ranking, whether taken broadly or narrowly, has traditionally been viewed with skepticism as relying on intuition. In another sense, this critique rightly points out that there must be some method of comparing opposing specifications to each other, for there are many ways to actually specify a principle, and one would wish to be able to evaluate this process. There are certainly ways to avoid this critique, but the supporters of specification have not yet, to my knowledge, fully or successfully pursued them.Footnote 55
I am also skeptical as to how specified principles can both hold substantially true to the insights of their general predecessors and change to refine and improve our understanding of morality. Richardson makes both such claims for his specification, and yet they are incompatible. On the one hand, he strongly emphasizes what he calls extensional narrowing, namely that “everything that satisfies the specified norm must also satisfy the initial norm.”Footnote 56 A fundamental aspect of the specification is that it adds clauses to the initial norm, thus respecting its substantive content. This condition ensures that the initial general norm is completely satisfied and thus grounds the specification. On the other hand, Richardson claims that “what allows the idea of specification to offer a third way of reflectively coping with conflicts among principles is the fact that it offers a change in the set of norms” and that interpretation of principles, of which specification is an example, “modifies the content of a norm.”Footnote 57 Thus, specification is apparently supposed to satisfy the insight of the original norm (i.e., satisfy its absolute counterpart) and change its content. One could perhaps attempt to argue that general norms have both essential and non-essential contents and that the specification should support the essential content and change the non-essential content, or otherwise argue that general norms can be changed and supported at the same time, but I cannot envision any such arguments being either successful or compelling.
The Uncodifiability Thesis
While the previous critiques of the specification are not theory-specific, particularists will argue that the uncodifiability thesis also prevents the specification move detailed above. The uncodifiability thesis claims that there is no way for rules or principles to fully detail the relationship between moral and nonmoral properties. That is, within the context of the relationship between the moral and nonmoral sets of properties, the particularist denies that “there are any usefully, finitely specifiable conditionals of the form if M then N.”Footnote 58 Another way of expressing this claim is to say that the moral is shapeless in regard to the nonmoral. If this claim is true, then there is no reason to believe that moral properties are either defined by, or inextricably linked to, nonmoral properties, and even extremely detailed specified principles will not be successful in describing the relationship between the moral and the nonmoral.
For instance, consider the virtue of “kindness.” As a moral property, “kindness” supervenes upon certain nonmoral properties. The uncodifiability thesis claims that there is no single common property, or even a unique set of properties, of which all acts of kindness consist. This entails that, without understanding the evaluative concept of “kindness,” there is no way that someone can correctly identify the comprehensive set of instances of “kindness” by discerning patterns among the nonmoral properties of the items in such a set. John McDowell explains this in the following way:
However long a list we give of items to which a supervening term applies, described in terms of the level supervened upon, there may be no way, expressible at the level supervened upon, of grouping such items together. Hence there need be no possibility of mastering, in a way that would enable one to go on to new cases, a term which is to function at the level supervened upon, but which is to group together exactly the items to which competent users would apply the supervening term.Footnote 59
This concept of the uncodifiability of the relationship between the moral and the nonmoral is not a historically novel stance. For instance, Aristotle is at times understood to be espousing such a viewpoint when he claims that “matters concerned with conduct must be given in outline and not precisely … matters concerned with conduct and questions of what is good for us have no fixity, any more than matters of health. The general account being of this nature, the account of particular cases is yet more lacking in exactness; for they do not fall under any art or precept, but the agents themselves must in each case consider what is appropriate to the occasion, as happens also in the art of medicine or of navigation.”Footnote 60 Likewise, in his earlier dialogues, Plato points out the sheer difficulty involved in meaningfully defining virtue in nonmoral terms so that all respective virtuous acts are unified by the definition. For example, in the Euthyphro, the young Athenian Euthyphro offers a number of definitions of piety, ranging from “doing what the gods ask” to “giving the gods their due.” However, Socrates’ questions about each individual definition quickly illustrate that all of Euthyphro’s definitions are substantially incomplete, often because they are either too broad, thus encompassing acts that are not pious, or too narrow, thus excluding pious actions. McDowell highlights this issue as follows:
If one attempts to reduce one’s conception of what virtue requires to a set of rules, then, however subtle and thoughtful one was in drawing up the code, cases would inevitably turn up in which a mechanical application of the rules would strike one as wrong—and not necessarily because one had changed one’s mind; rather, one’s mind on the matter was not susceptible of capture in any universal formula.Footnote 61
Why is morality uncodifiable in relation to nonmoral properties? Many particularists want to be able to answer this question while still claiming that morality is objective. One possible response is to argue that morality is practice-based and thus intrinsically human and evaluative in nature. While this response has some viability—depending on how carefully one details the claim—it raises the worry that ethics becomes entirely subjective in nature. One way to address the issue of ethical relativism is to claim that, although moral properties are understandable only from a particular evaluative, likely human, perspective, this limitation is shared by all rational endeavors.
This avenue of thought is often traced back to Wittgenstein’s discussion of rule following in the Philosophical Investigations (1953, § 185). In particular, Wittgenstein suggests that our ability to understand complex practices and concepts, to keep going on, as it were, outruns any formulable rule or principle. That is, the rules and principles that supposedly ground practices or procedures of any type are too thin or content-poor to actually provide the grounding that we seek. On this account, practices are too richly textured to be susceptible to any finite collection of rules. Rather, when one is immersed in a practice, one develops skills that go beyond one’s experiences and understanding. Plato gives many examples of this in his dialogues, such as when Euthyphro could not define holiness in purely descriptive terms and when Laches failed to define courage, yet both men had the ability to understand the respective concepts and use them correctly.Footnote 62 Even for something as basic as, to use Wittgenstein’s example, extending a series of numbers by two (2, 4, 6, 8, 10, etc.), an individual’s understanding surpasses the grounding provided by the finite set of examples through which the rule is conveyed, examples that could equally have illustrated any number of other rules or practices. In this sense, Wittgenstein is arguing that all human endeavors—be they linguistic, scientific, mathematical, social, or moral—rely on skills that project understanding that is uncodifiable by abstract general rules or principles. Thus, although morality may be uncodifiable and practice-dependent, the same holds true for the broader epistemic realm as well, and yet endeavors in both areas can be rational because of our capacity to understand and participate in such uncodifiable practices.Footnote 63
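The underdetermination point can be illustrated with a small sketch: the two hypothetical rules below agree on the five examples a learner is actually shown, yet they prescribe different continuations, so the examples by themselves cannot fix which rule one is meant to “go on” with. Both rules are invented purely for illustration.

```python
# Illustrative sketch only: a finite set of examples underdetermines the rule
# that generated it (a toy rendering of Wittgenstein's "add two" example).
# Both candidate rules are hypothetical, chosen to agree on the shown cases
# and then diverge.

def add_two(n):
    return 2 * n                             # 2, 4, 6, 8, 10, 12, ...

def add_two_then_four(n):
    return 2 * n if n <= 5 else 2 * n + 4    # 2, 4, 6, 8, 10, 16, ...

shown = range(1, 6)  # the five examples actually given: 2, 4, 6, 8, 10
assert all(add_two(n) == add_two_then_four(n) for n in shown)

# Both rules fit every shown case, yet they diverge on the very next step, so
# the finite examples alone cannot tell a learner how to continue the series.
print(add_two(6), add_two_then_four(6))  # 12 16
```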
A related response as to why the moral is uncodifiable in regard to the nonmoral arises from the holism of moral reasons argument. In particular, the holism argument claims that many properties can change or reverse their valence due to the influence of other properties. Specification seeks to account for holism, but it can do so only if there is a limited, or finite, number of circumstances in which such holism actually occurs. Whether or not this is true appears to be simply assumed. However, I believe that we have good reason to think that holism actually pertains to a large, and likely infinite, number of circumstances. If this is true, then we should not expect to find an exact and definite set of rules codifying the relationship between the moral and the nonmoral. There are a potentially infinite number of nonmoral facts, as well as an infinite number of possible arrangements of such nonmoral facts into sets. Since moral facts supervene both on nonmoral facts and on sets of such facts, there are also an infinite number of possible arrangements of moral facts, as well as supporters, defeaters, and relevant background conditions that affect such facts. Because of this, one can never know a priori what weight a property has in a case or if it even applies at all, because there is always the possibility of defeaters being present, either in terms of prima facie principles or in terms of other moral facts. As John Arras notes, “real life does not announce the nature of problems in advance.”Footnote 64 Additionally, since there are a potentially infinite number of sets of nonmoral facts that moral facts can supervene upon, one cannot automatically assume that there is a way to formulate a finite principle that takes account of every possible organization of nonmoral facts, and this is what complete specification and codification purport to accomplish.
In this regard, Sinnott-Armstrong argues that the particularist uncodifiability thesis merely shows that human beings are simply limited in their formulation of practical codifiable principles and does not show that there is metaphysical uncodifiability.Footnote 65 However, if I am correct that there are a potentially infinite number of sets of combinations of nonmoral facts, then it would be metaphysically impossible to capture this complexity using finite principles. This is one place where Frank Jackson, Philip Pettit, and Michael Smith go astray. They explicitly assume that one can evaluate every individual set of an infinite number of sets of nonmoral properties, and they use this assumption to derive the conclusion that the uncodifiability thesis is false.Footnote 66 Unfortunately, that initial assumption is one of the very moves that particularists are arguing against, for it is impossible to evaluate every component of a potentially infinite number of sets.
Responses to Specification
I have argued that, because of the uncodifiability thesis, a complete specification of moral principles is unachievable. However, leaving that argument to the side for a moment, I believe that there is another particularist approach to argue that specification fails, namely, that the results of specification (assuming it is partially or wholly possible) are antithetical to our understanding of morality. The dilemma that principlists face is that their principles are either too general and thus lack sufficient content to be useful at all, or, if they are specified, they subsequently become too detailed.Footnote 67 What is wrong with a principle being too detailed? For one thing, the search for moral principles that are sufficiently specific to be useful will result in an enormous multiplication of the necessary principles. Suddenly, what was once one of generalism’s advantages, namely simplicity, disappears. This dramatically impacts both the ability to easily teach such principles and the ability to offer rational justification for them. We are now faced with hundreds or perhaps even thousands of principles, and problems regarding conflict between them and knowledge or understanding of them increase accordingly. Additionally, if principles are to be useful, it will be difficult to find a logical stopping point for their specification before it ends with unique principles for each particular situation.
Turning to the first problem, I would argue that the goal of complete specification is simply the wrong way to approach ethical theory. While the ideal of specified principlism might be understandable or attractive in some sense, nonetheless, the end result will be similar to the contemporary US tax code, which contains 70,000 pages of minute, detailed, and prima facie justified rules that attempt to account for every conceivable situation. On this approach, morality and contemporary medical or legal practice are transformed into an enormous, unwieldy bureaucracy built on a huge system of rules. Contemporary legal systems that follow a similar approach, such as those in France and Mexico, have become incredibly complex, notoriously inflexible, and riddled with internal tensions and inconsistencies. Following this trend, the goal of ethics essentially becomes the formulation of a vast, all-encompassing, omniscient rulebook from which, once all the variables are known, one can determine the correct answer to any given situation. Yet the ideal of complete specification becomes far too complex to provide useful action guidance or a coherent account of moral rationality. In particular, whenever any of the numerous specifications conflict, a new specification is needed to resolve that conflict, and so on, ad infinitum. In addition, reliance on rules to guide actions leads to a kind of “Third Man” regress, in which the application of rules needs guidance, which must be provided by other rules, which themselves need guidance and rules to be applied.Footnote 68 Finally, to account for new scenarios and circumstances, the rulebook would have to undergo constant editing and revision and would continually keep expanding.
In response, I would argue that the key to resolving the complex problems generated by rule following is not to increase the number and complexity of rules, but rather to focus instead on discretion, sensitivity, perception, and good judgment.Footnote 69 If anything, ethical conflict and complexity suggest that we need more flexibility, not less, in our approach to morality and in our application of commonsense ethical rules and principles. Perhaps some might claim that the specification of principles can be performed only partially, thus preventing morality from being completely codified and avoiding the creation of too many principles. In addition, as Beauchamp and Childress acknowledge, specification will likely have to end at some point, as some moral dilemmas may be ineradicable.Footnote 70 However, once one has started down the path of completely codifying the relationship between the moral and the nonmoral, it is difficult to find a principled place to stop the process. If the goal of ethical theory is to erase the gap between principles and practical judgments, then specification must continue until this end is met, and this will occur only when specified rules are available for every relevantly similar moral situation. This is where holism regarding moral reasons returns, for, even if specification can take account of such holism, it would require the formulation of an enormous number of very detailed rules to do so.Footnote 71
In the end, specified principlism seems to become a form of moderate moral particularism, for the formulation and application of a large number of amazingly complex principles ultimately devolves into (and derives from) particular case discussions. The problem arises because there is no a priori way of knowing which moral principles will be relevant to which specific set of circumstances, or what weight such principles might have on different occasions. This can only be ascertained by examining each case in specific detail to ascertain which specified principles hold true in that instance, but at that point one appears to be a principlist in name only. Additionally, any principles that result from specification, if they are truly applicable, will be too individually complex to be applicable to every situation.
Finally, attempts to specify the conditions of a principle will have to include the absence of defeaters and the presence of supporting conditions, for both apply to the holism of moral reasons that specification is meant to account for. The list of potential supporting and defeating conditions is immense, so much so that any principle that actually accommodates them will be paragraphs, if not many pages, in length. Since one of the main arguments for moral principles is that they rightfully summarize and simplify moral knowledge, this result is clearly problematic for supporters of principlism. Furthermore, as argued previously, such extremely complex principles will not be explanatory in the sense that principlists and moral particularists usually rely on, for the good-making characteristics of specific situations become indistinguishable from less important, but still significant, properties.
These problems lead me to the second response taken by principlists in order to account for the insights of moral particularism and specifically those suggested by holism, namely referring to background conditions of normality. A number of principlists claim that particularist arguments can be defused using disclaimers about normal or usual background conditions when ethical principles are formulated. For instance, one could claim that “all else being normal, killing is morally wrong” or “if there are no other significant facts, one ought not to lie.” Principles that reference background conditions differ from Rossian and similar principles by widening the realm of possible defeaters from other basic moral principles to also include nonmoral background and supporting conditions. In this way, the reference to background conditions of normality is partially effective in incorporating holistic insights, but it is more of a response to the critique that specified principles, if spelled out, are both too long and complex to be useful or effective as action-guiding principles. That is, the complex and explicit clauses of such specifications can presumably be summarized and understood by shortened disclaimers.
I raise this response at this point mostly to place it in its appropriate context as a generalist response to a specific particularist move, namely the uncodifiability argument. As a rejoinder to this generalist approach, the moral particularist can appeal to several previously used objections. First, if holism regarding reasons is correct, it is not likely that there is any clear set of normative background conditions in the broad sense that generalists appear to rely on here. Second, if uncodifiability is correct, there will be no way to quantify and clarify such background conditions accurately, and simply making the reference to such conditions vague rather than specific does not resolve this issue. Third, such disclaimers fall under the general argument against principles, namely, that they are too vague to offer any real or useful action guidance. One of the main purposes of specification was to reply to the objection of vagueness, and broad disclaimers about background conditions are a move away from specification in this regard.
Problems with Uncodifiability
Although the uncodifiability thesis, if true, presents problems for principlists, it is, not surprisingly, somewhat controversial. For instance, Onora O’Neill and Roger Crisp argue that all principles are going to be at least somewhat indeterminate and that this means that principle-supporting ethicists can unproblematically accept the uncodifiability thesis.Footnote 72 In other words, Kantians and utilitarians can accept the contemporary Wittgensteinian insights that at least partially support the uncodifiability thesis, while still remaining true to their original claims about the foundational aspects of morality and moral reasoning. The assumption here is that Wittgenstein is correct in claiming that no rules are fully determinate, nor need they be, because the use of practical wisdom or good judgment allows them to remain intelligible and action guiding.
Particularists and those who are sympathetic to aspects of their project have several responses to this line of argument. First, I would argue that they should welcome the new emphasis on practical wisdom or good judgment that has crossed over from the Aristotelian tradition to generalist ethical theories. The fact that there is increasing awareness of the importance of practical wisdom is encouraging for the field of ethics as a whole, for it promises a richer analysis of the topic than has, perhaps, been previously accomplished. Second, I believe that moral particularists can question how thoroughly monistic ethical theorists have taken this new awareness of practical wisdom and good judgment to heart. If one honestly believes, as some monists claim, that a single criterion is the foundational, essence-defining element of morality, then the amount of context-specific judgment needed to apply the principle is likely both too much and too little. It is too little in that, since one knows a priori that one component of any ethical situation is preeminently noteworthy, little judgment will often be needed to evaluate that component. It is too much because any judgment can be rationalized or justified if enough ingenuity is used to that end, and the desire to justify and apply a judgment at all costs is antithetical to an honest pursuit of knowledge.
Additionally, as Crisp suggests, traditional ethical theories that take the uncodifiability thesis to heart have a tendency to move to a tiered system, which emphasizes different aspects of morality at differing levels of theoretical and practical concern.Footnote 73 The classic example of such an approach is Henry Sidgwick, who argues that utilitarians ought to advocate that people either should not, or should try not to, think as utilitarians.Footnote 74 Unfortunately, I believe that this route creates a significant divide between theory and practice, both practically and theoretically. It is disingenuous to suggest that the absolute codification at the theoretical level either disappears or is ignored at the level of practice, and yet this is precisely what monistic theories that accept the uncodifiability thesis must attempt. One possible counterexample to this point can be found in mathematics, where immensely intricate formalized proofs are often bypassed, for reasons of simplicity and discursive ease, in favor of informal proofs. In such situations, absolute codification is often disregarded at the practical level. However, in mathematics, unlike in ethics, the formal codification can be readily provided and proven. Mathematicians could readily provide formal proofs if asked to; ethicists simply cannot. Additionally, although mathematicians often forgo formal proofs, such proofs, if given, would be theoretically consistent with the informal ones. In the ethical theories that I am discussing here, the theoretical commitments are prima facie, if not absolutely, inconsistent with the practical results that are permitted or condoned.
Turning to other critiques, Jackson, Pettit, and Smith offer one of the strongest arguments against the uncodifiability thesis, claiming that the relationship between moral and nonmoral properties must be codifiable if we are to be able to use evaluative predicates rationally.Footnote 75 If the evaluative truly is shapeless in terms of the descriptive, then morality is random, for there is nothing unifying the evaluative properties:
[The uncodifiability thesis] is not, for example, like Wittgenstein’s famous examples of a game and, more generally, of family resemblances. In these cases, it can be difficult to spot or state the pattern, but the fact that, given a large enough diet of examples, we can say of some new case whether or not it is, say, a game (or, perhaps, that it is indeterminate whether it is or not) shows that there is a pattern we can latch on to; our ability to project shows that we have discerned the complex commonality that constitutes that pattern.Footnote 76
If there is no pattern between the moral and the nonmoral, if the connection is totally random, then there is no semantic distinction between discussing right acts and wrong acts. In the end, there is no difference between the two. On this account, a rational semantic distinction is predicated upon some patterned commonality that distinguishes the different classes of actions. One possible response to this argument is to claim that the distinction upon which semantic terms are predicated is unanalyzable or non-natural.Footnote 77 This, however, simply becomes another way of stating G.E. Moore’s proposal that moral properties are sui generis, and it is not the novel idea that moral particularists claim to be proposing.Footnote 78
The better particularist response is to claim that there is a pattern, that the connection between the evaluative and the descriptive is not totally random, but that such a pattern is still uncodifiable. This proposal saves the rationality of moral language while also allowing moral particularists to support widespread commonsense moral claims that certain acts tend to be morally important, often in the same fashion, whereas others do not. Another possible response to the randomness critique is to argue for what Jackson, Pettit, and Smith refer to as restricted particularism, which claims that moral acts are unified solely by our response to them. Restricted particularism denies that there is any non-evaluative or purely descriptive pattern among moral acts and thus appears to follow the uncodifiability thesis at least in substance. However, the problem with restricted particularism, as Jackson et al. also point out, is that we believe that moral justification arises in part from the descriptive similarities and differences of individual cases, and thus it is appropriate to question why descriptively similar acts are evaluated as being morally different.
While the randomness critique is more imposing, it is by no means definitive. As Simon Kirchin points out, it is misguided to claim that denying a codifiable connection between the moral and the nonmoral leaves one with merely a new form of the old Moorean sui generis properties.Footnote 79 Particularist claims are not reducible to sui generis properties, for the properties that they support are not ontologically odd in this sense but rather are merely collections of sets of non-ethical properties.Footnote 80 By appealing to a sui generis conception (not a sui generis property), one can still argue that there is no pattern of descriptive features uniting the sets of situations that instantiate certain ethical properties; rather, the unifying feature is the sui generis conception itself. This points the way toward an escape from the objection raised against restricted particularism, the view that what unifies moral properties is our human response to certain nonmoral features. One can argue that people are responding to nonmoral features that are particularly important in specific situations as a result of sui generis concepts, which provide the unifying strand. The randomness critique is too quick to assume that restricted particularism takes essentially no account of nonmoral differences or similarities. In fact, it is such descriptive properties that are being responded to, even if they are not the essence-defining components of moral properties.
For example, one can draw an analogy between art and morality and follow a particularist viewpoint in claiming that what makes something artistically beautiful or good is uncodifiable. Just as the common similarity among all ethical acts is that they are evaluated as being moral, so too is the common denominator among all works of art the simple fact that they are evaluated as works of art. However, this does not entail that one cannot provide descriptive reasons for why a specific work of art is beautiful or good, such as harmony, symmetry, proportion, balance, consonance, clarity, and radiance, or why one affective response, such as compassion or amusement, is more appropriate than another. Rather, such reasons (and the similarities and differences that are integral to such evaluations) are in fact the key components of the evaluative response. For example, if one responds to Aeschylus’ Oresteia, Shakespeare’s King Lear, or Tolstoy’s Anna Karenina with howls of laughter, then one has not understood these works correctly. In the same way, moral properties are uncodifiable but still directly responsive and accountable to descriptive features and their interactions. In addition, one can extend the analogy to argue that it is not possible to specify in advance all possible works of art or music, nor is there a simple mechanical step-by-step procedure for creating beautiful works of art or music. If there were, any of us could acquire the mastery of Michelangelo or Bach.
Additionally, the randomness critique draws a false dichotomy by assuming that a pattern must be either absolutely certain or totally random. Some patterns are absolute; most, however, are not. There is an important difference between a pattern (a trend) and an absolute 100% correlation, which is essentially a definition. Particularists can support trends or patterns. What they must deny, however, is that moral properties are absolutely defined by certain and essential nonmoral properties. For instance, if one sees a thousand crows that are black, there is nothing that necessitates that the next crow one sees must be black. It could just as readily be white. There is a pattern of this property among crows, but it is neither one that holds 100% of the time (most, or 99.9%, of crows are black) nor one that allows absolute predictions to be made about future events or encounters. Even if one has observed every crow that exists or has existed, one will only be able to repeat the claim that the pattern regarding the color of crows is that 99.9999999% are black. Perhaps one can argue that this pattern differs from other patterns (like that of moral properties relating to nonmoral properties) because it involves contingent properties, such as genetic mutations or environmental factors that influence phenotypical expression, rather than necessary properties. Inductive patterns of this kind are the type of generalization that particularists can make regarding moral properties; such contingent and defeasible patterns can still be uncodifiable in the essence-defining sense that the randomness critique assumes is necessary.
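The contrast the crow example draws can be put schematically. The following is only an illustrative formalization, not something found in the particularist literature; the symbols M and D and the 0.999 figure are my own placeholders for a moral property, a descriptive condition, and an arbitrary high frequency:

\[
\text{Codified connection:}\quad \forall x\,\bigl(M(x) \leftrightarrow D(x)\bigr)
\qquad\qquad
\text{Defeasible pattern:}\quad \Pr\bigl(M(x)\mid D(x)\bigr) \approx 0.999
\]

On the first reading, some finite descriptive condition D both necessitates and exhausts the moral property M. On the second, D makes M highly likely without defining it, and no absolute prediction about the next case follows, which is all the particularist needs in order to support trends while rejecting codification.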
Conclusion
In conclusion, the uncodifiability thesis is, along with holism regarding reasons, one of the two foundational components of contemporary moral particularism. The uncodifiability thesis provides arguments against all traditional types of general ethical principles, but it specifically affects exceptionable (i.e., pro tanto) principles that can theoretically accommodate the particularist claim about the holism of moral reasons. As such, the uncodifiability thesis suggests that two common principlist trends today, namely specifying principles to accommodate exceptions and prefacing principles with broad disclaimers, are both problematic. Moreover, even if fully specified principlism were possible, I have argued that it would not be conducive to our understanding of morality or very helpful in making moral choices. In the end, rather than hoping to endlessly multiply the complexity and number of the moral principles and rules that we must follow, a better approach would be to focus on cultivating situation-specific and case-based practical wisdom and judgment.Footnote 81
Competing interest
The author declares none.