
Collective Rationality and Simple Utilitarian Theories

Published online by Cambridge University Press:  13 April 2010

Michael J. Almeida
Affiliation:
University of Texas at San Antonio

Extract

Much of recent moral philosophy has been concerned with the relation between individual rationality and individual obligation. Familiar game-theoretic analyses, in particular the Prisoner's Dilemma, at least suggest that unconstrained pursuit of rational self-interest leads to collective ill. The difficulty is nicely illustrated by comparing the preference-orderings of distinct individuals over the possible outcomes of their actions to their collective preference-ordering. Consider the following typical version of the Prisoner's Dilemma, where R2 and C2 represent respectively "R has confessed to the crime" and "C has confessed to the crime," and R1 and C1 correspond to "It is not the case that R has confessed" and "It is not the case that C has confessed."
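The article's payoff matrix is not reproduced in this extract. As a stand-in, here is a minimal sketch of a typical matrix of the kind described, with illustrative utilities of my own choosing: confessing is each prisoner's individually dominant choice, yet mutual confession is worse for both than mutual silence.

```python
# A typical Prisoner's Dilemma payoff matrix (illustrative utilities, not the
# article's own figures). Higher numbers are better for that player.
# R1/C1 = "has not confessed", R2/C2 = "has confessed".
payoffs = {
    ("R1", "C1"): (3, 3),   # neither confesses
    ("R1", "C2"): (0, 4),   # only C confesses
    ("R2", "C1"): (4, 0),   # only R confesses
    ("R2", "C2"): (1, 1),   # both confess
}

def best_reply_for_R(c_move):
    """R's utility-maximizing move, holding C's move fixed."""
    return max(("R1", "R2"), key=lambda r: payoffs[(r, c_move)][0])

def best_reply_for_C(r_move):
    """C's utility-maximizing move, holding R's move fixed."""
    return max(("C1", "C2"), key=lambda c: payoffs[(r_move, c)][1])

# Confessing dominates for each individual...
assert best_reply_for_R("C1") == best_reply_for_R("C2") == "R2"
assert best_reply_for_C("R1") == best_reply_for_C("R2") == "C2"

# ...yet the outcome of mutual confession is worse for both than mutual silence:
# individually rational play yields a collectively dispreferred result.
assert payoffs[("R1", "C1")] > payoffs[("R2", "C2")]
```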

Copyright © Canadian Philosophical Association 1994


Notes

1 The preference-orderings of both R and C are assumed to meet Harsanyi's conditions of individual and social rationality. That is, each orders the social options and all of the lotteries over the social options, and the ordering meets the assumptions of the expected utility theorem. Under these conditions, the preference-orderings of R and C can each be represented by a von Neumann-Morgenstern utility function. See Resnik, Michael D., Choices (Minneapolis: University of Minnesota Press, 1987). See also Harsanyi, John C., Essays on Ethics, Social Behavior and Scientific Explanation (Dordrecht: Reidel, 1976), and his "Cardinal Welfare, Individualistic Ethics and Interpersonal Comparisons of Utility," Journal of Political Economy, 63 (1955): 309–21.
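As a rough illustration of this rationality condition, the sketch below (with illustrative utilities and lotteries, not drawn from the article) ranks two lotteries over the social options by expected utility, which is how a von Neumann-Morgenstern representation orders them.

```python
# Illustrative von Neumann-Morgenstern utilities for R over the four social
# options; the specific numbers are assumptions made for this example.
utility_R = {("R1", "C1"): 3, ("R1", "C2"): 0, ("R2", "C1"): 4, ("R2", "C2"): 1}

def expected_utility(lottery, utility):
    """Expected utility of a lottery {outcome: probability} under a vNM utility."""
    return sum(prob * utility[outcome] for outcome, prob in lottery.items())

# R prefers one lottery to another exactly when its expected utility is higher.
lottery_a = {("R1", "C1"): 0.5, ("R2", "C2"): 0.5}
lottery_b = {("R1", "C2"): 0.8, ("R2", "C1"): 0.2}
print(expected_utility(lottery_a, utility_R))  # 2.0
print(expected_utility(lottery_b, utility_R))  # 0.8 -> R prefers lottery_a
```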

2 The paper is concerned with a recent objection to simple utilitarian theories (average or sum utilitarianism) for which Harsanyi's theorem is a welcome result. John C. Harsanyi has, of course, provided a proof that if the preference-ordering of the group (or, the planner) meets certain rationality conditions (essentially, the assumptions of the expected utility theorem) and certain ethical assumptions (anonymity and strong Pareto optimality), then the group ranking of any option (Ri, Cj) will be a simple additive function of the individual utilities for that option. See Harsanyi, Essays on Ethics, Social Behavior and Scientific Explanation, and his "Morality and the Theory of Rational Behavior," Social Research, 44, 4 (1977).
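A minimal sketch of the additive aggregation that Harsanyi's theorem licenses, using illustrative individual utilities rather than any figures from the article: the group's ranking of each option (Ri, Cj) is obtained by summing R's and C's utilities for that option.

```python
# Illustrative individual utilities; the numbers are assumptions for this example.
utility_R = {("R1", "C1"): 3, ("R1", "C2"): 0, ("R2", "C1"): 4, ("R2", "C2"): 1}
utility_C = {("R1", "C1"): 3, ("R1", "C2"): 4, ("R2", "C1"): 0, ("R2", "C2"): 1}

def group_utility(option):
    """Simple (sum) utilitarian value of an option for the group G{R,C}."""
    return utility_R[option] + utility_C[option]

# The group ordering ranks options by the sum of individual utilities,
# placing (R1, C1) strictly above (R2, C2) on these numbers.
ranking = sorted(utility_R, key=group_utility, reverse=True)
print(ranking)
```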

3 For an interesting discussion of the selection of a moral ordering from various possible orderings, see Sen, A. K., "Choice, Orderings and Morality," in Choice, Welfare and Measurement, edited by Amartya Sen (Cambridge, MA: MIT Press, 1982), and Sen, A. K., "Isolation, Assurance and the Social Rate of Discount," Quarterly Journal of Economics, 81 (1967): 112–24.

4 This is true, of course, in the absence of a fairly restrictive view about the nature of morality. Compare, for instance, Gauthier, David, "Reason and Maximization," in Moral Dealing: Contract, Ethics and Reason (Ithaca, NY: Cornell University Press, 1990), and his Morals by Agreement (New York: Oxford University Press, 1986).

5 The individual obligations of R and C are generated from both the utilitarian moral ordering and the principle (U), which is stated in Section 2 below.

6 The objection has various formulations, each of which I consider below. The most important formulations are due to Feldman, Fred, "The Principle of Moral Harmony," The Journal of Philosophy, 77 (1980): 166–79; Regan, Donald, Utilitarianism and Co-operation (Oxford: Clarendon Press, 1980); Barnes, Gerald, "Utilitarianisms," Ethics, 82 (1971): 56–64; Gibbard, Allan F., "Rule-Utilitarianism: Merely an Illusory Alternative?," Australasian Journal of Philosophy, 43 (1965): 211–20; Sobel, J. Howard, "Everyone's Conforming to a Rule," Philosophical Studies, 48 (1985): 375–87; and Feldman, Fred, Doing the Best We Can (Dordrecht: Reidel, 1986).

7 The principle (P1) corresponds to Feldman's (PMH4) and (P2) corresponds to his (PMH3). See Feldman, "The Principle of Moral Harmony." See also Feldman, Doing the Best We Can, pp. 147–78.

8 To avoid confusion, note that (P1) admits of a more precise, but more cumbersome, formulation than that which appears in the body of my text. Strictly speaking, in two-person cases (P1) is de dicto with respect to their actual obligations and states the following:

(P1): If every member, R and C, of a group G{R,C} fulfils her actual obligations, ORi & OCj (i, j > 0), then the result is at least as good for G{R,C} as it would have been had R or C failed to fulfil their actual obligations, ORi & OCj.

With obvious changes, a similar modification renders (P2) and (P3) precise for two-person cases.
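On this two-person reading, (P1) can be checked mechanically once a group value is assigned to each outcome. The sketch below uses illustrative values and an illustrative assignment of actual obligations; neither is drawn from the article.

```python
# Illustrative group values for G{R,C} over the four outcomes, and an
# illustrative assignment of the members' actual obligations.
group_value = {("R1", "C1"): 6, ("R1", "C2"): 4, ("R2", "C1"): 4, ("R2", "C2"): 2}
actual_obligations = ("R1", "C1")   # standing in for ORi & OCj

def satisfies_P1(group_value, actual_obligations):
    """(P1): joint fulfilment is at least as good for the group as any outcome
    in which R or C fails to fulfil the actual obligations."""
    fulfilled = group_value[actual_obligations]
    return all(fulfilled >= v for v in group_value.values())

print(satisfies_P1(group_value, actual_obligations))  # True on these values
```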

9 A similar principle is found in Regan, Utilitarianism and Co-operation. Regan's principle, Prop COP, states that "if every agent satisfies [a theory] T in all choice situations, then the class of all agents produce by their acts taken together the best consequences that they can possibly produce by any pattern of behavior" (pp. 4–5). Regan argues that simple (act) utilitarian moral theories violate Prop COP, and hence violate (P3). For other interesting discussions of the relation of simple utilitarian theories to (P3), see Barnes, "Utilitarianisms"; Sobel, J. Howard, "Rule Utilitarianism," Australasian Journal of Philosophy, 46, 2 (1968): 146–65; Gibbard, "Rule-Utilitarianism: Merely an Illusory Alternative?"; and Sobel, "Everyone's Conforming to a Rule."

10 Throughout the paper I assume that (U) (and each agent) selects from among a limited number of “pure” strategies. Also, throughout, actual obligations are held to be the objective obligations of each agent.

11 See Sobel, J. Howard, "The Need for Coercion," in Coercion: Nomos XIV, edited by Pennock, J. R. and Chapman, John W. (New York: Aldine/Atherton, 1972), pp. 148–77. There is also a good deal of literature on the problem of characterizing an "alternative" to an action, which is related to, but not central to, the discussion here. See, in particular, Bergstrom, Lars, The Alternatives and Consequences of Actions (Stockholm: Almqvist & Wiksell, 1966).

12 The intended interpretation of the English counterfactual is most accurately captured, in my view, by 1-standard, α-models (W, ∥ ∥, f), where W ≠ ∅, ∥ ∥ assigns to each sentence A a subset ∥A∥ of W (defined in the usual way for the truth-functional connectives), and f assigns to each w in W and sentence A a subset f(A, w) of W. Truth at a world, in a model, is defined as: w ∈ ∥A □→ B∥ iff f(A, w) ⊆ ∥B∥. To ensure the appropriate relation of similarity among worlds, the following restrictions are placed on f: (i) f(A, w) ⊆ ∥A∥; (ii) if f(A, w) ⊆ ∥B∥ and f(B, w) ⊆ ∥A∥, then f(A, w) = f(B, w); (iii) f(A ∨ B, w) ⊆ ∥A∥, or f(A ∨ B, w) ⊆ ∥B∥, or f(A ∨ B, w) = f(A, w) ∪ f(B, w); (iv) if w ∈ ∥A∥, then w ∈ f(A, w); and (v) if w ∈ ∥A∥, then w′ ∈ f(A, w) only if w′ = w. The 1-standard, α-models were developed by Lewis, David, in "Completeness and Decidability of Three Logics of Counterfactual Conditionals," Theoria, 37 (1971): 74–85.
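A small sketch of this truth condition, using an illustrative three-world model of my own (the worlds, valuation, and similarity orderings are assumptions chosen so that the selection function satisfies conditions (i), (iv), and (v)):

```python
# Illustrative model: extensions for sentences A and B, and a similarity
# ordering for each world (most similar first). None of this is the article's
# own model.
extension = {"A": {"w1", "w2"}, "B": {"w1"}}   # ||A|| and ||B||
similarity = {
    "w1": ["w1", "w2", "w3"],
    "w2": ["w2", "w1", "w3"],
    "w3": ["w3", "w1", "w2"],
}

def f(sentence, w):
    """Selection function: the closest sentence-world to w. Because each world
    heads its own similarity list, f satisfies conditions (i), (iv) and (v)."""
    for candidate in similarity[w]:
        if candidate in extension[sentence]:
            return {candidate}
    return set()

def counterfactual_true_at(antecedent, consequent, w):
    """w is in ||antecedent □→ consequent|| iff f(antecedent, w) ⊆ ||consequent||."""
    return f(antecedent, w) <= extension[consequent]

print(counterfactual_true_at("A", "B", "w3"))  # True: closest A-world to w3 is w1, a B-world
print(counterfactual_true_at("A", "B", "w2"))  # False: w2 is itself an A-world but not a B-world
```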

13 For alternative ways to define necessity and possibility in this context, see Lewis, David, Counterfactuals (Cambridge, MA: Harvard University Press, 1973).

14 There is a convenient semi-formal representation of principle (U) and, correspondingly, a convenient way to specify the truth-conditions for statements of obligation:

(U): OA out of {A, A1, …, An} iff (∀w)(∀w′)((w ∈ f(A, w)) ⊃ ((w′ ∈ f(A1, w) ∨ w′ ∈ f(A2, w) ∨ … ∨ w′ ∈ f(An, w)) ⊃ w ≥ w′))

On simple utilitarian theories of obligation, it is worth noting that obligation is not closed under implication. So, for instance, "OA" does not follow from "O(A & B)." This, of course, simply reflects the fallacy of weakening the antecedent for the counterfactual conditional, in terms of which obligation is defined. Familiar deontic distribution principles also do not hold for utilitarian obligation: for example, from "OA & OB," "O(A & B)" does not follow. This reflects, of course, the fallacy of strengthening the antecedent for the counterfactual conditional. The objections to (U) discussed below are independent of the invalidity of such inferences. For a more detailed discussion of the logical properties of principles analogous to (U), see Jackson, Frank, "On the Semantics and Logic of Obligation," Mind, 94 (1985): 177–95. See also Sobel, J. Howard, "Utilitarianism and Past and Future Mistakes," Noûs, 10, 2 (1976): 195–220.
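Read against the selection-function semantics of note 12, (U) admits a toy implementation. The worlds, their values, the extensions of the acts, and the similarity ordering below are all illustrative assumptions; the sketch simply checks whether every act-world is at least as good as every selected alternative-world.

```python
# Illustrative worlds, a betterness ranking over them (w >= w' iff value[w] >= value[w']),
# and extensions for an act A and its sole alternative A1.
W = {"w1", "w2", "w3", "w4"}
value = {"w1": 4, "w2": 3, "w3": 2, "w4": 1}
extension = {"A": {"w1", "w2"}, "A1": {"w3", "w4"}}

# Each world is most similar to itself; remaining worlds are ordered by name.
similarity = {w: sorted(W, key=lambda v: (v != w, v)) for w in W}

def f(sentence, w):
    """Closest sentence-worlds to w under the (illustrative) similarity ordering."""
    for candidate in similarity[w]:
        if candidate in extension[sentence]:
            return {candidate}
    return set()

def obligatory(act, alternatives):
    """(U): act is obligatory out of {act} plus the alternatives iff every act-world w
    is at least as good as every world selected from w by any alternative."""
    return all(
        value[w] >= value[w_prime]
        for w in W if w in f(act, w)          # by centering, w is in f(act, w) iff w is an act-world
        for alt in alternatives
        for w_prime in f(alt, w)
    )

print(obligatory("A", ["A1"]))   # True: the A-worlds outrank the selected A1-worlds
print(obligatory("A1", ["A"]))   # False: the A1-worlds are outranked by the closest A-worlds
```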

15 The case is due to Feldman, "The Principle of Moral Harmony," pp. 166–79. Structurally similar cases can be found in several other places, including Barnes, "Utilitarianisms," and Feldman, Doing the Best We Can, pp. 147–78.

16 It is perhaps worth noting that though (iv) is assumed to be true only in the broad, logical sense of possibility, this is for the sake of technical simplicity. We could just as well assume, without the corresponding operator, that it is true in some narrower sense of possibility.

17 The case is due to Feldman, "The Principle of Moral Harmony." For similar cases see Feldman, Doing the Best We Can, pp. 147–78.

18 For a discussion of situations structurally analogous to the mosquito case, see Regan, Utilitarianism and Co-operation. Regan considers violations of (P3) a serious problem for (U). But, I believe, there is some confusion here. A violation of (P3), as I show below, does not entail that there are situations in which the members of some group G would bring about a better result for G if some or all had failed to fulfil their actual obligations. For a discussion of similar cases, see Gibbard, "Rule-Utilitarianism: Merely an Illusory Alternative?"; Sobel, "Rule Utilitarianism"; Barnes, "Utilitarianisms"; Sobel, "Everyone's Conforming to a Rule"; and Feldman, Doing the Best We Can, pp. 147–78.

19 Of course, not just any set of actions is such that, had R and C brought those actions about, it would have been better for G{R,C}. Rather, the only relevant set of alternatives in the mosquito case is (R1, C1).

20 The violation of (P3) in the mosquito case does show the following: (i) the obligations of R and C might have been OR1 & OC1 rather than what they actually are, OR2 & OC2; (ii) if R and C had fulfilled OR1 and OC1, then it would have been better for G{R,C} than it actually is. But notice that it does not matter whether the actual obligations of R and C are (OR2 & OC2) or (OR1 & OC1): it is always better for G{R,C} if R and C fulfil their actual obligations rather than fail to do so. Complying with (U) is collectively rational.
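A minimal sketch of that comparison, under the de dicto reading of note 8 and with illustrative group values (the article's mosquito-case figures are not reproduced in this extract):

```python
# Illustrative group values for G{R,C}; (R1, C1) would have been best, so (P3)
# is violated when the actual obligations are OR2 & OC2.
group_value = {("R1", "C1"): 10, ("R1", "C2"): 0, ("R2", "C1"): 0, ("R2", "C2"): 6}

# Illustrative de dicto reading of "failing to fulfil the actual obligations":
# which obligations are actual depends on the circumstances, so a world in which
# both do R1 and C1 is one in which those very acts are obligatory, and it does
# not count as a failure world relative to OR2 & OC2.
failure_outcomes = {
    ("R1", "C1"): [("R1", "C2"), ("R2", "C1"), ("R2", "C2")],
    ("R2", "C2"): [("R1", "C2"), ("R2", "C1")],
}

# Whichever obligations are actual, joint fulfilment is at least as good for
# G{R,C} as any outcome in which R or C fails to fulfil them.
for obligations, failures in failure_outcomes.items():
    fulfilled = group_value[obligations]
    assert all(fulfilled >= group_value[outcome] for outcome in failures)
print("Complying with (U) is collectively rational on these values.")
```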

21 I am indebted to Richard F. Galvin and Mark Bernstein for their comments on an earlier draft of the paper. Thanks also to Jordan H. Sobel and two anonymous referees for their comments on a later draft.