
Absolute Fairness and Weighted Lotteries

Published online by Cambridge University Press:  20 November 2024

Lukas Tank*
Affiliation:
Institute of Philosophy, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
Nils Wendler
Affiliation:
Institute of Philosophy, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
Jan Peter Carstensen-Mainka
Affiliation:
Independent Scholar, Hamburg, Germany
*
Corresponding author: Lukas Tank; Email: [email protected]

Abstract

Weighted lottery proposals give guidance in rescue dilemma situations by balancing the demands of comparative and absolute fairness. While they do not advocate for saving the greater number outright, they are responsive to absolute fairness insofar as they show a certain sensitivity to the numbers involved. In this paper we investigate what criterion of absolute fairness we should demand weighted lotteries to fulfill. We do so by way of critically examining what is probably the most sophisticated weighted lottery on the market: Gerard Vong's Exclusive Composition-Sensitive (EXCS) lottery. We find that both the standard that seems most common in the debate and a different standard Vong uses to criticize Jens Timmermann's Individualist Lottery contradict basic demands placed upon weighted lotteries and are therefore unsuitable as necessary conditions for absolute fairness. We instead propose a purely gradual understanding of absolute fairness.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

1. Introduction

The debate about rescue dilemmas is one of the classics of modern moral philosophy. Whom to save when you cannot save all? And to what extent, if at all, should it matter that we at least save more rather than fewer people? The search for an answer to these questions has raged on for half a century.Footnote 1 Arguably the most prominent line of answers revolves around proposing various weighted lotteries. By being lotteries, rather than demands to save the greater number outright, they grant every person a chance to be rescued. By being weighted, they exhibit a certain sensitivity to the numbers involved: a tendency to save more people rather than fewer. They aim to offer the best of both worlds: comparative fairness, the fairness of how the claims of people are treated vis-à-vis each other, and absolute fairness, the fairness of how many claims to be rescued we satisfy in total.Footnote 2

In this paper we investigate what criterion of absolute fairness we should demand weighted lotteries to fulfill. We do so by way of critically examining what is probably the most sophisticated weighted lottery on the market: Gerard Vong's Exclusive Composition-Sensitive (EXCS) lottery.Footnote 3 In contrast to some other weighted lotteries, EXCS is able to give guidance even in situations in which at least one person belongs to more than one of the groups of people we could rescue, so-called ‘overlap cases’.Footnote 4 A further virtue of Vong's work lies in the fact that it is among the most concrete in naming demands of absolute fairness that weighted lotteries should fulfill. This makes engaging with it conducive to reaching a more general position on weighted lotteries and absolute fairness.

We argue that the standard of absolute fairness Vong most explicitly endorses as well as a generalized form of it are failed by EXCS. The same is true for the standard he seems to implicitly rely on in order to criticize a rival lottery proposal, Jens Timmermann's Individualist Lottery.Footnote 5 Furthermore, there are good reasons to reject both standards as demands of absolute fairness for all weighted lottery proposals. A more gradual understanding of absolute fairness in weighted lotteries, one that stops short of naming a necessary condition that all proposals must fulfill, also briefly features in Vong's work. We argue that it constitutes the best way to judge the absolute fairness of weighted lottery proposals, but also note its limitations.

2. The exclusive composition-sensitive (EXCS) lottery

We begin by introducing Vong's EXCS lottery before discussing the demands of absolute fairness he employs. EXCS is a weighted-lottery procedure that aims to provide guidance even in overlap cases by taking into account the composition of outcome groups, that is, groups of claimants that can be saved together.Footnote 6 It awards each claimant an equal baseline claim to be rescued and then answers the thorny question of how this claim should be distributed when claimants are part of more than one outcome group. Daniel Hausman gives a succinct summary of the fairly involved procedure the EXCS lottery proposes:

In EXCS lotteries, each of the n equal claimants is assigned an initial baseline weight of 1/n. Each individual j's baseline weight is distributed among the groups in which j is a member. The fraction of j's weight assigned to a group depends on how many members in the group are “distributively relevant” to j, divided by the total number of members distributively relevant to j in all the groups.' A member k of a group containing j is distributively relevant to j in that group if it matters to j how k's baseline probability is distributed among groups. If k is in some groups that do not include j, then it matters to j how k's baseline probability is distributed and k is distributively relevant to j. If every group containing k also contains j, then k is not distributively relevant to j. If an individual, j, is in only one group, then j's entirely [sic!] baseline probability is assigned to that group (Hausman 2022, 135–36).

What should be added is that it might happen that the selected group is a subgroup of one or more other groups. In that case the lottery is repeated only among those groups that contain the selected group. Illustrative examples for the application of this procedure are given by Vong and Hausman.Footnote 7

EXCS was designed to give guidance in overlap cases, such as Rescue 1:

Rescue 1

There are 1000 claimants in need of rescue, named 1–1000. We cannot save them all. What we can do is to save a group consisting of 1–500 or a group consisting of 501–1000. Also each pair of claimants is a possible group we could rescue. We could thus save 1 & 2, 1 & 3, … 11 & 608, …, 999 & 1000. This amounts to 499,500 pairs. That plus the two groups of 500 makes for 499,502 options for action. We must choose one of them. Which one should we choose?Footnote 8
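The pair count is simple combinatorics (a quick check on our part, not part of Vong's presentation):

$$\binom{1000}{2} = \frac{1000 \cdot 999}{2} = 499{,}500,$$

which, together with the two groups of 500, yields the 499,502 options for action.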

Here is what happens if we apply Vong's EXCS lottery to Rescue 1: it assigns a probability of approximately 33.3% to each of the larger outcome groups consisting of 1–500 and 501–1000 respectively.Footnote 9 The total probability of benefiting one of the larger outcome groups is thus ~66.7%. Accordingly, the probability that a pair of two claimants wins the lottery is ~33.3%.Footnote 10

Vong judges the performance of his EXCS lottery in Rescue 1 to be better than other lottery solutions. Specifically, he claims that “[t]he exclusive lottery procedure best promotes the (occasionally conflicting) considerations of comparative and absolute fairness, and it is thus the all-things-considered fair procedure”.Footnote 11 Before we try to show why Vong's treatment of absolute fairness is unconvincing, we need to explain what standards of absolute fairness Vong endorses.

3. Vong on absolute fairness

Vong explicitly names one necessary condition of absolute fairness that weighted lotteries must fulfill:

In equal conflict cases with outcome groups of different sizes and multiple largest outcome groups [that is, the largest possible group that can be saved], it is a necessary condition of absolute fairness that each of the largest outcome groups receives a higher chance of benefiting than any one of the nonlargest outcome groups (Vong 2020, 332).

It seems to us that this necessary condition of absolute fairness is a restricted form of a more general criterion of absolute fairness common in the literature:

Absolute Standard 1

It is a necessary condition of absolute fairness that lotteries award options for action with a higher number of expected lives saved a higher chance to be chosen than options for action with a lower number of expected lives saved.

We assert that all cases we discuss in relation to Absolute Standard 1 conform to the more restricted conditions Vong stipulates. Absolute Standard 1 is sometimes thought to define weighted lotteries in general.Footnote 12

While the restricted version of Absolute Standard 1 is what Vong most explicitly endorses, Absolute Standard 1 cannot be the standard of absolute fairness he uses to criticize rival weighted lotteries, in particular an iterated version of Jens Timmermann's Individualist Lottery.Footnote 13 In Rescue 1 Timmermann would advocate for a lottery with 1000 lots, each with an equal chance of being chosen and each representing one claimant. If, say, claimant 300 is chosen, we are committed to saving claimant 300. We can now iterate the lottery among all the people that could be saved together with claimant 300. If, say, 400 is chosen next, we must save these two. Since we realize that we can do so while saving the whole largest outcome group of which they are a part (1–500), we must do so in order not to waste lives. But if, say, 600 is chosen in the second round of the lottery, we must save the largest outcome group in which 300 and 600 take part – which in this case is simply the pair 300 and 600.

In Rescue 1 Timmermann's lottery would award each of the two groups of 500 a 24.975% chance of being chosen. The combined chance that one of the maximal groups of two is chosen is 50.050%. It is thus slightly more likely that we will save just two people in this case rather than 500. Vong uses this case to discard Timmermann's lottery in favor of EXCS which, as stated in section 2, implies a 66.7% chance that a group of 500 is chosen.Footnote 14 He states: “A theory that implies that this distribution of chances is fair is deeply implausible. In this case […] it would be clearly unfair to make it more likely that two people benefit rather than 500 people benefit.”Footnote 15
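These figures can be reproduced directly from the iterated procedure just described (a sketch of the arithmetic on our part; Vong and Timmermann do not spell it out): the first draw selects a member of the group 1–500 with probability 500/1000, and conditional on that the second draw selects another member of the same group with probability 499/999, in which case the whole group is saved. Hence

$$P(\text{save } 1\text{–}500) = \frac{500}{1000} \cdot \frac{499}{999} = 0.24975,$$

and likewise for 501–1000. The remaining probability, $1 - 2 \cdot 0.24975 = \frac{500}{999} \approx 0.50050$, is spread over the 250,000 maximal pairs, each of which accordingly receives a chance of roughly 0.0002%.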

The allegedly fatal flaw of Timmermann's proposal is clearly one of absolute fairness.Footnote 16 However, it cannot be a violation of Absolute Standard 1. Each of the largest outcome groups is awarded a much higher chance of being benefited than each of the non-largest groups (24.975% vs. ~0.0002%). It is only the combined chances of the non-largest outcome groups that eclipse the combined chances of the largest outcome groups. Lurking in the background here thus seems to be a different condition of absolute fairness.

One general form it could take is what we call Absolute Standard 2:

It is a necessary condition of absolute fairness that all of the largest outcome groups receive a higher combined chance of being chosen than all of the non-largest outcome groups combined.

To be clear, Vong does not explicitly commit himself to Absolute Standard 2, but his remark that “it would be clearly unfair to make it more likely that two people benefit rather than 500 people benefit” (our emphasis) strongly suggests something like Absolute Standard 2.

4. Why absolute standard 2 is unconvincing

In this section we will argue that EXCS, too, violates Absolute Standard 2. If one wants to say that Timmermann's iterated lottery is “deeply implausible” because it fails Absolute Standard 2, the same must be said about Vong's EXCS. In order to evaluate how EXCS fares with respect to Absolute Standard 2, we introduce a variant of Rescue 1:

Rescue 2

As in Rescue 1 there are 1000 claimants in need of rescue, named 1–1000 and we can again save a group consisting of 1–500 or a group consisting of 501–1000. Instead of being able to save each pair, this time we can save each triplet of claimants. We could thus save 1 & 2 & 3, 1 & 2 & 4, …, 998 & 999 & 1000, amounting to 166,167,000 triples in total. That plus the two groups of 500 makes for 166,167,002 options for action. We must choose one of them. Which one should we choose?

When applying EXCS to this case it turns out that the probability that any of the larger groups will be chosen is ~25.0% and therefore well below the value of 50%, which Vong uses in Rescue 1 to argue for the implausibility of Timmermann's Individualist Lottery. EXCS thus violates Absolute Standard 2.Footnote 17

Could Vong reply that Rescue 2 is so different from Rescue 1 that this violation does not result in the same highly negative verdict that he casts on Timmermann's iterated lottery? We do not see how he could. Rescue 2 is structurally identical to Rescue 1 and the numbers are tweaked ever so slightly. Once again we are faced with a decision between a few very large groups of people and many more much smaller groups. Even if Vong were to reject Absolute Standard 2 in its general form and only accept an application to cases similar to Rescue 1, Rescue 2 would still spell trouble for EXCS precisely because it is relevantly similar to Rescue 1.Footnote 18 We do not see how Timmermann's iterated lottery could be “deeply implausible” because of Rescue 1 and EXCS not be deeply implausible because of Rescue 2.

What has been said so far shows that Timmermann's lottery and Vong's EXCS fail Absolute Standard 2. This alone does not amount to a decisive case against Absolute Standard 2 as a necessary condition of absolute fairness. However, we now propose to discard Absolute Standard 2 altogether as a criterion of absolute fairness. For it can be shown that not only the proposals by Vong and Timmermann violate it, but that all lotteries that fulfill a central demand of comparative fairness do. In Vong's words: “In equal conflict cases each equally worthy claimant should have an equal positive impact on the outcome group selection procedure.”Footnote 19 This strikes us as a highly plausible demand. We call all lotteries that fulfill it ‘equal claim lotteries’. The impossibility for any equal claim lottery to fulfill Absolute Standard 2 in all relevant cases can already be shown in cases without overlap. These are particularly suited to prove as much because equal claim lotteries converge in such cases. Not only Timmermann and Vong, but also Kamm would effectively all demand the same in such cases.Footnote 20 Since every claimant is in one and only one outcome group, they will distribute their entire claim to that one group. As soon as there are more claimants in non-largest groups than in the largest groups, Absolute Standard 2 will always be violated.

As an example consider the case of 1500 claimants, where the outcome groups are the (non-overlapping) pairs 1 & 2, 3 & 4, …, 999 & 1000 and the group of 500 claimants 1001–1500. Every equal claim lottery will assign a probability of 33.3% to the group of 500 claimants and a probability of 66.7% that a pair will be chosen. They all therefore violate Absolute Standard 2.
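The numbers follow immediately from the equal-claim requirement (again a sketch of the arithmetic on our part): each of the 1500 claimants holds a claim of 1/1500 and, since there is no overlap, places it entirely on their unique outcome group, so

$$P(\text{group of } 500) = \frac{500}{1500} = \frac{1}{3}, \qquad P(\text{some pair}) = \frac{1000}{1500} = \frac{2}{3}.$$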

Furthermore, giving up on the demand that weighted lotteries should be equal claim lotteries instead of giving up on Absolute Standard 2 is no viable option either. Considerations of comparative fairness are what motivates the search for lottery solutions in general, and weighted lotteries in particular. And the call for an equal positive impact on the outcome group selection procedure strikes us as being commonly accepted among proponents of weighted lotteries as the central demand of comparative fairness for cases involving only claims to be rescued of equal strength.Footnote 21 Any claimant not granted as much can rightly feel that they were treated unfairly in a comparative sense. Hence within the search for the best weighted lottery proposal, we should stick with this demand of comparative fairness.Footnote 22

5. Why absolute standard 1 is unconvincing

Absolute Standard 2 has thus proved to be unconvincing. This brings us back to Absolute Standard 1 which demanded that options for action with a higher number of expected lives saved receive a higher chance to be chosen than options for action with a lower number of expected lives saved. Absolute Standard 1 failed to differentiate between Vong's proposal and what is arguably its main rival, Timmermann's Individualist Lottery. This, however, need not speak against Absolute Standard 1 as a necessary criterion of absolute fairness. Simply accepting Absolute Standard 1 is still very much a possibility. After all, it is arguably the most commonly cited demand of absolute fairness and authors like Hirose and Saunders use it to define the ‘weighted’ in ‘weighted lotteries’.Footnote 23

However, we now show how cases can be constructed in which all equal claim lotteries violate Absolute Standard 1. To this end we first spell out a consistency condition: any sensible equal claim lottery must be invariant under a relabeling of claimants and/or outcome groups. By this we mean that if two rescue dilemmas differ from each other only in the names of the claimants and/or outcome groups, then any equal claim lottery must produce the same probabilities in both cases, up to said renaming. We take this not to be a new criterion, but one so fundamental that it is implicitly assumed throughout the entire debate around rescue lotteries. For any lottery that violates it, there would be cases in which a claimant's probability of being rescued depends on their identity.
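One way to state this formally (our formulation, not a quotation from the literature): write $P(G)$ for the probability a lottery assigns to outcome group $G$. Then for every relabeling of the claimants, that is, every bijection $\sigma$ on the set of claimants that maps the family of feasible outcome groups onto itself, the lottery must satisfy

$$P(\sigma(G)) = P(G) \quad \text{for every feasible outcome group } G.$$

This is the form in which the condition is used in Rescue 3 below.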

With this criterion in mind we now introduce a new rescue case:

Rescue 3

The table below illustrates which claimants belong to which outcome groups. Groups G1–G4 are composed of claimants C1–C4 except that one of the claimants is missing in each group. Group G5 consists of claimants C5 and C6. Which group should we choose?

G1: C2, C3, C4
G2: C1, C3, C4
G3: C1, C2, C4
G4: C1, C2, C3
G5: C5, C6

Since claimants C5 and C6 only appear in group G5, any equal claim lottery will assign a probability of 2/6 to this group, which leaves a probability of 4/6 for one of the groups G1–G4 to be chosen. The groups G1–G4 are invariant under a relabeling of the claimants. In other words, if we shuffle the names of claimants C1–C4 amongst each other, the same groups G1–G4 will emerge. As discussed above, any sensible lottery must therefore assign equal probabilities to all these groups, resulting in a probability of 1/6 for each of them. This violates Absolute Standard 1, because the smaller group G5 receives a larger probability than each of the larger groups G1–G4 individually.
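As a concrete check, here is a minimal Python sketch (our own, based on Hausman's summary of EXCS quoted in section 2) of the first-round weight assignment; the repeat step for selected subgroups is omitted, which is harmless in Rescue 3 since no feasible group there is contained in another. Applied to Rescue 3 it reproduces exactly the probabilities derived above.

```python
from fractions import Fraction

def excs_first_round(groups, n_claimants):
    """First-round EXCS weights, following Hausman's summary (sketch only:
    the repeat step for groups that are subgroups of others is not implemented)."""
    baseline = Fraction(1, n_claimants)
    weights = [Fraction(0) for _ in groups]
    for j in range(n_claimants):
        js_groups = [i for i, g in enumerate(groups) if j in g]
        if len(js_groups) == 1:
            # j's entire baseline weight goes to j's only group
            weights[js_groups[0]] += baseline
            continue
        # k is distributively relevant to j iff some group contains k but not j
        def relevant(k):
            return any(k in g and j not in g for g in groups)
        counts = [sum(1 for k in groups[i] if relevant(k)) for i in js_groups]
        total = sum(counts)
        for i, c in zip(js_groups, counts):
            weights[i] += baseline * Fraction(c, total)
    return weights

# Rescue 3: G1-G4 each omit one of the claimants C1-C4; G5 = {C5, C6}.
# Claimants are numbered 0-5 here.
rescue3 = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}, {4, 5}]
print(excs_first_round(rescue3, 6))  # -> 1/6, 1/6, 1/6, 1/6, 1/3
```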

The only two ingredients for this argument to work are the presence of overlap and the demand that any lottery ought to fulfill the above mentioned consistency condition. We take this to be a significant finding, since, as stated above, Absolute Standard 1, either stated explicitly or used implicitly, is accepted in the debate as a rather uncontroversial demand of absolute fairness that weighted lotteries typically meet. We have thus shown that for overlap cases, Absolute Standard 1 is in contradiction with all equal claim lotteries that fulfill a highly plausible consistency condition and should therefore be discarded as a necessary condition of absolute fairness.

6. A gradual understanding of absolute fairness

Where does this leave us? Both Absolute Standard 1 and Absolute Standard 2 have proved to be unconvincing as necessary conditions for absolute fairness provided that one is not willing to sacrifice the equal treatment of claimants or basic demands of consistency under renaming. This in itself does not prove that there cannot be any such necessary condition, which is applicable across all different weighted lotteries and across all different kinds of scenarios, but it sheds some light on how difficult it might be to formulate it. One problem with Absolute Standards 1 and 2 seems to be that they are phrased in terms of relationships between the probabilities of the outcome groups being chosen. It might well turn out that the vast space of possible rescue dilemmas is large enough to provide counterexamples for any condition that is formulated in such a way or that conversely any condition that holds for all rescue dilemmas will be so weak as to be virtually useless in differentiating between different lottery proposals.

A separate structural problem in the search for such necessary conditions is that absolute fairness is itself a continuous concept. It concerns itself with whether more rather than fewer claimants are saved.Footnote 24 If the probabilities of the outcome groups (which are also continuous) are used to compute a measure for how well a proposal fares with respect to absolute fairness, it stands to reason that this measure should also be continuous. Necessary conditions are inherently unfit to deal with the nuances of such a continuous scale because they introduce binary thresholds into a scale that does not exhibit them. This leads to situations where a tiny difference between two lottery proposals can potentially lead to drastically different verdicts on their absolute fairness (if they happen to lie on different sides of the binary threshold) while, on the other hand, huge differences between them are not properly distinguished (if they both lie on the same side of the threshold). Both of these are undesirable properties for evaluating lotteries regarding a metric that does not exhibit any cut-off points.

Note that in general there can be pragmatic reasons for introducing binary thresholds to a continuous scale despite the above-mentioned problems. Even if defining a particular threshold strikes us as necessarily arbitrary, we may want to do so in order to have some reference point within the continuum that can serve as a pragmatic action-guiding criterion. For example, assume that scarce medical resources are allocated according to the likelihood that the patient receiving them will survive. Although the likelihood that someone survives is measured on a continuous scale, we can imagine that introducing a threshold below which patients are deemed non-eligible, say 30%, is reasonable for pragmatic reasons.

However, given its theoretical nature, the debate on the absolute fairness of weighted lotteries is such that it is wholly unclear what pragmatic reasons there might be to introduce a binary threshold into a continuous scale. Being able to say that weighted lottery proposals fare better or worse vis-à-vis each other in terms of absolute fairness is all that is needed to judge their merits.

One possible metric that can be used to quantify such a gradual understanding is the expected number of saved claimants, normalized by the maximum number of claimants that can be saved simultaneously. In each case this number will be between 0 and 1, with 1 being achieved if it is certain that the maximum number of claimants that can be saved simultaneously in the case in question are in fact saved and 0 if it is certain that no claimant is saved. According to this metric one lottery is more absolutely fair than another in a certain case if the expected fraction of saved lives is higher.
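A minimal sketch of how this metric can be computed from a lottery's outcome-group probabilities and group sizes (the function name and the grouping of all maximal pairs into a single size-2 entry are our own choices for illustration):

```python
def absolute_fairness_score(probabilities, group_sizes):
    """Expected number of rescued claimants, normalized by the size of the
    largest outcome group that could be saved simultaneously."""
    expected_saved = sum(p * s for p, s in zip(probabilities, group_sizes))
    return expected_saved / max(group_sizes)

# Rescue 1 under the iterated Individualist Lottery: each group of 500 has a
# 24.975% chance, and the remaining 50.05% is spread over pairs of size 2.
print(absolute_fairness_score([0.24975, 0.24975, 0.50050], [500, 500, 2]))
# -> about 0.50, matching the value reported for the Individualist Lottery below.
```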

Judged by this understanding of absolute fairness, Vong's EXCS does indeed perform rather well, and better than the Individualist Lottery. We cannot construct a single case in which the latter is superior to EXCS, and EXCS is superior to the Individualist Lottery in some cases, including Rescue 1 and 2.Footnote 25 We thus do not want to deny that EXCS does indeed excel in terms of absolute fairness. In Rescue 1 the metric we introduced above takes the values ~0.50 and ~0.67 for the Individualist Lottery and EXCS, respectively. In Rescue 2 the values are much closer together with the EXCS being ever so slightly ahead. The values are ~0.2538 for the Individualist Lottery and ~0.2541 for EXCS.Footnote 26

Finally, one must be aware that the gradual understanding of absolute fairness will not always rank one weighted lottery proposal as superior or inferior to a rival across all possible scenarios. Cases are easily imaginable in which the ranking changes depending on the scenario under consideration. However, we anticipate that clear trends will often emerge, especially if the selection of cases or the selection of lotteries is constrained by certain conditions like ‘no overlap cases’ or ‘only equal claim lotteries’.

7. Conclusion

The debate on rescue dilemmas and weighted lottery proposals has become more diverse in recent times. New types of cases have emerged and so have new lottery proposals.Footnote 27 In this paper we have used an investigation into arguably the most advanced weighted lottery, Gerard Vong's EXCS, to ask which demands of absolute fairness should guide us in assessing various lottery proposals. We have found that neither the standard that seems most common in the debate, what we call Absolute Standard 1, nor a different standard that Vong seems to use to criticize Jens Timmermann's Individualist Lottery, Absolute Standard 2 in our terms, is convincing. We have instead advocated for a purely gradual understanding of absolute fairness that allows us to compare weighted lotteries on a case-by-case basis but stops short of naming any general necessary condition of absolute fairness. We deem this less rigid proposal to be the more appropriate way of thinking about absolute fairness in a debate that has considerably grown in complexity in recent times.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S0953820824000207.

Acknowledgements

We would like to thank the anonymous reviewers at Utilitas for a very pleasant review process that improved the paper considerably and Gerard Vong for very helpful discussions regarding his lottery. We would also like to thank Jan Gertken, Christian Baatz, Christine Mainka, the Praktische Philosophie der Wirtschaft und Umwelt colloquium at Kiel University, Kirsten Meyer's colloquium at Humboldt-University Berlin, and the audience at the Tagung für Praktische Philosophie 2023 in Salzburg, Austria. Nils Wendler acknowledges financial support by the Federal Ministry of Education and Research of Germany (BMBF) via project adjust (grant number 01UU2001). Lukas Tank acknowledges the financial support by the BMBF in the framework of ASMASYS II 03F0962A-F, one of the research consortia of the German Marine Research Alliance (DAM) research mission “Marine carbon sinks in decarbonisation pathways” (CDRmare).

Footnotes

1 See Foot (1971) and Taurek (1977) for two highly influential early papers.

2 Vong (2020, 326), referring to the work of Temkin (2011), Hooker (2005), and Feinberg (1974), uses the term ‘absolute fairness’ in his paper. Since our paper heavily engages with Vong, we do so as well. If readers feel that the term ‘fairness’ is not adequate to describe an essentially consequentialist criterion on whether a proposal makes us save more rather than fewer claimants, they can simply substitute ‘greater number criterion’ for ‘criterion of absolute fairness’. Nothing of substance hinges on this choice of terminology when considering the cases discussed here.

3 See Vong (2020).

4 See Vong (2020, 323). While Vong was not the first person to bring up cases like these, he is, to our knowledge, the first to provide a systematic discussion of them. Cases that can be described as overlap cases also feature in Kamm (1993, ch. 6 & 7) and Meyer (2006).

5 See Timmermann (2004).

6 See Vong (2020, 323).

7 See Vong (2020) and Hausman (2022).

8 See Vong (2020, 342–44).

9 For brevity's sake, we omit all calculations in this paper and just present the results.

10 This ~33.3% does not refer to all of the 499,500 pairs but only to the 250,000 so-called “maximal pairs”. A maximal pair is a pair of two claimants who are not together in any larger outcome group. In Rescue 1, claimants 1 and 501 form a maximal pair, but claimants 1 and 2 do not, as they are both part of the larger outcome group 1–500. Such a non-maximal pair can win the first iteration of the lottery, but in that case, with 1 and 2 being a subgroup of another group, we ought to repeat the lottery only among those groups containing 1 and 2, which here is only one group, 1–500. So if a non-maximal pair wins the lottery, it is always the larger outcome group of which the pair is a part that will be rescued. The reason for this is that non-maximal pairs distribute their chances of winning to their respective larger outcome groups – a feature of Vong's EXCS lottery that has been critically discussed by Hausman (2022, 137).

11 Vong (2020, 344).

12 See Hirose (2015, 204) and Saunders (2009, 290).

13 See Timmermann (2004). For the idea of iterating Timmermann's lottery to deal with cases like Rescue 1, see Vong (2020, 338). It strikes us as plausible. The idea of iterated lotteries featured earlier in Meyer (2006, 145) and Saunders (2009, 287).

14 See Vong (2020, 344).

15 Vong (2020, 343).

16 Vong (2020, 343): “While the iterated individualist lottery procedure is comparatively fair in its treatment of equally worthy claimants, it does not promote absolute fairness as well as the exclusive lottery procedure does.”

17 Both Rescue 1 and Rescue 2 are specific instances of a more general class of cases: consider an arbitrary total number of claimants N, an arbitrary size n of the larger groups (as long as n is a divisor of N), and an arbitrary size k of the smaller groups. Assume that the larger outcome groups form a partition of the total set of claimants, i.e. every claimant is in one and only one of the larger groups, while for the smaller groups all possible combinations of k claimants are valid outcome groups. Rescue 1 is the case (N = 1000, n = 500, k = 2) and Rescue 2 is the case (N = 1000, n = 500, k = 3). The general formula for the probability that the EXCS lottery will benefit any of the larger groups is given by

$$P = \frac{n-1 + \binom{n-1}{k-1}(k-1)}{n-1 + \binom{N-1}{k-1}(k-1)}$$

The proof for this formula is given in the supplementary material.
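A quick numerical check of the formula (a sketch on our part; the function name is ours) reproduces the probabilities cited for Rescue 1, Rescue 2, and the cases in the next footnote:

```python
from math import comb

def p_large_group(N, n, k):
    """Probability, per the formula above, that EXCS benefits one of the
    larger groups in the class of cases described in this footnote."""
    return (n - 1 + comb(n - 1, k - 1) * (k - 1)) / \
           (n - 1 + comb(N - 1, k - 1) * (k - 1))

print(p_large_group(1000, 500, 2))  # Rescue 1: ~0.666
print(p_large_group(1000, 500, 3))  # Rescue 2: ~0.250
print(p_large_group(1500, 500, 2))  # ~0.499
```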

18 Using the formula from the previous footnote, we readily find other cases which are relevantly similar to Rescue 1 and violate Absolute Standard 2, e.g. (N = 1500, n = 500, k = 2), which results in P~49.9%, (N = 2000, n = 500, k = 2), which results in P~40.0% and (N = 1000, n = 250, k = 2), which results in P~39.9%.

19 Vong (2020, 331).

20 Timmermann's proposal would differ procedurally insofar as groups do not feature in the procedure itself. Practically, however, his proposal, too, converges with the others. The second lottery that Vong introduces (but doesn't endorse), the Equal Composition Sensitive Lottery (Vong 2020, 334), also does.

21 In addition to Vong, see, for example, Saunders (2009, 281–84).

22 We thank an anonymous reviewer for pressing us on this issue and proposing a solution.

23 See Hirose (2015, 204) and Saunders (2009, 290). Both wrote on the topic before overlap cases became prominent.

24 Vong's own work features a gradual understanding of absolute fairness when he writes that “absolute fairness is promoted by increasing the chances that more (rather than fewer) claimants will receive what they are due determined noncomparatively”. See Vong (2020, 326 & 332).

25 That a gradual understanding of absolute fairness plays a role for Vong when he compares his EXCS to Timmermann's Individualist Lottery is suggested when he writes that the Individualist Lottery “does not promote absolute fairness as well as the exclusive lottery procedure does”. See Vong (2020, 343).

26 What remains unjustifiable in our opinion is the kind of highly negative verdict Vong takes in relation to Timmermann's Individualist Lottery. A gradual understanding of absolute fairness is ill-suited to generate the verdict that a proposal is “deeply implausible” in terms of absolute fairness, especially if the supposedly much superior proposal produces very similar results in closely related cases.

27 In addition to Vong's work, the inclusion of cases involving probabilities by Rasmussen (2012) is a case in point.

References

Feinberg, Joel (1974): Noncomparative Justice, Philosophical Review 83, pp. 297–338.
Foot, Philippa (1971): The Problem of Abortion and the Doctrine of Double Effect, in: J. Rachels (ed.): Moral Problems, New York: Harper & Row, pp. 28–41.
Hausman, Daniel M. (2022): Constrained Fairness in Distribution, Journal of Ethics and Social Philosophy 22 (1), pp. 134–41.
Hirose, Iwao (2015): Moral Aggregation, Oxford: Oxford University Press.
Hooker, Brad (2005): Fairness, Ethical Theory & Moral Practice 8, pp. 329–52.
Kamm, Frances M. (1993): Morality, Mortality Vol. 1: Death and Whom to Save from It, Oxford: Oxford University Press.
Meyer, Kirsten (2006): How to Be Consistent Without Saving the Greater Number, Philosophy & Public Affairs 34 (2), pp. 136–46.
Rasmussen, Katharina Berndt (2012): Should the Probabilities Count?, Philosophical Studies 159, pp. 205–18.
Saunders, Ben (2009): A Defence of Weighted Lotteries in Life Saving Cases, Ethical Theory and Moral Practice 12 (3), pp. 279–90.
Taurek, John (1977): Should the Numbers Count?, Philosophy and Public Affairs 6 (4), pp. 293–316.
Temkin, Larry (2011): Justice, Equality, Fairness, Desert, Rights, Free Will, Responsibility and Luck, in: Knight, C. & Stemplowska, Z. (eds.): Responsibility and Distributive Justice, Oxford: Oxford University Press, pp. 51–76.
Timmermann, Jens (2004): The Individualist Lottery: How People Count, but Not Their Numbers, Analysis 64 (2), pp. 106–12.
Vong, Gerard (2020): Weighing Up Weighted Lotteries: Scarcity, Overlap Cases, and Fair Inequalities of Chance, Ethics 130 (3), pp. 320–48.