
Construction of aggregation paradoxes through load-sharing models

Published online by Cambridge University Press:  08 August 2022

Emilio De Santis*
Affiliation:
University of Rome La Sapienza
Fabio Spizzichino*
Affiliation:
University of Rome La Sapienza
*Postal address: University of Rome La Sapienza, Department of Mathematics, Piazzale Aldo Moro, 5, 00185, Rome, Italy.

Abstract

We show that load-sharing models (a very special class of multivariate probability models for nonnegative random variables) can be used to obtain basic results about a multivariate extension of stochastic precedence and related paradoxes. Such results can be applied in several different fields. In particular, applications can be developed in the context of the paradoxes which arise in voting theory. An application to the notion of probability signature, in the field of systems reliability, may also be of interest.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In the field of reliability theory, the term load-sharing model is mostly used to designate a very special class of multivariate survival models. Such models arise from a simplifying condition of stochastic dependence among the lifetimes of units which start working simultaneously, are embedded into the same environment, and are designed to support one another (or to share a common load or a common resource).

In terms of this restricted class of multivariate models, we will obtain some basic results about stochastic precedence, minima among nonnegative random variables, and related paradoxes. Such results can be applied in several different fields, even fields far from the probabilistic analysis of nonnegative random variables. In particular, direct applications can be developed in the study of the paradoxes arising in voting theory.

Let $X_{1},\ldots,X_{m}$ be m nonnegative random variables defined on the same probability space and satisfying the no-tie assumption $\mathbb{P}\!\left( X_{i}\neq X_{j}\right) =1$ , for $i\neq j$ with $i,j\in\lbrack m]\equiv\{1,\ldots ,m\}$ .

For any subset A $\subseteq [ m]$ and any $j\in A$ , let $\alpha_{j}(A)$ be the probability that $X_{j}$ takes on the minimum value among the variables $X_{i}$ with $i\in A$ , as will be formally defined by formula (3) below. In some contexts, $\alpha_{j}(A)$ can also be seen as a winning probability.

We concentrate our attention on the family $\mathcal{A}_{\left( m \right)}\equiv\{\alpha_{j}(A)\,:\,A\subseteq [ m],\text{ } j\in A\}$ .

When A contains exactly two elements, $A\,:\!=\,\{i,j\}$ say, the inequality $\alpha_{i}(A)\geq$ $\alpha_{j}(A)$ is a translation of the condition that $X_{i}$ stochastically precedes $X_{j}$ . This notion has been considered several times in the literature, possibly under different terminologies. In the last few years, in particular, it has attracted interest for different aspects and in different applied contexts; see e.g. the references [Reference Arcones and Samaniego2], [Reference Boland, Singh and Cukic6], [Reference De Santis, Fantozzi and Spizzichino8], [Reference Finkelstein and Hazra13], and [Reference Navarro and Rubio23]. The same concept is also related to a comparison of statistical preference; see e.g. the paper [Reference Montes, Rademaker, Perez-Fernandez and De Baets22], dealing with the framework of voting theory, and other papers cited therein.

One relevant aspect of this concept is the possibility of observing nontransitive behavior: namely, that for some triple of indexes i, j, h, the inequalities

\begin{equation*}\alpha_{j}( \{i,j\}) >\frac{1}{2}, \qquad \alpha_{h}(\{j,h\}) >\frac{1}{2}, \qquad \alpha_{i}( \{h,i\}) >\frac{1}{2}\end{equation*}

can hold simultaneously. These topics have been treated in several classical references, again using different types of language and notation; see e.g. [Reference Steinhaus and Trybula37], [Reference Trybula38]. Already at first glance, nontransitivity of stochastic precedence can be seen to be analogous to nontransitivity of collective preferences in comparisons between pairs of candidates, which is demonstrated by the Condorcet paradox. As is well known, a very rich literature has been devoted to this specific topic, starting from the studies developed by J. C. Borda and M. J. Condorcet at the end of the eighteenth century. In relation to the purposes of the present paper, a brief overview and a few helpful references will be provided in Section 5, below.

Other types of aggregation paradoxes also arise in voting theory, when attention is focused on elections with more than two candidates. Correspondingly, analogous probabilistic aggregation paradoxes can arise in the analysis of the family $\mathcal{A}_{\left( m\right) }$ , when comparing the winning probabilities $\alpha_{j}(A)$ for subsets containing more than two elements. See e.g. [Reference Blyth4], [Reference Saari28], [Reference Fishburn14]; see also [Reference De Santis, Malinovsky and Spizzichino9].

Another directly related context is that of intransitive dice (see e.g. [Reference Savage31], [Reference Hazla, Mossel, Ross and Zheng18], and references therein) and of the classic games among players who respectively bet on the occurrence of different events in a sequence of trials. In fact, some paradoxical phenomena can emerge in such a context as well. Relevant special cases are the possible paradoxes which arise in the analysis of times of first occurrence for different words of fixed length in a random sampling of letters from an alphabet. See e.g. [Reference Li19], [Reference Guibas and Odlyzko17], [Reference Blom and Thorburn5], [Reference De Santis and Spizzichino10], [Reference De Santis and Spizzichino11], and references therein. This particular field was the initial motivation for our own interest in these topics.

A common approach for studying and comparing paradoxes arising respectively in voting theory and in the analysis of the family $\mathcal{A}_{\left(m\right) }$ for m-tuples of random variables was worked out by Donald G. Saari at the end of the last century ([Reference Saari26], [Reference Saari27], [Reference Saari28]; see also [Reference Saari29]). An approach aiming to describe ranking in voting theory by means of comparisons among random variables has been developed in terms of stochastic orderings; see in particular [Reference Montes, Rademaker, Perez-Fernandez and De Baets22] and the references cited therein.

An important class of results proved by Saari aimed to emphasize that all possible ranking paradoxes can conceivably be observed. Furthermore, and equivalently, the same results can be translated into the language for ranking comparisons among random variables. Such results can be seen as generalizations of McGarvey’s theorem (see [Reference McGarvey21]), the classical result which shows the actual existence of arbitrarily paradoxical situations related to an analysis restricted to pairs of candidates.

As a main purpose of this paper, we obtain, in terms of comparisons of stochastic-precedence type among random variables, a result (Theorem 2) which leads to conclusions similar to those of Saari. From a mathematical viewpoint, however, this result is very different from Saari’s results; it is obtained by exploiting characteristic features of load-sharing models. In particular it allows us to construct load-sharing models which give rise to any arbitrarily paradoxical situation. Our work also has some aspects in common with the paper [Reference Montes, Rademaker, Perez-Fernandez and De Baets22], although its aims are different.

More detailed explanations of the meaning of our results will be provided in the next section and in Section 5.

More precisely, the plan of the paper is as follows.

In Section 2 we present preliminary results about the winning probabilities $\alpha_{j}(A)$ for m-tuples of lifetimes. In particular we consider the random indices $J_{1},\ldots,J_{m}$ defined by setting $J_{r}=i\Leftrightarrow X_{i}=X_{r:m}$ , where $X_{1:m},\ldots,X_{m:m}$ denote order statistics. We also point out how the family $\mathcal{A}_{\left( m\right)}$ is determined by the joint probability of $\left( J_{1},\ldots,J_{m}\right) $ over the space $\Pi_{m}$ of the permutations of [m]. Moreover, we introduce some notation and definitions of necessary concepts, such as that of a ranking pattern, a natural extension of the concept of a majority graph. A simple relation of concordance between a ranking pattern and a multivariate probability model for $\left( X_{1},\ldots, X_{m}\right)$ is also defined.

In Section 3 we recall the definition of load-sharing models, which can be seen as very special cases of absolutely continuous multivariate distributions for $\left( X_{1},\ldots,X_{m}\right) $ . In the absolutely continuous case, a possible tool to describe a joint distribution is provided by the set of the multivariate conditional hazard rate (m.c.h.r.) functions. Load-sharing models arise from imposing a remarkably simple condition on the form of such functions. Concerning the latter functions, we briefly provide basic definitions and some bibliographic references. We then define special classes of load-sharing models and show related properties that are of interest for our purposes. In particular we consider an extension of load-sharing to explicitly include an order-dependent load-sharing condition. In Theorem 1 we show that, for any arbitrary probability distribution $\rho_{m}$ over the space $\Pi_{m}$ , there exists a load-sharing model for $\left( X_{1},\ldots,X_{m}\right) $ such that

\begin{equation*}\mathbb{P}\big( J_{1}=j_{1},\ldots,J_{m}=j_{m}\big) =\rho_{m}\!\left(j_{1},\ldots,j_{m}\right)\end{equation*}

for $\left( j_{1},\ldots,j_{m}\right) \in$ $\Pi_{m}$ . Such a load-sharing model will generally be of the order-dependent type.

In terms of the definition of concordance introduced in Section 2, we state in Section 4 our result concerning aggregation paradoxes (Theorem 2). This result provides a quantitative method of explicitly constructing load-sharing models concordant with any assigned ranking pattern. We give its proof after presenting some technical preliminaries.

We conclude with a discussion in Section 5, where we mainly focus on the connection between our results and the study of paradoxes in voting theory.

2. Notation, preliminaries, and problem assessment

In this section we give the definitions, notation, and preliminary arguments needed to introduce the results which will be formally stated and proven in the sequel. Some further notation will be introduced where needed in the next sections.

We fix $m\in{\mathbb N}$ and denote by the symbol [m] the set $\{1,2,\ldots,m\}$ . The symbol $|B|$ denotes, as usual, the cardinality of a set B. For $m>1$ , we denote by $\widehat{\mathcal{P}}(m)$ the family of subsets B of [m] such that $|B|>1$ . We consider the nonnegative random variables $X_{1},\dots,X_{m}$ and assume the no-tie condition, i.e., for $i,j\in\lbrack m]$ with $i\neq j$ ,

(1) \begin{equation}\mathbb{P}\!\left(X_{i}\neq X_{j}\right)=1. \end{equation}

Henceforth, $X_{1},\ldots,X_{m}$ will sometimes be referred to as the lifetimes. The symbols $X_{1:m},\ldots,X_{m:m}$ denote the corresponding order statistics, and $\mathbf{J}\equiv\left( J_{1},\ldots,J_{m}\right) $ is the random vector whose coordinates are defined by

(2) \begin{equation}J_{r}=i\Leftrightarrow X_{i}=X_{r:m} \end{equation}

for any $i,r\in\lbrack m]$ . Related to the m-tuple $\left( X_{1},\ldots,X_{m}\right)$ , we consider the family

\begin{equation*}\mathcal{A}_{\left( m\right) }\equiv\big\{\alpha_{j}(A);\, A\in\widehat{\mathcal{P}}(m),j\in A\big\},\end{equation*}

where $\alpha_{j}(A)$ denotes the winning probability, formally defined by setting

(3) \begin{equation}\alpha_{j}(A)\,:\!=\,\mathbb{P}\Big(X_{j}=\min_{i\in A}X_{i}\Big). \end{equation}

For $k\in[ m]$ and pairwise distinct $j_{1},\ldots,j_{k}\in [ m]$ (i.e. $j_{i}\neq j_{\ell}$ for all $i \neq \ell $ with $i , \ell \in [k]$ ), we set

(4) \begin{equation}p_{k}^{(m)}(j_{1},\ldots,j_{k})\,:\!=\,\mathbb{P}\big(J_{1}=j_{1},\,J_{2}=j_{2},\,\ldots,\,J_{k}=j_{k}\big).\end{equation}

Next we focus attention on the probabilities $\alpha_{j}\!\left( A\right)$ in (3), on the probabilities of k-permutations (4) induced by $\left( X_{1},\dots,X_{m}\right) $ , and on the corresponding relations among them. Further aspects, concerning nontransitivity and other related paradoxes, will then be pointed out.

For $B \subset [m]$ and $k = 1, \ldots , m -|B|$ let us define

(5) \begin{equation}\mathcal{D}(B,k)\,:\!=\,\{(j_{1},\ldots,j_{k})\,:\,j_{1},\ldots,j_{k}\not \in B\ \ \text{and }\ j_{i}\neq j_{\ell}\in [ m] \text{ for } i \neq \ell \text{ with } i , \ell \in [k] \}.\end{equation}

When $k=m-|B|$ , $\mathcal{D}(B,k)$ is the set of all the permutations of the elements of $B^{c}$ . In particular, the set $\mathcal{D}(\emptyset,m)$ becomes $\Pi_{m}$ , the set of the permutations of all the elements of [m]. For $k=m$ in (4), we will simply write $p_{m}$ in place of $p_{m}^{\left( m\right)}$ and denote by $\mathbf{P}_{\mathbf{J}}$ the set of probabilities

(6) \begin{equation}\big\{p_{m}(j_{1},\ldots,j_{m});\,(j_{1},\ldots,j_{m})\in\Pi_{m}\big\}.\end{equation}

The probability $p_{k}^{(m)}(j_{1},\ldots,j_{k})$ can be computed by the formula

(7) \begin{equation}p_{k}^{(m)}(j_{1},\ldots,j_{k})=\sum_{(u_{k+1},\ldots,u_{m})\in\mathcal{D}(\{j_{1},\ldots,j_{k}\},m-k)}p_{m}(j_{1},\ldots,j_{k},u_{k+1},\ldots,u_{m}).\end{equation}

For $A=[m]$ one obviously has $\alpha_{j}(A)=\mathbb{P}(J_{1}=j)$ . For $A\subset [m]$ with $1<|A|<m$ and $j\in A$ , by partitioning the event $\big\{X_{j}=\min_{i\in A}X_{i}\big\}$ in the form

(8) \begin{equation}\Big\{X_{j}=\min_{i\in A}X_{i}\Big\}=\{J_{1}=j\}\cup\left( \bigcup_{k=1}^{m-|A|}\{J_{1}\not \in A,\,J_{2}\not \in A,\,\ldots,\,J_{k}\not \in A,\,J_{k+1}=j\}\right) , \end{equation}

one can easily obtain the following claim, which will frequently be used below when dealing with the probabilities $\alpha_{j}(A)$ .

Proposition 1. Let $X_{1},\dots,X_{m}$ be nonnegative random variables satisfying the no-tie condition. Let $A\in\widehat{\mathcal{P}}(m )$ and $\ell=|A|$ . Then for any $j\in A$ one has

(9) \begin{equation}\alpha_{j}(A)=\mathbb{P}\Big(X_{j}=\min_{i\in A}X_{i}\Big)=\mathbb{P}(J_{1}=j)+ \sum_{k=1}^{m-\ell}{\sum_{(i_{1},\ldots,i_{k})\in\mathcal{D}(A,k)} }\ \ p_{k+1}^{(m)}(i_{1},\ldots,i_{k},j). \end{equation}

As a consequence of (7) and Proposition 1 one obtains the following.

Corollary 1. The family $\mathcal{A}_{\left( m\right)}$ is determined by the set of probabilities $\mathbf{P}_{\mathbf{J}}$ .

It immediately follows that the conditional probabilities

(10) \begin{equation}\mathbb{P}\big(J_{k+1}=j_{k+1}|J_{1}=j_{1},\ldots,J_{k}=j_{k}\big)=\frac{p_{k+1}^{\left(m\right) }(j_{1},\ldots,j_{k+1})}{p_{k}^{\left( m\right) }(j_{1},\ldots,j_{k})} \end{equation}

are also determined by the set formed by the probabilities in $\mathbf{P}_{\mathbf{J}}$ .
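For the reader who wishes to experiment numerically with the relations (7) and (10), here is a minimal computational sketch. It assumes that the set of probabilities $\mathbf{P}_{\mathbf{J}}$ is stored as a Python dictionary indexed by permutations; the function names are ours and purely illustrative.

```python
from itertools import permutations

def p_k(P, prefix):
    """Marginal probability p_k^{(m)}(j_1,...,j_k), formula (7): sum of p_m over
    all permutations of [m] that start with the given prefix."""
    return sum(prob for perm, prob in P.items() if perm[:len(prefix)] == prefix)

def conditional(P, j_next, prefix):
    """Conditional probability P(J_{k+1} = j_next | J_1,...,J_k = prefix), formula (10)."""
    denom = p_k(P, prefix)
    return p_k(P, prefix + (j_next,)) / denom if denom > 0 else 0.0

# Toy case: m = 3 and the uniform distribution over the six permutations of [3].
m = 3
P = {perm: 1.0 / 6 for perm in permutations(range(1, m + 1))}
print(p_k(P, (1,)))             # 1/3
print(conditional(P, 2, (1,)))  # 1/2
```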

As mentioned, a central role in our work is played by comparisons of the type

(11) \begin{equation}\alpha_{i}(A)\geq\alpha_{j}(A), \quad \text{for }A\in\widehat{\mathcal{P}}(m), \ i,j\in A. \end{equation}

When $A\equiv\{i,j\}$ (with $i,j\in\lbrack m]$ ), the inequality appearing in (11) is just equivalent to the notion of stochastic precedence of $X_{i}$ with respect to $X_{j}$ , as mentioned in the introduction. Limiting attention to only subsets $A\subset [ m]$ with $|A|=2$ , we can associate a directed graph (or digraph) ([m], E) with the family $\mathcal{A}_{\left( m\right)}$ , by defining $E\subseteq [m]\times [ m]$ as the set of oriented arcs such that

(12) \begin{equation}(i,j)\in E\text{ if and only if }\alpha_{i}(\{i,j\})\geq\alpha_{j}(\{i,j\}).\end{equation}

In the recent paper [Reference De Santis7] it has been proven for an arbitrary digraph $G=([m],E)$ that one can build a Markov chain and suitable associated hitting times $X_{1},\ldots,X_{m}$ so that the relations (12) give rise to G. Such a construction is useful for certain applications within fields different from those considered here.

Borrowing from the language used in voting theory, $\left( \left[ m\right]\!, E\right) $ can be called a majority graph. Concerning the notion of digraphs, and the related notions of asymmetric digraphs, complete digraphs, tournaments, etc., we refer the reader to e.g. [Reference Bachmeier3] for explanations and more details from the viewpoint of voting theory.

More generally, for any fixed $A\in\widehat{\mathcal{P}}(m)$ , we can introduce a function $\sigma(A,\cdot)$ , where $\sigma(A,\cdot)\,:\,A\rightarrow\{1,2,\ldots,|A|\}$ , in order to describe a ranking among the elements of A.

If, for a given $A\in\widehat{\mathcal{P}}(m)$ , the values $\sigma(A,j)$ , $j\in A$ , are all different, then $\sigma(A,\cdot)\,:\,A\rightarrow\{1,2,\ldots,|A|\}$ is a bijective function; that is, $\sigma(A,\cdot)$ describes a permutation of the elements of A. Otherwise, we require the image of $\sigma(A,\cdot)$ to be $[\bar{w}]=\{1,\ldots,\bar{w}\}$ for some $\bar{w}<|A|$ .

We will say that a mapping $\sigma(A,\cdot)\,:\,A\rightarrow\{1,2,\ldots,|A|\}$ is a ranking function when its image is $[\bar{w}]=\{1,\ldots,\bar{w}\}$ for some $\bar{w}\leq|A|$ .

For $i,j\in A$ , we say that i precedes j in A according to the ranking function $\sigma(A,\cdot)\,:\,A\rightarrow\{1,2,\ldots,|A|\}$ if and only if $\sigma(A,i)<\sigma(A,j).$ We say that two elements are equivalent in A when $\sigma(A,i)=\sigma(A,j)$ . When some equivalence holds between two elements of A, namely when $\bar{w}<|A|$ , we say that $\sigma (A, \cdot ) $ is a weak ranking function.

Extending our attention to the family of all the subsets $A\in\widehat{\mathcal{P}}(m)$ , we introduce the following notation and definition.

Definition 1. For $m\geq2$ , a family of ranking functions

(13) \begin{equation}\boldsymbol{\sigma}\equiv\big\{\sigma(A,\cdot)\,:\,A\in\widehat{\mathcal{P}}(m)\big\}\end{equation}

will be called a ranking pattern over [m]. The collection of all ranking patterns over [m] will be denoted by $\Sigma^{(m)}$ . A ranking pattern containing some weak ranking functions will be called a weak ranking pattern. The collection of all ranking patterns not containing any weak ranking functions will be denoted by $\widehat{\Sigma}^{(m)}\subset\Sigma^{(m)}$ .

Example 1. Let $m=3$ and consider the ranking pattern $\boldsymbol{\sigma}$ defined by the following:

\begin{equation*}\sigma([3],3)=1, \qquad \sigma([3],1)=2,\qquad \sigma([3],2)=3,\end{equation*}
\begin{equation*}\sigma(\{1,3\},3)=1,\qquad \sigma(\{1,3\},1)=2, \qquad \sigma(\{2,3\},3)=1, \qquad \sigma(\{2,3\},2)=2,\end{equation*}
\begin{equation*}\sigma(\{1,2\},2)=1, \qquad \sigma(\{1,2\},1)=2.\end{equation*}

Here, for any $A\in\widehat{\mathcal{P}}\!\left( 3\right) $ , the image set $\sigma(A, A)$ of the ranking function $\sigma(A,\cdot)$ is equal to $\left[ |A| \right]$ . We consider also the modified ranking pattern $\boldsymbol{\sigma}^{\prime}$ such that $\sigma^{\prime}(A,i)=\sigma(A,i)$ for any $A\in\widehat{\mathcal{P}}\!\left( 3\right) $ and $i\in A$ , but $\sigma^{\prime}(\{1,2\},1)=\sigma^{\prime}(\{1,2\},2)=1$ ; that is, one imposes that the elements 1 and 2 are equivalent in the set $A\equiv\{1,2\}$ for $\boldsymbol{\sigma}^{\prime}$ . Then $\boldsymbol{\sigma}^{\prime}$ becomes a weak ranking pattern and $\sigma^{\prime}(\{1,2\},\{1,2\})=\{1\}$ with $|\{1\}|<2=|A|$ .

The concept of a ranking pattern is a direct extension of that of a majority graph. In other words, a ranking pattern can be seen as an ordinal variant of a choice function.

We now come back to the random variables $X_{1},\ldots,X_{m}$ and associate to them a ranking pattern $\boldsymbol{\sigma}$ corresponding to the following definition.

Definition 2. We say that the (possibly weak) ranking pattern $\boldsymbol{\sigma}\equiv\{\sigma(A,\cdot);\, A\in\widehat{\mathcal{P}}(m)\}$ and the m-tuple $(X_{1},\dots,X_{m})$ are p-concordant whenever, for any $A\in\widehat{\mathcal{P}}(m)$ and $i,j\in A$ with $i\neq j$ ,

(14) \begin{equation}\sigma(A,i)<\sigma(A,j)\Leftrightarrow\alpha_{i}(A)>\alpha_{j}(A),\end{equation}
(15) \begin{equation}\sigma(A,i)=\sigma(A,j)\Leftrightarrow\alpha_{i}(A)=\alpha_{j}(A).\end{equation}

We remind the reader that the quantities $\sigma(A,i)$ are natural numbers belonging to $[|A|]$ , whereas the quantities $\alpha_{j}(A)$ are real numbers belonging to [0, 1], and such that $\sum_{i\in A}\alpha_{i}(A)=1$ . Motivations for such a definition will emerge in the sequel.

Example 2. With $m=3$ , consider nonnegative random variables $X_{1},X_{2},X_{3}$ such that

\begin{equation*}p(1,2,3)=\frac{1}{21},\qquad p(2,1,3)=\frac{2}{21},\qquad p(3,2,1)=\frac{6}{21},\end{equation*}
\begin{equation*}p(3,1,2)=\frac{4}{21},\qquad p(2,3,1)=\frac{3}{21},\qquad p(1,3,2)=\frac{5}{21}.\end{equation*}

Thus we have

\begin{equation*}\alpha_{1}([3])=p(1,2,3)+p(1,3,2)=\frac{1}{21}+\frac{5}{21}=\frac{6}{21}\end{equation*}

and

\begin{equation*}\alpha_{1}(\{1,2\})=\frac{1}{21}+\frac{4}{21}+\frac{5}{21}=\frac{10}{21},\end{equation*}
\begin{equation*}\alpha_{1}(\{1,3\})=\frac{5}{21}+\frac{1}{21}+\frac{2}{21}=\frac{8}{21}.\end{equation*}

Similarly,

\begin{equation*}\alpha_{2}([3])=\frac{5}{21}, \qquad \alpha_{2}(\{1,2\})=\frac{11}{21}, \qquad \alpha_{2}(\{2,3\})=\frac{6}{21},\end{equation*}
\begin{equation*}\alpha_{3}([3])=\frac{10}{21}, \qquad \alpha_{3}(\{1,3\})=\frac{13}{21}, \qquad \alpha_{3}(\{2,3\})=\frac{15}{21}.\end{equation*}

Then the triple $(X_{1},X_{2},X_{3})$ is p-concordant with the ranking pattern $\boldsymbol{\sigma}$ considered in Example 1 above.
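The values above can also be checked mechanically. The following minimal sketch (with illustrative names of ours) encodes the six permutation probabilities of Example 2 and computes each $\alpha_{j}(A)$ by summing, in the spirit of Proposition 1, the probabilities of the failure orders in which j precedes every other element of A.

```python
from fractions import Fraction
from itertools import permutations

# Permutation probabilities p(j_1, j_2, j_3) of Example 2.
p = {(1, 2, 3): Fraction(1, 21), (2, 1, 3): Fraction(2, 21), (3, 2, 1): Fraction(6, 21),
     (3, 1, 2): Fraction(4, 21), (2, 3, 1): Fraction(3, 21), (1, 3, 2): Fraction(5, 21)}

def alpha(j, A):
    """Winning probability alpha_j(A): X_j = min_{i in A} X_i exactly when j appears
    before every other element of A in the failure order (J_1, J_2, J_3)."""
    return sum(prob for perm, prob in p.items()
               if all(perm.index(j) < perm.index(i) for i in A if i != j))

print(alpha(1, {1, 2, 3}))  # 2/7  (= 6/21)
print(alpha(1, {1, 2}))     # 10/21
print(alpha(1, {1, 3}))     # 8/21
print(alpha(3, {2, 3}))     # 5/7  (= 15/21)
```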

Remark 1. Of course, the same ranking pattern $\boldsymbol{\sigma}$ can be p-concordant with several different m-tuples of random variables. Actually, the joint distribution $\mathbb{P}_{\mathbf{X}}$ of $\left( X_{1},\ldots,X_{m}\right) $ determines the probability distribution $\rho$ over $\Pi_{m}$ induced by the set of probabilities $\mathbf{P}_{\mathbf{J}}$ in (6), and $\rho$ in turn determines $\boldsymbol{\sigma}$ .

As mentioned above, and as is generally well known, the phenomenon of nontransitivity may arise in the analysis of the set of quantities of the type $\alpha_{i}(\{i,j\})$ , for $i\neq j\in\left[ m\right]$ , and of the induced digraph $\left( \left[ m\right] ,E\right) $ . Different types of paradoxes may also be encountered when considering a ranking pattern $\boldsymbol{\sigma}$ . In particular, for a set $A\in\widehat{\mathcal{P}}(m)$ and a triple of indexes i, j, k $\in\left[ m\right]$ with i, j $\in A$ and $k\notin A$ , it may simultaneously happen that

(16) \begin{equation}\sigma(A,i)>\sigma(A,j),\qquad \sigma(A\cup\{k\},i)<\sigma(A\cup\{k\},j).\end{equation}

Looking in particular at (16), one can conceptually imagine ranking patterns which are quite astonishing and paradoxical, as in the next example.

Example 3. Let us single out, say, the element $1\in\left[ m\right]$ , and fix our attention on ranking patterns $\boldsymbol{\sigma}\in\widehat{\Sigma}^{\left( m\right) }$ satisfying the conditions $\sigma(A,1)=1$ for $A=\{1,i\}$ with $i\neq1$ , and $\sigma(A,1)=|A|$ whenever $|A|>2$ , $1\in A$ . In other words, the element 1 precedes any other element $i\neq1$ when only two elements are compared, and it is preceded by any other element when more than two elements are compared. One can wonder whether there exist probability distributions for $\left( X_{1},\ldots,X_{m}\right) $ which are p-concordant with such $\boldsymbol{\sigma}$ .

We may say that a ranking pattern manifests paradoxes of ‘multivariate’ stochastic precedence when nontransitivity and/or (16) emerges for some indexes.

The following question, however, naturally arises: does an arbitrarily given $\boldsymbol{\sigma}\equiv\{\sigma(A,i);\, A\in\widehat{\mathcal{P}}(m)\}$ really admit any concordant models? One can furthermore wonder whether it is possible, in any case, to explicitly construct one such model. In relation to this, we show in Section 4 that for any given ranking pattern $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ , one can construct suitable probability models which are p-concordant with it and which belong to a restricted class of load-sharing models (Theorem 2). It will also be shown in Section 3 that, for an arbitrarily given distribution $\rho$ over $\Pi_{m}$ , it is possible to identify probability distributions $\mathbb{P}_{\mathbf{X}}$ belonging to the class of order-dependent load-sharing models and such that the induced distribution of $\left( J_{1},\ldots,J_{m}\right)$ coincides with $\rho$ (Theorem 1).

3. Load-sharing models and related properties

In this section, our attention will be limited to the case of lifetimes admitting an absolutely continuous joint probability distribution. Such a joint distribution can then be described by means of the corresponding joint density function. An alternative description can also be made in terms of the family of multivariate conditional hazard rate (m.c.h.r.) functions. The two descriptions are in principle equivalent, from a purely analytical viewpoint. However, they each turn out to be convenient to highlight different features of stochastic dependence.

Definition 3. Let $X_{1},\ldots,X_{m}$ be nonnegative random variables with an absolutely continuous joint probability distribution. For fixed $k\in\lbrack m-1]$ , let $(i_{1},\ldots,i_{k},j)\in\mathcal{D}(\emptyset,k+1)$ . For an ordered sequence $0<t_{1}<\cdots<t_{k}<t$ , the multivariate conditional hazard rate function $\lambda_{j}(t|i_{1},\ldots,i_{k};\, t_{1},\ldots,t_{k})$ is defined by

\begin{equation*}\lambda_{j}(t|i_{1},\ldots,i_{k};\, t_{1},\ldots,t_{k})\,:\!=\,\end{equation*}
(17) \begin{equation}\lim_{\Delta t\rightarrow0^{+}}\frac{1}{\Delta t}\mathbb{P}\big(X_{j}\leq t+\Delta t|X_{i_{1}}=t_{1},\ldots,X_{i_{k}}=t_{k},X_{k+1:m}>t\big). \end{equation}

Furthermore, we put

(18) \begin{equation}\lambda_{j}(t|\emptyset)\,:\!=\,\lim_{\Delta t\rightarrow0^{+}}\frac{1}{\Delta t}\mathbb{P}\big(X_{j}\leq t+\Delta t|X_{1:m}>t\big). \end{equation}

For remarks, details, and general facts concerning this definition, see e.g. [Reference Shaked and Shanthikumar32], [Reference Shaked and Shanthikumar33], [Reference Spizzichino36], the review paper [Reference Shaked and Shanthikumar34], and references cited therein. It is pointed out in [Reference De Santis, Malinovsky and Spizzichino9] that the set of m.c.h.r. functions is a convenient tool for describing some aspects of the quantities $\alpha_{j}(A)$ in (3). It will turn out that such a description is especially convenient for our purposes, as well. This choice, in particular, leads us to single out the class of (time-homogeneous) load-sharing models and to appreciate the role they play in the present context.

For lifetimes $X_{1},\ldots,X_{m}$ , load-sharing is a simple condition of stochastic dependence which is defined in terms of the m.c.h.r. functions and which has a long history in reliability theory. See e.g. [Reference Spizzichino36] for references and for more detailed discussion and demonstrations. Such a condition amounts to imposing that the m.c.h.r. functions $\lambda_{j}(t|i_{1},\ldots,i_{k};\, t_{1},\ldots,t_{k})$ do not depend on the arguments $t_{1},\ldots,t_{k}$ . Here, we concentrate attention on the following specific definition.

Definition 4. The m-tuple $\left( X_{1},\ldots,X_{m}\right) $ is distributed according to a load-sharing model when, for $k\in\lbrack m-1]$ , $(i_{1},\ldots,i_{k},j)\in\mathcal{D}(\emptyset,k+1)$ , and for an ordered sequence $0<t_{1}<\cdots<t_{k}<t,$ one has

(19) \begin{equation}\lambda_{j}(t|i_{1},\ldots,i_{k};\,t_{1},\ldots,t_{k})=\mu_{j}\{i_{1},\ldots,i_{k}\},\qquad \lambda_{j}(t|\emptyset)=\mu_{j}(\emptyset),\end{equation}

for suitable positive set functions $\mu_{j}\{i_{1},\ldots,i_{k}\}$ and positive quantities $\mu_{j}(\emptyset)$ .

In (19) it is intended that, for fixed j $\in\lbrack m]$ , the function $\mu_{j}\{i_{1},\ldots,i_{k}\}$ does not depend on the order in which $i_{1},\ldots,i_{k}$ are listed. One can, however, admit the possibility that the function $\mu_{j}$ depends on the ordering of $i_{1},\ldots,i_{k}$ ; we give the following definition.

Definition 5. The m-tuple $\left( X_{1},\ldots,X_{m}\right) $ is distributed according to an order-dependent load-sharing model when, for $k\in\lbrack m-1]$ , $(i_{1},\ldots,i_{k},j)\in\mathcal{D}(\emptyset,k+1)$ , and for an ordered sequence $0<t_{1}<\cdots<t_{k}<t,$ one has

(20) \begin{equation}\lambda_{j}(t|i_{1},\ldots,i_{k};\, t_{1},\ldots,t_{k})=\mu_{j}(i_{1},\ldots,i_{k}), \qquad \lambda_{j}(t|\emptyset)=\mu_{j}(\emptyset),\end{equation}

for suitable functions $\mu_{j}\,:\,\mathcal{D}(\{j\},k)\rightarrow\lbrack 0,\infty)$ and positive quantities $\mu_{j}(\emptyset)$ .

A slightly different formulation of the above concept is given in the recent paper [Reference Foschi, Nappo and Spizzichino15]. Although it is not very natural in the engineering context of systems reliability, the possibility of considering order-dependence is potentially interesting both from a mathematical viewpoint and for different types of applications. In particular, order-dependent load-sharing models will appear in Theorem 1 below and have emerged in [Reference Foschi, Nappo and Spizzichino15], in relation to the construction of non-exchangeable probability models that still satisfy some symmetry properties implied by exchangeability.

When the order-dependent case is excluded, and for $I=\{i_{1},\ldots,i_{k}\}\subset [ m ] $ , it will be convenient also to use the notation $\mu_{j}(I)$ with the following meaning:

\begin{equation*}\mu_{j}(I)\,:\!=\,\mu_{j}\{i_{1},\ldots,i_{k}\}\end{equation*}

or $\mu_{j}(i_{1},\ldots,i_{k})\,:\!=\,\mu_{j}\{i_{1},\ldots,i_{k}\}$ . Obviously, it holds that $\mu_{j}(i_{1},\ldots,i_{k})=\mu_{j}\big(i_{\pi_{1}},\ldots,i_{\pi_{k}}\big)$ for any permutation $\pi\equiv\left( \pi_{1},\ldots,\pi_{k}\right) $ of the elements of $\left[ k\right] $ .

In addition to the concept of order-dependence, another way of weakening the condition (19) is by allowing ‘non-homogeneous load-sharing’. In this paper we do not need this type of generalization.

For a fixed family $\mathcal{M}$ of parameters $\mu_{j}\!\left(\emptyset\right) $ and $\mu_{j}(i_{1},\ldots,i_{k})$ , for $k\in\left[ m-1\right] $ and for $\left( i_{1},\ldots,i_{k}\right) $ $\in\mathcal{D}\!\left(\{j\},k\right)$ , set

(21) \begin{equation}M(i_{1},\ldots,i_{k})\,:\!=\,\sum_{j\in\lbrack m]\setminus\{i_{1},\ldots,i_{k}\}}\mu_{j}(i_{1},\ldots,i_{k}) \quad \text{ and } \quad M(\emptyset)=\sum_{j\in\lbrack m]}\mu_{j}(\emptyset). \end{equation}

As a relevant property of (possibly order-dependent) load-sharing models, one has $\mathbb{P}(J_{1}=j)=\frac{\mu_{j}(\emptyset)}{M(\emptyset)}$ , and the above formula (10) reduces to the following simple identity:

(22) \begin{equation}\mathbb{P}\big(J_{k+1}=j|J_{1}=i_{1},\,J_{2}=i_{2},\,\ldots,\,J_{k}=i_{k}\big)=\frac{\mu_{j}(i_{1},\ldots,i_{k})}{M(i_{1},\ldots,i_{k})}\end{equation}

(see also [Reference Spizzichino36] and [Reference De Santis, Malinovsky and Spizzichino9]). A very simple form then follows for the probability $p_{k}^{(m)}(i_{1},\ldots,i_{k})$ , as given in (4), for which we can immediately obtain the following.

Lemma 1. Let $\left( X_{1},\ldots,X_{m}\right)$ follow an order-dependent load-sharing model described by the family $\mathcal{M}$ . Let $k\in\lbrack m]$ and let $(i_{1},\ldots,i_{k})\in\mathcal{D}(\emptyset,k)$ . Then

(23) \begin{equation}p_{k}^{(m)}(i_{1},\ldots,i_{k})=\frac{\mu_{i_{1}}(\emptyset)}{M(\emptyset)}\frac{\mu_{i_{2}}(i_{1})}{M(i_{1})}\frac{\mu_{i_{3}}(i_{1},i_{2})}{M(i_{1},i_{2})}\ldots \frac{\mu_{i_{k}}(i_{1},i_{2},\ldots i_{k-1})}{M(i_{1},i_{2},\ldots i_{k-1})}. \end{equation}

Notice that, for $k = m$ , we have $p_{m}(i_{1},\ldots, i_{m-1},i_{m}) = p_{m-1}^{(m)}(i_{1},\ldots, i_{m-1}) $ ; thus $p_{m}(i_{1},\ldots ,i_{m}) $ is not influenced by $\mu_{i_{m}} (i_{1}, \ldots, i_{m-1})$ .

The previous result has already been stated as Proposition 2 in [Reference Spizzichino36], in relation to the special case when the order-dependence condition is excluded.
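As a small computational illustration of the product formula (23), the sketch below evaluates $p_{k}^{(m)}(i_{1},\ldots,i_{k})$ for a (possibly order-dependent) load-sharing model whose parameters are stored in a dictionary keyed by (ordered failure history, surviving unit); names and encoding are ours and purely illustrative.

```python
def p_k_ls(mu, m, prefix):
    """p_k^{(m)}(i_1,...,i_k) via the product formula (23).  mu[(history, j)] is
    mu_j(history), where history is the ordered tuple of already-failed units."""
    prob, history = 1.0, ()
    for i in prefix:
        # Total intensity M(history), as in (21).
        M = sum(mu[(history, j)] for j in range(1, m + 1) if j not in history)
        prob *= mu[(history, i)] / M
        history += (i,)
    return prob

# Toy case with m = 2: mu_1(empty) = 2, mu_2(empty) = 1, hence P(J_1 = 1) = 2/3.
mu = {((), 1): 2.0, ((), 2): 1.0, ((1,), 2): 1.0, ((2,), 1): 1.0}
print(p_k_ls(mu, 2, (1,)))    # 2/3
print(p_k_ls(mu, 2, (1, 2)))  # 2/3 (the last factor equals 1)
```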

As a consequence of Proposition 1 and the above lemma, we can state the following proposition.

Proposition 2. Let $\left( X_{1},\ldots,X_{m}\right) $ follow an order-dependent load-sharing model described by the family $\mathcal{M}$ . Let $A\in\widehat{\mathcal{P}}(m)$ with $\ell=|A|$ . Then for $j\in A$ one has

(24) \begin{align}\alpha_{j}(A) & =\mathbb{P}\Big(X_{j} =\min_{i\in A}X_{i}\Big)=\frac{\mu_{j}(\emptyset)}{M(\emptyset)}\nonumber\\& \quad +\sum_{k=1}^{m-\ell}{\sum_{(i_{1},\ldots,i_{k})\in\mathcal{D}( A,k)}}\frac{\mu_{i_{1}}(\emptyset)}{M(\emptyset)}\frac{\mu_{i_{2}}(i_{1} )}{M(i_{1})}\ldots \frac{\mu_{i_{k}}(i_{1},i_{2},\ldots i_{k-1})}{M(i_{1} ,i_{2},\ldots i_{k-1})}\frac{\mu_{j}(i_{1},i_{2},\ldots i_{k})}{M(i_{1} ,i_{2},\ldots i_{k})}. \end{align}

A direct implication of the above proposition is the following.

Corollary 2. Let $\left( X_{1},\ldots,X_{m}\right) $ follow an order-dependent load-sharing model described by the family $\mathcal{M}$ . For given $A\in\widehat{\mathcal{P}}(m)$ , the probabilities $\big\{\alpha_{j}(A)\,:\,j\in A\big\}$ depend only on $\{\mu_{h}(I)\,:\,I\subseteq A^{c},h\not \in I\}$ .
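Proposition 2 and Corollary 2 can be verified numerically on small examples. The sketch below (illustrative names and encoding of ours) draws random positive intensities for an order-dependent load-sharing model with m = 4, evaluates $\alpha_{j}(A)$ both through formula (24) and by exhaustive summation over the failure orders, and checks that the two computations agree.

```python
import random
from fractions import Fraction
from itertools import permutations

m, A = 4, {2, 3}
full = set(range(1, m + 1))

# Random positive intensities mu[(history, j)] = mu_j(history) of an
# order-dependent load-sharing model; history is an ordered tuple of failed units.
random.seed(1)
mu = {(history, j): Fraction(random.randint(1, 9))
      for k in range(m) for history in permutations(sorted(full), k)
      for j in full - set(history)}

def M(history):
    return sum(mu[(history, j)] for j in full - set(history))  # total rate, as in (21)

def p_of(order):
    """Probability of observing the (partial) failure order, product formula (23)."""
    prob = Fraction(1)
    for r, i in enumerate(order):
        prob *= mu[(order[:r], i)] / M(order[:r])
    return prob

def alpha_formula(j, A):
    """alpha_j(A) via formula (24) of Proposition 2."""
    total = mu[((), j)] / M(())
    for k in range(1, m - len(A) + 1):
        for seq in permutations(sorted(full - A), k):      # the set D(A, k)
            total += p_of(seq) * mu[(seq, j)] / M(seq)
    return total

def alpha_exhaustive(j, A):
    """alpha_j(A) by summing p_m over the failure orders in which j precedes A\\{j}."""
    return sum(p_of(perm) for perm in permutations(sorted(full))
               if all(perm.index(j) < perm.index(i) for i in A if i != j))

assert all(alpha_formula(j, A) == alpha_exhaustive(j, A) for j in A)
```

In accordance with Corollary 2, only intensities $\mu_{h}(I)$ with $I\subseteq A^{c}$ enter the computation performed by alpha_formula.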

Consider now an arbitrary probability distribution $\rho^{\left( m\right) }$ on the set of permutations $\Pi_{m}$ . The next result shows the existence of some order-dependent load-sharing model such that the corresponding joint distribution of the vector $\left(J_{1},\ldots,J_{m}\right) $ coincides with $\rho^{\left( m\right) }$ .

Theorem 1. For $m\geq2$ let the function $\rho^{\left( m\right) }\,:\,\Pi_{m}\rightarrow\lbrack 0,1]$ satisfy the condition

\begin{equation*}\sum_{(j_{1},\ldots,j_{m})\in\Pi_{m}}\rho^{\left( m\right) }(j_{1},\ldots,j_{m})=1.\end{equation*}

Then there exists an order-dependent load-sharing model described by a family of coefficients $\mathcal{M} $ such that

\begin{equation*}p_{m}(j_{1},\ldots,j_{m})=\rho^{\left( m\right) }(j_{1},\ldots,j_{m}) \qquad \text{for all } (j_{1},\ldots,j_{m})\in\Pi_{m}.\end{equation*}

Proof. For the fixed function $\rho^{\left( m\right) }$ and for $(j_{1},\ldots,j_{k})\in\mathcal{D}(\emptyset,k)$ we set

(25) \begin{equation}w(j_{1},\ldots,j_{k})=\sum_{(i_{1},\ldots,i_{m-k})\in\mathcal{D}(\{j_{1},\ldots,j_{k}\},m-k)}\rho^{\left( m\right) }(j_{1},\ldots,j_{k},i_{1},\ldots,i_{m-k}). \end{equation}

As suggested by the above formula (22), we now fix the family $\mathcal{M}$ formed by the parameters given as follows:

\begin{equation*}\mu_{j}(\emptyset)=w(j),\qquad \mu_{j_{2}}(j_{1})=\frac{w(j_{1},j_{2})}{w(j_{1})},\qquad \mu_{j_{3}}(j_{1},j_{2})=\frac{w(j_{1},j_{2},j_{3})}{w(j_{1},j_{2})},\end{equation*}
(26) \begin{equation}\ldots, \qquad \mu_{j_{m-1}}(j_{1},j_{2},\ldots,j_{m-2})=\frac{w(j_{1},\ldots,j_{m-1})}{w(j_{1},\ldots,j_{m-2})}. \end{equation}

In the previous formula we tacitly understand $0/0=0$ ; moreover, the parameters $\mu_{j_{m}}(j_{1},\ldots,j_{m-1})$ , which by the remark following Lemma 1 do not affect $p_{m}$ , can be fixed arbitrarily (e.g. equal to 1). For the order-dependent load-sharing model corresponding to $\mathcal{M}$ above, the proof can be concluded by simply applying Lemma 1.
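The construction (25)–(26) is straightforward to implement and to check numerically. The following self-contained sketch (illustrative names, small m, random $\rho^{(m)}$ ) builds the family $\mathcal{M}$ , sets the parameters at the last level equal to 1, and verifies through Lemma 1 that the induced permutation probabilities coincide with $\rho^{(m)}$ .

```python
import random
from itertools import permutations

m = 4
perms = list(permutations(range(1, m + 1)))

# An arbitrary strictly positive probability distribution rho on Pi_m.
raw = [random.random() + 0.01 for _ in perms]
rho = {perm: x / sum(raw) for perm, x in zip(perms, raw)}

def w(prefix):
    """w(j_1,...,j_k), formula (25): total rho-mass of the permutations starting with prefix."""
    return sum(prob for perm, prob in rho.items() if perm[:len(prefix)] == prefix)

# Parameters mu_j(history) as in (26); 0/0 is read as 0, and the parameters at the
# last level (|history| = m - 1), which do not affect p_m, are simply set equal to 1.
mu = {}
for perm in perms:
    for k in range(m):
        history, j = perm[:k], perm[k]
        if k == m - 1:
            mu[(history, j)] = 1.0
        else:
            denom = w(history)
            mu[(history, j)] = w(history + (j,)) / denom if denom > 0 else 0.0

def p_m(perm):
    """Permutation probability of the resulting model, product formula (23) of Lemma 1."""
    prob, history = 1.0, ()
    for i in perm:
        M = sum(mu[(history, j)] for j in set(perm) - set(history))
        prob *= mu[(history, i)] / M
        history += (i,)
    return prob

assert all(abs(p_m(perm) - rho[perm]) < 1e-12 for perm in perms)
```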

Example 4. Here we continue Example 2. By taking into account the assessment of the values $p(j_{1},j_{2},j_{3})$ therein and recalling the definition (25), we set

\begin{equation*}w(j_{1},j_{2},j_{3})=p(j_{1},j_{2},j_{3}) \qquad \forall(j_{1},j_{2},j_{3})\in\Pi_{3},\end{equation*}
\begin{equation*}w(1,2)=\frac{1}{21},\qquad w(2,1)=\frac{2}{21},\qquad w(3,2)=\frac{6}{21},\end{equation*}
\begin{equation*}w(3,1)=\frac{4}{21},\qquad w(2,3)=\frac{3}{21},\qquad w(1,3)=\frac{5}{21},\end{equation*}
\begin{equation*}w(1)=w(1,2)+w(1,3)=\frac{6}{21},\end{equation*}
\begin{equation*}w(2)=w(2,1)+w(2,3)=\frac{5}{21}, \qquad w(3)=w(3,1)+w(3,2)=\frac{10}{21}.\end{equation*}

From this, by applying (26), we obtain

\begin{equation*}\mu_{1}(\emptyset)=\frac{6}{21}, \qquad \mu_{2}(\emptyset)=\frac{5}{21}, \qquad \mu_{3}(\emptyset)=\frac{10}{21},\end{equation*}
\begin{equation*}\mu_{1}(2)=\frac{w(2,1)}{w(2)}=\frac{2}{5}, \qquad \mu_{1}(3)=\frac{w(3,1)}{w(3)}=\frac{2}{5},\end{equation*}
\begin{equation*}\mu_{2}(1)=\frac{w(1,2)}{w(1)}=\frac{1}{6}, \qquad \mu_{2}(3)=\frac{w(3,2)}{w(3)}=\frac{6}{10},\end{equation*}
\begin{equation*}\mu_{3}(1)=\frac{w(1,3)}{w(1)}=\frac{5}{6}, \qquad \mu_{3}(2)=\frac{w(2,3)}{w(2)}=\frac{3}{5}.\end{equation*}

Finally, we can set

\begin{equation*}\mu_{j_{1}}(j_{2},j_{3})=1 \qquad \forall(j_{1},j_{2},j_{3})\in\Pi_{3}.\end{equation*}

In this way we obtain the family $\mathcal{M}$ of parameters for a load-sharing model with the following property. Let $X_{1}^{\prime},X_{2}^{\prime},X_{3}^{\prime}$ be lifetimes jointly distributed according to such a model, and let $J_{1}^{\prime},J_{2}^{\prime},J_{3}^{\prime}$ denote the corresponding variables defined as in (2). Then we can conclude that, for $(j_{1},j_{2},j_{3})\in\Pi_{3}$ , $\mathbb{P}\!\left( J_{1}^{\prime}=j_{1},J_{2}^{\prime}=j_{2},J_{3}^{\prime}=j_{3}\right) =p(j_{1},j_{2},j_{3})$ , and that $\left( X_{1}^{\prime},X_{2}^{\prime},X_{3}^{\prime}\right) $ is p-concordant with the ranking pattern $\boldsymbol{\sigma}$ of Example 1.

Let us now concentrate on the non-order-dependent case. For fixed $I=\{i_{1},\ldots,i_{k}\}$ , consider the set formed by the $\left(m-k\right)$ values $\mu_{j}(i_{1},\ldots,i_{k})$ , for $j\notin\{i_{1},\ldots,i_{k}\}$ .

Generally such a set of values actually depends on I. But there are interesting cases where, for any subset $I\subset [ m]$ , the collection of coefficients $\left\{ \mu_{j}(I)\,:\,j\notin I\right\} $ depends on I only through its cardinality $|I|$ ; that is,

(27) \begin{equation}\left\{ \mu_{j}(I)\,:\,j\notin I\right\} \equiv\left\{ \mu_{j}\{1,2,\ldots,|I|\}\,:\,j\neq1,\ldots,|I|\right\} .\end{equation}

In such cases, there exist constants $\widehat{M}_{1}, \ldots ,\widehat{M}_{m}$ such that

(28) \begin{equation}M\!\left( i_{1},\ldots ,i_{k}\right) =M\!\left( 1,\ldots ,k\right) =\widehat{M}_{k}.\end{equation}

Furthermore, we set $\widehat{M}_{0}=M(\emptyset)$ .

The family $\mathcal{M}$ constructed in the proof of Theorem 1 generally corresponds to an order-dependent load-sharing model, which rules out the condition in (27). On the other hand, even if very special, the class of models satisfying (27) will have a fundamental role in the next section.

4. Existence and construction of load-sharing models concordant with ranking patterns

Let $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ be an assigned ranking pattern. In this section we aim to construct, for lifetimes $X_{1},\ldots ,X_{m}$ , a probabilistic model p-concordant with $\boldsymbol{\sigma}$ , according to the definition given in Section 2. In other words, by looking at the probabilities $\alpha_{j}(A)$ , we seek joint distributions for $X_{1},\ldots ,X_{m}$ such that the equivalence in (14) holds. The existence of such distributions will in fact be proven here. Actually we will constructively identify some such distributions, and this task will be accomplished by means of a search within the class of load-sharing models with special parameters satisfying the condition (27).

More specifically, we introduce a restricted class of load-sharing models by starting from the assigned ranking pattern $\boldsymbol{\sigma}$ . Such a class fits with our purposes and is defined as follows.

Definition 6. Let $\varepsilon(2),\ldots ,\varepsilon(m)$ be positive quantities such that

\begin{equation*}(\sigma(A,i)-1)\varepsilon(|A|)<1\end{equation*}

for all $A\in{\widehat{\mathcal P}}(m)$ and $i\in A$ . An $LS\!\left(\boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ model is defined by parameters of the form

(29) \begin{equation}{\mu}_{i}([m]\setminus A)=1-(\sigma(A,i)-1)\varepsilon(|A|), \qquad A\in{\widehat{\mathcal P}}(m),\qquad i\in A. \end{equation}

Finally, for $A=\{i\}$ we set $\mu_{i}([m]\setminus A)=1$ , so that $\varepsilon(1)=0$ .
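The parameters of an $LS\!\left(\boldsymbol{\varepsilon},\boldsymbol{\sigma}\right)$ model can be written down directly from (29). In the sketch below (an illustrative encoding of ours) a ranking pattern is a dictionary mapping each pair (A, i), with A stored as a frozenset and $i\in A$ , to $\sigma(A,i)$ , and the output maps each pair $([m]\setminus A, i)$ to $\mu_{i}([m]\setminus A)$ .

```python
from itertools import combinations

def ls_parameters(sigma, eps, m):
    """Parameters mu_i([m] \\ A) of an LS(eps, sigma) model, formula (29).
    sigma[(frozenset(A), i)] = sigma(A, i);  eps[h] = epsilon(h) for h = 2,...,m."""
    full = set(range(1, m + 1))
    mu = {}
    for h in range(2, m + 1):
        for A in combinations(sorted(full), h):
            I = frozenset(full - set(A))            # the set of already-failed units
            for i in A:
                mu[(I, i)] = 1.0 - (sigma[(frozenset(A), i)] - 1) * eps[h]
    for i in full:                                   # A = {i}: mu_i([m] \ A) = 1
        mu[(frozenset(full - {i}), i)] = 1.0
    return mu

# Tiny usage with m = 2: sigma({1,2}, 1) = 1, sigma({1,2}, 2) = 2, epsilon(2) = 0.1.
print(ls_parameters({(frozenset({1, 2}), 1): 1, (frozenset({1, 2}), 2): 2}, {2: 0.1}, 2))
```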

In fact, as can be proven, $\varepsilon(2),\ldots ,\varepsilon(m)$ can be adequately fixed in order to let the model $LS\!\left(\boldsymbol{\varepsilon},\boldsymbol{\sigma}\right)$ satisfy the condition (14). In this direction, we first point out the following features of such models.

We notice that the set of numbers $\mu_{j}([m]\setminus A)$ (for $j\in A$ ) is the same for all subsets with given cardinality $h=|A|$ . Thus the identities in (28) hold for $LS\!\left( \boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ , in view of the validity of (27). More precisely, by (21), one can write

(30) \begin{equation}\widehat{M}_{m-h}=\sum_{u=1}^{h}[1-(u-1)\varepsilon(h)]=h-\frac{h(h-1)}{2}\varepsilon(h), \end{equation}

for $h \in[m]$ .

As an application of Corollary 2, we observe that, for given $B\in\widehat{\mathcal{P}}( m )$ with $|B|=n\leq m$ and $j\in B$ , the probability $\alpha_{j}(B)$ depends only on $\varepsilon(n),\varepsilon(n+1), \ldots ,\varepsilon(m)$ and on the functions $\sigma(D, \cdot)$ for $D\in\widehat{\mathcal{P}}(m) $ with $D\supseteq B$ .

On this basis it is possible to prove that, for any ranking pattern $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ , there exist constants $\varepsilon(2),\ldots,\varepsilon(m)$ such that $\boldsymbol{\sigma}$ is p-concordant with an m-tuple $(X_{1},\ldots,X_{m})$ distributed according to the model $LS\!\left( \boldsymbol{\varepsilon},\boldsymbol{\sigma}\right)$ , where $\boldsymbol{\varepsilon}=\left( 0,\varepsilon(2),\ldots,\varepsilon(m)\right) $ .

Here, however, by developing a suitable technical procedure, we shall constructively prove the following quantitative result which simultaneously shows the existence of the desired $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$ models and provides us with appropriate choices for $\boldsymbol{\varepsilon}$ .

Theorem 2. For any $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ and any $\boldsymbol{\varepsilon}=(0,\varepsilon(2),\ldots,\varepsilon(m))$ such that, for $\ell=2,\ldots,m-1,$

(31) \begin{equation}\frac{(m-\ell)!(\ell-1)!}{2\cdot m!}\varepsilon(\ell)>8\ell\varepsilon(\ell+1),\end{equation}

the model $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$ is p-concordant with $\boldsymbol{\sigma}$ .

The inequalities in (31) can be obtained, for example, by simply letting

(32) \begin{equation}\varepsilon(\ell)=(17\cdot m\cdot m!)^{-\ell+1} \end{equation}

for $\ell=2,\ldots,m$ . We notice that the form of the coefficients $\varepsilon(1),\ldots,\varepsilon(m)$ is universal, in the sense that it is independent of the ranking pattern $\boldsymbol{\sigma}$ and can then be fixed a priori. Obviously the generated intensities $\mu$ , characterizing the p-concordant load-sharing model, depend on both $\boldsymbol{\varepsilon}$ and $\boldsymbol{\sigma}$ . As a result of the arguments above, one can now conclude as follows.

Let $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ be a given ranking pattern; then it is p-concordant with an m-tuple $(X_{1},\ldots,X_{m})$ distributed according to a load-sharing model with parameters of the form (29) under the choice (32), which are more precisely given by

(33) \begin{equation}\mu_{j}([m]\setminus A)=1-\frac{\sigma(A,j)-1}{(17\cdot m\cdot m!)^{|A|-1}}, \qquad j\in A.\end{equation}
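As a quick numerical sanity check (exact rational arithmetic, illustrative names of ours), one can verify that the universal choice (32) does satisfy the inequalities (31) for small values of m:

```python
from fractions import Fraction
from math import factorial

def eps(ell, m):
    """epsilon(ell) = (17 * m * m!)^{-(ell - 1)}, the universal choice (32)."""
    return Fraction(1, (17 * m * factorial(m)) ** (ell - 1))

for m in range(3, 8):
    assert all(
        Fraction(factorial(m - ell) * factorial(ell - 1), 2 * factorial(m)) * eps(ell, m)
        > 8 * ell * eps(ell + 1, m)
        for ell in range(2, m)
    )
print("the choice (32) satisfies (31) for m = 3,...,7")
```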

Example 5. The above conclusion can, for instance, be applied to the search for load-sharing models that are p-concordant with the paradoxical ranking patterns $\boldsymbol{\sigma}$ which have been presented in Example 3. Let us consider, for any such $\boldsymbol{\sigma}$ , the related model $LS(\boldsymbol{\varepsilon,\sigma})$ with the vector $\boldsymbol{\varepsilon}$ of the special form given in (32). By taking into account (33), we can obtain that all such models are characterized by the following common conditions: for $A=\{1,j\}$ with $j\neq1$ ,

\begin{equation*}{\mu}_{1}([m]\setminus A)=1,\end{equation*}
\begin{equation*}{\mu}_{j}([m]\setminus A)=1-(17\cdot m\cdot m!)^{-1}<1,\end{equation*}

while, for A such that $1\in A$ and $|A|=\ell>2$ ,

\begin{equation*}{\mu}_{1}([m]\setminus A)=1-(\ell-1)(17\cdot m\cdot m!)^{-\ell+1}<{\mu}_{j}([m]\setminus A).\end{equation*}

The other intensities, by contrast, will depend on the specific choice of $\boldsymbol{\sigma}$ .
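To see Theorem 2 and Example 5 at work in the smallest nontrivial case, the following self-contained sketch (m = 3, with an illustrative encoding of ours for the ranking pattern) builds the intensities (33) for a ranking pattern of the type described in Example 3, computes all permutation probabilities through Lemma 1, and checks the paradoxical behavior: element 1 wins in both pairwise comparisons, yet has the smallest winning probability when all three lifetimes are compared.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

m = 3
full = frozenset(range(1, m + 1))
c = 17 * m * factorial(m)          # epsilon(ell) = c**(-(ell - 1)), as in (32)

# A ranking pattern of the type of Example 3: element 1 is ranked first in each
# pair {1, j}, but last in [3]; the remaining ranks are fixed arbitrarily.
sigma = {(frozenset({1, 2}), 1): 1, (frozenset({1, 2}), 2): 2,
         (frozenset({1, 3}), 1): 1, (frozenset({1, 3}), 3): 2,
         (frozenset({2, 3}), 2): 1, (frozenset({2, 3}), 3): 2,
         (full, 2): 1, (full, 3): 2, (full, 1): 3}

def mu(I, j):
    """Intensity mu_j([m] \\ A) of the LS(eps, sigma) model, formula (33), with A = [m] \\ I."""
    A = full - I
    if len(A) == 1:
        return Fraction(1)
    return 1 - Fraction(sigma[(A, j)] - 1, c ** (len(A) - 1))

def p3(perm):
    """Permutation probability via the product formula (23) of Lemma 1."""
    prob, failed = Fraction(1), frozenset()
    for i in perm:
        prob *= mu(failed, i) / sum(mu(failed, j) for j in full - failed)
        failed |= {i}
    return prob

p = {perm: p3(perm) for perm in permutations(sorted(full))}

def alpha(j, A):
    """alpha_j(A): total probability of the failure orders in which j precedes A \\ {j}."""
    return sum(prob for perm, prob in p.items()
               if all(perm.index(j) < perm.index(i) for i in A if i != j))

# Pairwise comparisons: 1 stochastically precedes both 2 and 3 ...
assert alpha(1, {1, 2}) > Fraction(1, 2) and alpha(1, {1, 3}) > Fraction(1, 2)
# ... yet, when all three lifetimes are compared, 1 has the smallest winning probability,
# in accordance with sigma([3], 2) < sigma([3], 3) < sigma([3], 1).
assert alpha(1, full) < alpha(3, full) < alpha(2, full)
```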

The proof of Theorem 2 is given below and is based upon some technical properties of $LS\!\left(\boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ models, which we are now going to prove.

Preliminarily it is convenient to recall that, for a generic load-sharing model, the quantities $\alpha_{j}(A)$ take the form (24), and the special structure of $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$ allows us to reduce the construction of the desired models to the identification of a suitable vector $\boldsymbol{\varepsilon}$ .

A path to achieving such a goal is based on obtaining a suitable decomposition of $\alpha_{j}(A)$ into two terms (see (43)) and on showing that one of the two terms can be made dominant with respect to the other. First, it is useful to require that $\boldsymbol{\varepsilon}$ satisfy the conditions

(34) \begin{equation}\qquad \varepsilon(2)<\frac{1}{4}, \qquad 2(u-1)\varepsilon(u)<(u-2)\varepsilon(u-1), \quad \text{for }u=3,\ldots,m.\end{equation}

We notice that the latter condition is implied by (31).

Furthermore, we also introduce the following alternative symbols which will sometimes be used, when more convenient, in place of $\varepsilon$ : for $u=2,\ldots,m$ ,

(35) \begin{equation}\rho(u)=\varepsilon(u)\frac{(u-1)}{2}. \end{equation}

Written in terms of $\rho$ , the conditions in (34) become

(36) \begin{equation}\rho(2)<\frac{1}{8}, \qquad 2\rho(u)<\rho(u-1), \quad \text{for }u=3,\ldots,m.\end{equation}

The following simple consequence of (36) will be used several times within the proofs below (indeed, (36) gives $\rho(u)<2^{-(u-k)}\rho(k)$ for $u\geq k$ , so the sum below is dominated by a geometric series):

(37) \begin{equation}\sum_{u=k}^{m}\rho(u)<2\rho(k),\end{equation}

for $k=2,\ldots,m$ . We are now ready to present the useful inequalities in the lemma below.

Lemma 2. Let $m\geq2$ , $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ , and let the m-tuple $\left( X_{1},\ldots,X_{m}\right) $ be distributed according to a model $LS\!\left( \boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ , under the condition (34). Then, for any $k\in\lbrack m]$ ,

(38) \begin{equation}p_{k}^{\left( m\right) }\left( j_{1},\ldots , j_{k}\right) \leq\frac{(m-k)!}{m!}\left( 1+2\sum_{u=m-k+1}^{m}\rho(u)\right) \end{equation}

and

(39) \begin{equation}p_{k}^{\left( m\right) }\left( j_{1},\ldots ,j_{k}\right) \geq\frac{(m-k)!}{m!}\left( 1-2\sum_{u=m-k+1}^{m}\rho(u)\right) . \end{equation}

Proof. We start by proving the inequality (38). For $k\in\lbrack m]$ , by taking into account the formulas (23) and (29), we obtain the following equality:

\begin{align*}& p_{k}^{\left( m\right) }\left( j_{1},\ldots ,j_{k}\right) \\& = \frac{\mu_{j_{1}}(\emptyset)}{m-\frac{m(m-1)}{2}\varepsilon(m)}\times\frac{{\mu}_{j_{2}}(j_{1})}{m-1-\frac{(m-1)(m-2)}{2}\varepsilon(m-1)}\times\ldots\\& \quad \ldots \times\frac{\mu_{j_{k}}(j_{1},j_{2},\ldots, j_{k-1})}{m-k+1-\frac{(m-k+1)(m-k)}{2}\varepsilon(m-k+1)}.\end{align*}

Since all the parameters ${\mu}_{i}$ with $i\in\left[ m\right] $ are less than or equal to 1, we obtain

(40) \begin{align}& p_{k}^{\left( m\right) }\left( j_{1},\ldots,j_{k}\right)\nonumber\\& \leq\frac{1}{m-\frac{m(m-1)}{2}\varepsilon(m)}\times\frac{1}{m-1-\frac{(m-1)(m-2)}{2}\varepsilon(m-1)}\times\ldots\nonumber\\& \quad\ldots \times\frac{1}{{m-k+1-\frac{(m-k+1)(m-k)}{2}\varepsilon(m-k+1)}}\nonumber\\& =\frac{1}{m(m-1)\cdots(m-k+1)}\times\frac{1}{1-\rho(m)}\times\frac{1}{1-\rho(m-1)}\times\ldots\nonumber\\& \quad\ldots \times\frac{1}{{1-\rho(m-k+1)}}. \end{align}

By (36) and (37) one has $\sum_{k=2}^{m}\rho(k)<\frac{1}{4}<\frac{1}{2}$ . Furthermore, for $a\in\big(0,\frac{1}{2}\big)$ , the inequality $1/(1-a)<1+2a$ holds.

Hence, we can conclude that the quantity in (40) is less than or equal to

\begin{align*}& \frac{1}{m(m-1)\cdots(m-k+1)}\times\frac{1}{1-\sum_{u=m-k+1}^{m}\rho(u)}\nonumber\\& \quad\leq\frac{1+2\sum_{u=m-k+1}^{m}\rho(u)}{m(m-1)\cdots(m-k+1)}=\frac{(m-k)!}{m!}\left( 1+2\sum_{u=m-k+1}^{m}\rho(u)\right) .\end{align*}

We now prove the inequality (39). By again taking into account the formulas (23), (29), and (35), as well as (36) and (37), we can also give the following lower bound:

\begin{align*}& p_{k}^{\left( m\right) }\left( j_{1},\ldots,j_{k}\right)\geq\frac{\mu_{j_{1}}(\emptyset)\times\mu_{j_{2}}(j_{1})\times\cdots\times\mu_{j_{k}}(j_{1},j_{2},\ldots, j_{k-1})}{m(m-1)\cdots(m-k+1)}\\[4pt]& \geq\frac{\lbrack 1-(m-1)\varepsilon(m)]\times\lbrack 1-(m-2)\varepsilon(m-1)]\times\cdots\times\lbrack 1-(m-k)\varepsilon(m-k+1)]}{m(m-1)\cdots(m-k+1)}\\[4pt]& \geq\frac{1-\sum_{u=m-k+1}^{m}(u-1)\varepsilon(u)}{m(m-1)\cdots(m-k+1)}=\frac{(m-k)!}{m!}\left( 1-2\sum_{u=m-k+1}^{m}\rho(u)\right) .\end{align*}

For an m-tuple $\left( X_{1},\ldots, X_{m}\right) $ distributed according to the load-sharing model $LS\!\left( \boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ , the probabilities $p_{k}^{\left( m\right) }(i_{1},\ldots ,i_{k})$ in (23) depend on the pair $\boldsymbol{\varepsilon},\boldsymbol{\sigma}$ . Thus also the probabilities $\alpha_{i} (A)$ in (9) are determined by $\boldsymbol{\varepsilon},\boldsymbol{\sigma}$ .

In relation to the m-tuple $\left( X_{1},\ldots, X_{m}\right) $ and to the corresponding vector $(J_{1} , \ldots, J_{m})$ , we now aim to give an expression for the probabilities $\alpha_{j} (A)$ that will be convenient for what follows. We shall use the symbol $\alpha_{j} (A, \boldsymbol{\sigma})$ , and in order to apply Proposition 1, we also introduce the following notation. Fix $A \in{\widehat{\mathcal P}} (m)$ ; for $i \in A$ and $\ell= |A|\leq m-1 $ , we consider the probabilities

(41) \begin{align}\beta_{i} (A , \boldsymbol{\sigma}) &\,:\!=\, \mathbb{P} (J_{1} =i ) \quad \text{if }\ell= m -1 ,\nonumber\\\beta_{i} (A , \boldsymbol{\sigma}) &\,:\!=\, \mathbb{P} (J_{1} =i ) +\sum_{k= 1}^{m- \ell-1}\ {\sum_{ (i_{1}, \ldots, i_{k} )\in\mathcal{D} (A, k)}} \mathbb{P} \big(J_{1} =i_{1}, \, J_{2} =i_{2}, \, \ldots, \, J_{k} = i_{k} , \,J_{k+1} = i \big)\end{align}

if $2 \leq\ell\leq m -2 $ . Also, we denote by $\gamma_{i} (A ,\boldsymbol{\sigma} ) $ the probability of the intersection

\begin{equation*}\Big\{X_{i}=\min_{j\in A}X_{j}\Big\}\cap\Big\{X_{i}>\max_{j\in A^{c}}X_{j}\Big\},\end{equation*}

i.e.

(42) \begin{equation}\gamma_{i} (A , \boldsymbol{\sigma}) \,:\!=\, {\sum_{ (i_{1}, \ldots, i_{m - \ell})\in\mathcal{D} ( A, m- \ell) }} \mathbb{P} \big(J_{1} =i_{1}, \, J_{2} =i_{2}, \,\ldots, \, J_{m - \ell} = i_{m - \ell} , \, J_{m - \ell+1} = i \big) ,\end{equation}

for any $2 \leq\ell\leq m -1 $ . In words, while $\gamma_{i}(A,\boldsymbol{\sigma})$ is the probability that $X_{i}<X_{j}$ for all $j\in A\setminus\{i\}$ and $X_{i}>X_{j}$ for all $j\in A^{c}$ , $\beta_{i}(A,\boldsymbol{\sigma})$ is the probability that $X_{i}<X_{j}$ for all $j\in A\setminus\{i\}$ and $X_{i}<X_{j}$ for some $j\in A^{c}$ , when $\ell<m-1$ . In terms of this notation and recalling Proposition 1, we can now write

(43) \begin{equation}\alpha_{i} (A, \boldsymbol{\sigma}) = \beta_{i} (A,\boldsymbol{\sigma} ) + \gamma_{i} (A, \boldsymbol{\sigma}) ,\end{equation}

for $A \in{\widehat{\mathcal P}} (m)$ with $|A| \leq m-1$ .

By recalling Corollary 2 and (29), we can see that $\beta_{i}(A,\boldsymbol{\sigma})$ depends only on the family of values

(44) \begin{equation}\sigma(B,i)\,:\,|B|\geq\ell, \qquad i\in B,\end{equation}

and $\gamma_{i}(A,\sigma)$ depends only on the family of values

(45) \begin{equation}\sigma(B,i)\,:\,|B|\geq\ell-1, \qquad i\in B.\end{equation}

Let us now take into account the special form (29) for the parameters $\mu_{j}$ of a model $LS\!\left( \boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ . We can notice a corresponding property of symmetry in the structure of the quantities $\beta_{i}(A,\boldsymbol{\sigma})$ : for any permutation $\pi\in\Pi_{m}$ , and with obvious meaning of notation, one has

\begin{equation*}\beta_{\pi(i)}\!\left( \pi(A),\pi(\boldsymbol{\sigma})\right) =\beta_{i}(A,\boldsymbol{\sigma})\text{.}\end{equation*}

Thus the set of values $\{\beta_{i}(A,\boldsymbol{\sigma}),i\in A\}$ is the same for all the subsets A with given cardinality $|A|=\ell$ . Then we can set

(46) \begin{equation}\mathcal{B}(\ell)\,:\!=\,\max\{\beta_{i}(A,\boldsymbol{\sigma})-\beta_{j}(A,\boldsymbol{\sigma})\},\end{equation}

where $A\in{\widehat{\mathcal P}}(m)$ is such that $|A|=\ell$ , and the maximum is computed with respect to all the ranking patterns $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ : $\mathcal{B}(\ell)$ only depends on the cardinality $\ell=|A|$ .

Similar arguments can be developed for the values $\gamma_{i}(A,\boldsymbol{\sigma})$ , and we can consider the quantities

(47) \begin{equation}\mathcal{C}(\ell)\,:\!=\,\min\{\gamma_{i}(A,\boldsymbol{\sigma})-\gamma_{j}(A,\boldsymbol{\sigma})\}>0,\end{equation}

where the minimum is computed over the family of all the ranking patterns $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ such that $\sigma(A,i)<\sigma(A,j)$ ; again, the quantity $\mathcal{C}(\ell)$ only depends on the cardinality $\ell=|A|$ .

Obviously, all the quantities $\alpha_{j} (A , \boldsymbol{\sigma})$ , $\beta_{j} (A,\boldsymbol{\sigma})$ , $\gamma_{j} (A , \boldsymbol{\sigma})$ , $\mathcal{B} (\ell)$ , and $\mathcal{C} (\ell)$ depend on the vector $\boldsymbol{\varepsilon}$ . At this point, referring to (43), we aim to show that $\boldsymbol{\varepsilon}$ can be suitably chosen in such a way that the terms $\gamma_{i}, \gamma_{j}$ determine the comparison between the two values $\alpha_{i} (A, \boldsymbol{\sigma})$ and $\alpha_{j} (A,\boldsymbol{\sigma})$ . For this purpose, we can rely on the following result.

Lemma 3. For any $m \geq3$ and any $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ one has, for $\ell= 2, \ldots, m-1$ , that

(48) \begin{equation}\mathcal{B}(\ell) \leq8 \ell\varepsilon(\ell+1)\end{equation}

and

(49) \begin{equation}\mathcal{C} (\ell) \geq\frac{ (m-\ell) ! (\ell-1)!}{ 2 \cdot m!} \varepsilon(\ell) .\end{equation}

Proof. For $A \in{\widehat{\mathcal P}} (m)$ such that $|A| = \ell< m$ and $i,j \in A$ , one has by definition of $\mathcal{B} (\ell)$ and by (41) that

(50) \begin{align}\mathcal{B} (\ell) & =\max_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } \{ \beta_{i} (A , \boldsymbol{\sigma}) - \beta_{j} (A ,\boldsymbol{\sigma}) \} \nonumber\\[4pt] & \quad \leq\max_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } |p_{1}(i)- p_{1}(j) |\nonumber\\[4pt] & \quad + \sum_{k= 1}^{m- \ell-1} {\sum_{ (i_{1}, \ldots, i_{k})\in\mathcal{D} (A, k) }} \max_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } |p_{k+1} (i_{1}, \ldots, i_{k}, i) - p_{k+1} (i_{1}, \ldots, i_{k}, j) |.\end{align}

By (38) and (39) in Lemma 2, one has

\begin{equation*}\max_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } |p_{1}(i)- p_{1}(j) |\leq\frac{4 \rho(m) }{m}\end{equation*}

and

\begin{equation*}\max_{ \sigma\in\widehat{\Sigma}^{(m)} } | p_{k+1} (i_{1}, \ldots, i_{k}, i) -p_{k+1} (i_{1}, \ldots, i_{k}, j) | \leq4 \frac{(m-k-1)! }{m !} \sum_{u =m-k}^{m} \rho( u) .\end{equation*}

We can thus conclude by writing

(51) \begin{equation}\mathcal{B}(\ell) \leq\frac{4 \rho( m) }{m} + \sum_{k =1}^{m -\ell-1 } | \mathcal{D} (A, k) | \left[ 4\frac{(m-k-1)! }{m !} \sum_{u =m-k}^{m} \rho( u) \right] .\end{equation}

For $|A| =\ell$ , noticing that

\begin{equation*}| \mathcal{D} (A, k) | = \frac{(m-\ell) ! }{(m-\ell-k ) !} < \frac{ m ! }{(m -k ) !},\end{equation*}

one obtains that the right-hand side in the inequality (51) is smaller than

(52) \begin{equation}4 \rho( m) + 4\sum_{k =1}^{m - \ell-1 } \sum_{u = m-k}^{m} \rho(u) .\end{equation}

By (37), the quantity in (52) is then smaller than

\begin{equation*}4 \rho( m) + 8\sum_{k =1}^{m - \ell-1 } \rho( m-k) \leq8\sum_{k=0}^{m - \ell-1 } \rho( m-k) \leq16 \rho( \ell+1 ) .\end{equation*}

In conclusion,

\begin{equation*}\mathcal{B} (\ell) \leq16 \rho( \ell+1 ) = 8 \ell\varepsilon(\ell+1) .\end{equation*}

We now prove the inequality (49), for any $\ell=2, \ldots, m-1$ . By definition of $\mathcal{C} (\ell) $ and by (42), one has

(53) \begin{equation}\mathcal{C} (\ell) \geq{\sum_{ (i_{1}, \ldots, i_{m - \ell})\in\mathcal{D} (A, m- \ell) }} \min_{ \begin{array}{c}\small{ \boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} :} \\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}} \end{array}}\!\!\!\big\{ p_{m- \ell+1} (i_{1}, \ldots, i_{m - \ell}, i) - p_{m - \ell+1} (i_{1},\ldots, i_{m- \ell}, j) \big\}.\end{equation}

We now notice, by (10), that the following inequality holds:

\begin{align*}& \min_{ \begin{array}{c}\small{ \boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : } \\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}} \big\{ p_{m- \ell+1} (i_{1}, \ldots, i_{m - \ell}, i) - p_{m -\ell+1} (i_{1}, \ldots, i_{m- \ell}, j) \big\} \\&\qquad\qquad\qquad\qquad \geq\min_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } \big\{ p_{m -\ell} (i_{1},\ldots, i_{m - \ell}) \big\}\\& \times \min_{ \begin{array}{c}{\small{ \boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : }} \\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}} \!\!\big\{ \mathbb{P} \big( J_{m -\ell+1} =i| J_{1} = i_{1}, \ldots, J_{m-\ell} =i_{m -\ell} \big)\\& \qquad\qquad\qquad\quad\qquad - \mathbb{P} \big( J_{m -\ell+1} =j| J_{1} = i_{1}, \ldots,J_{m -\ell} =i_{m -\ell} \big) \big\} .\end{align*}

Hence the right-hand side of (53) is larger than the quantity

(54) \begin{align}& {\sum_{ (i_{1}, \ldots, i_{m - \ell} )\in\mathcal{D} (A, m- \ell) }} \min_{\boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } \big\{ p_{m -\ell} (i_{1}, \ldots,i_{m - \ell}) \big\} \nonumber\\& \quad \times\min_{ \begin{array}{c}\small{\boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : } \\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}} \big\{ \mathbb{P} \big( J_{m -\ell+1} =i| J_{1} = i_{1}, \ldots, J_{m-\ell} =i_{m -\ell} \big) \nonumber\\& \quad - \mathbb{P} \big( J_{m -\ell+1} =j| J_{1} = i_{1}, \ldots, J_{m-\ell} = i_{m -\ell} \big) \big\} .\end{align}

On the other hand, by (39) in Lemma 2, one has

(55) \begin{equation}\min_{ \boldsymbol{\sigma} \in\widehat{\Sigma}^{(m)} } \big\{p_{m -\ell} (i_{1}, \ldots, i_{m - \ell}) \big\} \geq\left[ \frac{\ell!}{m!} \bigg(1 -2 \sum_{u = \ell+1}^{m} \rho(u)\bigg) \right] .\end{equation}

Furthermore, by recalling Lemma 1 and the identity (30), we have

(56) \begin{align}& \min_{ \begin{array}{c}\small{\boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : }\\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}} \big\{ \mathbb{P} \big( J_{m -\ell+1} =i| J_{1} = i_{1}, \ldots,J_{m -\ell} =i_{m -\ell} \big) \nonumber\\& \qquad - \mathbb{P} \big( J_{m -\ell+1} =j | J_{1} = i_{1}, \ldots, J_{m -\ell}=i_{m -\ell} \big) \big\} \nonumber\\& \quad = \min_{ \begin{array}{c}\small{\boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : }\\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}}\left\{ \frac{ \mu_{i} (i_{1}, \ldots, i_{m -\ell}) }{M(i_{1}, \ldots, i_{m - \ell})} - \frac{ \mu_{j} (i_{1}, \ldots, i_{m -\ell})}{M (i_{1}, \ldots, i_{m - \ell})} \right\} \nonumber\\& \quad =\frac{ 1}{ \ell- \frac{\ell(\ell-1)}{2} \varepsilon(\ell) } \cdot\min_{ \begin{array}{c}\small{\boldsymbol{\sigma} \in \widehat{\Sigma}^{(m)} : }\\\small{\text{$ \sigma (A, i) < \sigma (A, j)$}}\end{array}} \big\{ \mu_{i} \big(i_{1}, \ldots, i_{m -\ell}\big) - \mu_{j} \big(i_{1},\ldots, i_{m -\ell}\big) \big\}\nonumber\\& \quad \geq\frac{ 1}{ \ell- \frac{\ell(\ell-1)}{2} \varepsilon(\ell)} \cdot\varepsilon(\ell) ,\end{align}

where the last inequality follows from (29).

In view of (53) and (54), and by combining (55) and (56), we obtain

\begin{align*}\mathcal{C}(\ell)& \geq{\sum_{(i_{1},\ldots,i_{m-\ell})\in\mathcal{D}(A,m-\ell)}}\left[ \frac{\ell!}{m!}(1-2\sum_{u=\ell+1}^{m}\rho(u))\right] \frac{1}{\ell-\frac{\ell(\ell-1)}{2}\varepsilon(\ell)}\ \cdot\varepsilon(\ell)\\[5pt]& =(m-\ell)!\left[ \frac{\ell!}{m!}(1-2\sum_{u=\ell+1}^{m}\rho(u))\right]\frac{1}{\ell-\frac{\ell(\ell-1)}{2}\varepsilon(\ell)}\ \cdot\varepsilon(\ell).\end{align*}

Then we have

\begin{align*}\mathcal{C}(\ell)& \geq(m-\ell)!\left[ \frac{(\ell-1)!}{m!}(1-2\sum_{u=\ell+1}^{m}\rho(u))\right] \varepsilon(\ell)\nonumber\\[5pt]& \geq\frac{(m-\ell)!(\ell-1)!}{m!}(1-4\rho(\ell+1))\varepsilon(\ell)\geq\frac{(m-\ell)!(\ell-1)!}{2\cdot m!}\varepsilon(\ell),\end{align*}

where we have exploited (37) in the second inequality and $\rho(2)<\frac{1}{8}$ from (36) in the last step.

Let us now consider an arbitrary ranking pattern $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ . We are now in a position to prove that a suitable load-sharing model can be constructed for an m-tuple $\left( X_{1},\ldots,X_{m}\right) $ such that $\boldsymbol{\sigma}$ is p-concordant with $\left( X_{1},\ldots,X_{m}\right) $ .

Proof of Theorem 2. For a vector $\boldsymbol{\varepsilon}=\left(0,\varepsilon(2),\ldots,\varepsilon(m)\right)$ and a given ranking pattern $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ , we here denote by $\alpha_{i}(A,\boldsymbol{\sigma},\boldsymbol{\varepsilon})$ the probabilities in (3), corresponding to a vector $(X_{1},\ldots,X_{m})$ distributed according to the model $LS\!\left(\boldsymbol{\varepsilon},\boldsymbol{\sigma}\right) $ . We must thus prove that the equivalence

(57) \begin{equation}\alpha_{i}(A,\boldsymbol{\sigma},\boldsymbol{\varepsilon})>\alpha_{j}(A,\boldsymbol{\sigma},\boldsymbol{\varepsilon})\Leftrightarrow\sigma(A,i)<\sigma(A,j) \end{equation}

holds for any $\boldsymbol{\sigma}\in\widehat{\Sigma}^{(m)}$ , $A\in{\widehat{\mathcal P}}(m)$ , and $i,j\in A$ , provided that $\boldsymbol{\varepsilon}$ satisfies the condition (31). For this purpose we will first show that such a family of equivalences holds whenever $\boldsymbol{\varepsilon}$ is also such that both the following properties hold:

  (i) the equivalence (57) imposed only on the set $A=\left[m\right]$, namely,

    (58) \begin{equation}\alpha_{i}([m],\boldsymbol{\sigma} , \boldsymbol{\varepsilon})>\alpha_{j}([m],\boldsymbol{\sigma}, \boldsymbol{\varepsilon})\Leftrightarrow\sigma([m],i)<\sigma([m],j); \end{equation}
  (ii) the inequalities

    (59) \begin{equation}\mathcal{C}(\ell, \boldsymbol{\varepsilon})>\mathcal{B}(\ell,\boldsymbol{\varepsilon}) \quad \text{ for $\ell=2,\ldots,m-1$,} \end{equation}
    where, in order to emphasize dependence on $\boldsymbol{\varepsilon}$ , the symbols $\mathcal{C}(\ell),\mathcal{B}(\ell)$ introduced in (46) and (47) are replaced by $\mathcal{C}(\ell,\boldsymbol{\varepsilon}),\mathcal{B}(\ell,\boldsymbol{\varepsilon})$ .

In fact, the relation (58) solves the case $|A|=m$ . Moreover, the relation (59) and the condition $\sigma(A,i)<\sigma(A,j)$ yield

(60) \begin{equation}0<\mathcal{C}(\ell,\boldsymbol{\varepsilon})-\mathcal{B}(\ell,\boldsymbol{\varepsilon})\leq\alpha_{i}(A,\boldsymbol{\sigma},\boldsymbol{\varepsilon})-\alpha_{j}(A,\boldsymbol{\sigma},\boldsymbol{\varepsilon}). \end{equation}

The latter inequality is immediately obtained by recalling Equation (43). Thus, (57) holds.

It then remains to prove (58) and (59). By Lemma 1 and in view of the choice (29), one has, for the case $h=m$ ,

\begin{equation*}\alpha_{i}([m],\boldsymbol{\sigma} ,\boldsymbol{\varepsilon})=\mathbb{P}(J_{1}=i)=\frac{\mu_{i}(\emptyset)}{M(\emptyset)}=\frac{2-2(\sigma([m],i)-1)\varepsilon(m)}{2m-m(m-1)\varepsilon(m)},\end{equation*}

taking into account in particular the equation (30). Then the equivalence in (58) holds true provided that $\varepsilon(m)\in\big(0,\frac{1}{m-1}\big)$ .
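As a quick numerical illustration of this step, the following Python sketch (the values of m, $\varepsilon(m)$, and the ranking $\sigma([m],\cdot)$ below are hypothetical and serve only as an illustration) evaluates the displayed expression for $\alpha_{i}([m],\boldsymbol{\sigma},\boldsymbol{\varepsilon})$ and checks that a smaller rank $\sigma([m],i)$ always corresponds to a strictly larger winning probability, as required by (58).

\begin{verbatim}
# Numerical check of (58). The values of m, eps_m and the ranking
# sigma([m], .) below are hypothetical and serve only as an illustration.
m = 5
eps_m = 0.1                                # any value in (0, 1/(m-1))
sigma = {1: 3, 2: 1, 3: 5, 4: 2, 5: 4}     # sigma([m], i) for i = 1,...,m

def alpha_full(i):
    # alpha_i([m], sigma, eps), as in the displayed formula
    return (2 - 2 * (sigma[i] - 1) * eps_m) / (2 * m - m * (m - 1) * eps_m)

probs = {i: alpha_full(i) for i in range(1, m + 1)}
assert abs(sum(probs.values()) - 1.0) < 1e-12
for i in probs:
    for j in probs:
        if sigma[i] < sigma[j]:
            assert probs[i] > probs[j]     # smaller rank, larger probability
print(probs)
\end{verbatim}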

Let us now turn to proving the relation (59). When the cardinality $|A|$ belongs to $\{2,\ldots,m-1\}$ , one has, under the condition $\sigma(A,i)<\sigma(A,j)$ ,

(61) \begin{equation}\mathcal{C}(\ell,\boldsymbol{\varepsilon})-\mathcal{B}(\ell,\boldsymbol{\varepsilon})\geq\frac{(m-\ell)!(\ell-1)!}{2\cdot m!}\varepsilon(\ell)-8\ell\varepsilon(\ell+1),\end{equation}

by taking into account the inequalities (49) and (48) in Lemma 3.

Finally, one can obtain the inequalities (59) in view of the condition (31).
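To give a feeling for how the condition (31) operates, the following Python sketch checks that the right-hand side of (61) can indeed be made strictly positive for all $\ell=2,\ldots,m-1$. The recursive choice of $\varepsilon(\ell+1)$ used below is purely hypothetical (the actual requirement is the condition (31) stated earlier in the paper); it is only meant to illustrate that a sufficiently fast-decreasing $\boldsymbol{\varepsilon}$ does the job.

\begin{verbatim}
from math import factorial

# Illustrative check of the positivity of the right-hand side of (61).
# The recursive choice of eps is hypothetical; in the paper the choice of
# eps is governed by condition (31).
m = 6
eps = {2: 0.01}
for ell in range(2, m):
    first_term = (factorial(m - ell) * factorial(ell - 1)
                  / (2 * factorial(m)) * eps[ell])
    eps[ell + 1] = first_term / (16 * ell)  # then 8*ell*eps(ell+1) = first_term/2

for ell in range(2, m):
    lower_bound = (factorial(m - ell) * factorial(ell - 1)
                   / (2 * factorial(m)) * eps[ell] - 8 * ell * eps[ell + 1])
    assert lower_bound > 0
    print(ell, lower_bound)
\end{verbatim}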

5. Discussion and concluding remarks

Our main results, Theorem 1 and Theorem 2 respectively, show how load-sharing models can serve as explicit solutions when one searches for dependence models with prescribed probabilistic features. As mentioned in the introduction, such results can be applied in a number of different fields. In particular, direct applications emerge in the study of the paradoxes of voting theory, as we sketch below. For this purpose, we refer to the following standard scenario (see e.g. [Reference Nurmi25], [Reference Gehrlein and Lepelley16], [Reference Bachmeier3], [Reference Montes, Rademaker, Perez-Fernandez and De Baets22], and the references cited therein).

The symbol [m] here denotes a set of candidates, or alternatives, and $\mathcal{V}^{(n)}=\{v_{1},\ldots,v_{n}\}$ is a set of voters. It is assumed that the individual preferences of the voter $v_{l}$ , for $l=1,\ldots,n$ , give rise to a linear preference ranking; i.e. those preferences are complete and transitive, and indifference between two candidates is not allowed. Thus, for $l=1,\ldots,n$ , the linear preference ranking of the voter $v_{l}$ induces a permutation $r_{l}$ of the set [m]. Each voter $v_{l}$ is supposed to cast her/his own vote in any possible election, whatever the set $A\in\widehat{\mathcal{P}}(m)$ of candidates participating in that specific election. Furthermore, the voter $v_{l}$ casts a single vote, in favor of the candidate in A who is most preferred according to the voter’s own linear preference ranking $r_{l}$ .

For $h\in\lbrack m]$ and for an ordered list $(j_{1},\ldots,j_{h})$ of candidates (i.e. $(j_{1},\ldots,j_{h})\in\mathcal{D}(\emptyset,h)$ in the notation used above), denote by $N_{h}^{(m)}(j_{1},\ldots,j_{h})$ the number of all the voters who rank $j_{1},\ldots,j_{h}$ in the positions $1,\ldots,h$ , respectively. That is, in the preferences of those voters, $j_{1},\ldots,j_{h}$ are the h most preferred candidates, listed in order of preference. In particular, $N^{(m)}(j_{1},\ldots,j_{m})\equiv$ $N_{m}^{(m)}(j_{1},\ldots,j_{m})$ denotes the number of voters $v_{l}$ who share the same linear preference ranking $\left( j_{1},\ldots,j_{m}\right)$ . The total number of voters is then given by

\begin{equation*}n=\sum_{(j_{1},\ldots,j_{m})\in\Pi_{m}}N^{(m)}(j_{1},\ldots,j_{m}),\end{equation*}

and the set of numbers $\mathcal{N}^{(m)}=\big\{N^{(m)}(j_{1},\ldots,j_{m})\,:\,(j_{1},\ldots,j_{m})\in\Pi_{m}\big\}$ is typically referred to as the voting situation.

In an election where $A\subseteq\left[ m\right] $ is the set of candidates, let $n_{i}(A)$ denote the total number of votes obtained by the candidate $i\in A$ according to the aforementioned scenario. Thus, $\sum_{i\in A}n_{i}(A)=n$ .

The voting-theory scenario described so far gives rise, furthermore, to a ranking pattern $\boldsymbol{\tau}\in\Sigma^{(m)}$ analogous to the ranking pattern associated with the m-tuple $\left( X_{1},\ldots,X_{m}\right)$ , according to the definitions introduced in Section 2. In particular, for any $A\in\widehat{\mathcal{P}}(m)$ , and $i,j\in A$ with $i\neq j$ , such a ranking pattern $\boldsymbol{\tau}$ satisfies the conditions

(62) \begin{equation}\tau(A,i)<\tau(A,j)\Leftrightarrow n_{i}(A)>n_{j}(A); \qquad \tau(A,i)=\tau(A,j)\Leftrightarrow n_{i}(A)=n_{j}(A).\end{equation}

We say that the ranking pattern $\boldsymbol{\tau}$ is N-concordant with the voting situation $\mathcal{N}^{(m)}$ .

As is well known (see e.g. the references cited above), different voting procedures can be used for aggregating individual preferences and determining the winner of an election. Here we assume that the winner of the election is chosen according to the plurality rule: the winner is a candidate $j\in A$ such that $n_{j}(A)=\max_{i\in A}n_{i}(A)$ (that is, the winner has obtained plurality support among the members of A). Notice that such a winner is indicated by the N-concordant ranking pattern $\boldsymbol{\tau}$ , for any subset of candidates $A\in\widehat{\mathcal{P}}(m)$ . The ranking pattern $\boldsymbol{\tau}$ , in particular, determines the majority graph, which indicates the winner of any possible direct match between pairs of candidates. More generally, for a set $A\subseteq\lbrack m]$ of candidates, a voting situation $\mathcal{N}^{(m)}$ , and a ranking pattern $\boldsymbol{\tau}$ N-concordant with $\mathcal{N}^{(m)}$ , the candidate $i\in A$ is ranked in the $\tau(A,i)$ th position with respect to the number of votes obtained among all the candidates in A. In particular, the winner, i.e. the candidate in A with the most votes, is the candidate j such that $\tau(A,j)=1$ .
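For concreteness, the following Python sketch implements this scenario for a small hypothetical voting situation with m = 3 candidates: it computes the vote counts $n_{i}(A)$ under the plurality rule and the resulting N-concordant ranking pattern $\tau(A,\cdot)$ for every subset A with at least two candidates (no ties occur for the chosen profile).

\begin{verbatim}
# Hypothetical voting situation for m = 3 candidates (no ties occur here).
N = {(1, 2, 3): 5, (2, 3, 1): 4, (3, 1, 2): 3}
n = sum(N.values())

def votes(A):
    # n_i(A): every voter votes for her most preferred candidate within A
    counts = {i: 0 for i in A}
    for ranking, weight in N.items():
        top = next(c for c in ranking if c in A)
        counts[top] += weight
    return counts

def tau(A):
    # ranking pattern tau(A, .) that is N-concordant with the voting situation
    counts = votes(A)
    order = sorted(A, key=lambda i: -counts[i])
    return {i: order.index(i) + 1 for i in A}

for A in [{1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]:
    print(sorted(A), votes(A), tau(A))
\end{verbatim}

On this profile the pairwise winners form a cycle (1 beats 2, 2 beats 3, 3 beats 1), while on the full set the plurality winner is candidate 1; this is precisely the kind of aggregation paradox discussed in the paper.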

As a key point of our discussion, notice that a voting situation $\mathcal{N}^{(m)}$ gives rise to a probability distribution over $\Pi_{m}$ , by setting

(63) \begin{equation}p^{(m)}(j_{1},\ldots,j_{m})=\frac{N^{(m)}(j_{1},\ldots,j_{m})}{n}.\end{equation}

Hence we set

(64) \begin{equation}\mathbf{P_{J}^{\mathcal{N}}}=\left\{ \frac{N^{(m)}(j_{1},\ldots,j_{m})}{n}\,:\,(j_{1},\ldots,j_{m})\in\Pi_{m}\right\}, \end{equation}

and we denote by $\big(X_{1}^{\mathcal{N}},\ldots,X_{m}^{\mathcal{N}}\big)$ the corresponding order-dependent load-sharing model constructed in the proof of Theorem 1. It is easy to check that

(65) \begin{equation}\alpha_{i}(A)=\frac{n_{i}(A)}{n}.\end{equation}

The apparently harmless formula (65) shows the equivalence between the following statements:

  (i) the ranking pattern $\boldsymbol{\tau}$ is N-concordant with the voting situation $\mathcal{N}^{(m)}=\left\{ N^{(m)}(j_{1},\ldots,j_{m}) \,:\,(j_{1},\ldots,j_{m}) \in\Pi_{m} \right\} $ ;

  (ii) the ranking pattern $\boldsymbol{\tau}$ is p-concordant with the order-dependent load-sharing model $\big(X_{1}^{\mathcal{N}},\ldots,X_{m}^{\mathcal{N}}\big)$ (and with any model that shares the same $\mathbf{P_{J}^{\mathcal{N}}}$).

We thus claim that, for any given voting situation $\mathcal{N}^{(m)}$ , Theorem 1 allows us to determine an order-dependent load-sharing model $\big(X_{1}^{\mathcal{N}},\ldots,X_{m}^{\mathcal{N}}\big)$ such that the two objects give rise to the same ranking pattern $\boldsymbol{\tau}$ .
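The identity (65) can be checked directly on the same hypothetical profile as above. In the sketch below, $\alpha_{i}(A)$ is computed as the total mass that $\mathbf{P_{J}^{\mathcal{N}}}$ assigns to the orderings in which i precedes every other element of A (this is how the event $\{X_{i}=\min_{k\in A}X_{k}\}$ reads in terms of the ordering variables $J_{1},\ldots,J_{m}$ for the construction of Theorem 1), and is then compared with $n_{i}(A)/n$.

\begin{verbatim}
from itertools import permutations

# Same hypothetical voting situation as above.
N = {(1, 2, 3): 5, (2, 3, 1): 4, (3, 1, 2): 3}
m, n = 3, sum(N.values())
P_J = {perm: N.get(perm, 0) / n for perm in permutations(range(1, m + 1))}

def alpha(i, A):
    # probability that i precedes all other members of A in the ordering J
    return sum(prob for perm, prob in P_J.items()
               if next(c for c in perm if c in A) == i)

def n_votes(i, A):
    return sum(w for ranking, w in N.items()
               if next(c for c in ranking if c in A) == i)

for A in [{1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]:
    for i in A:
        assert abs(alpha(i, A) - n_votes(i, A) / n) < 1e-12
print("identity (65) verified on this profile")
\end{verbatim}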

We now turn to an inverse problem: for a given ranking pattern $\boldsymbol{\sigma}$ , how can we construct a voting situation with which $\boldsymbol{\sigma}$ is N-concordant? This problem is solved by means of Theorem 2, by considering the probabilities $p^{(m)}(j_{1},\ldots,j_{m})$ for the model $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$ corresponding to the condition (33). Notice in this regard that such numbers $p^{(m)}(j_{1},\ldots,j_{m})$ are rational. Therefore, there exists a suitable $n\in\mathbb{N}$ such that

\begin{equation*}\mathcal{N}^{(m)}=\big\{ np^{(m)}(j_{1},\ldots,j_{m})\,:\,(j_{1},\ldots,j_{m})\in\Pi_{m}\big\}\end{equation*}

can be seen as a voting situation. The equivalence between the statements (i) and (ii) then says that $\boldsymbol{\sigma}$ is N-concordant with such an $\mathcal{N}^{(m)}$ . That is, Theorem 2 guarantees the existence of an N-concordant voting situation for any given ranking pattern, and it also provides a possible construction. It thus solves a problem in the same direction as the result of Saari mentioned in the introduction. In the version considered here, our method is limited to non-weak ranking patterns $\boldsymbol{\sigma}$ . On the other hand, it provides us with an explicit construction of the desired N-concordant voting situations, and it is remarkable that such a construction can be accomplished by relying only on the very special class of models of the type $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$ .
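Operationally, the passage from the rational probabilities $p^{(m)}(j_{1},\ldots,j_{m})$ to an integer-valued voting situation only requires clearing denominators. A minimal Python sketch follows; the probabilities used here are hypothetical placeholders rather than the numbers actually produced by the model $LS(\boldsymbol{\varepsilon},\boldsymbol{\sigma})$.

\begin{verbatim}
from fractions import Fraction
from math import lcm   # Python >= 3.9

# Hypothetical rational probabilities over the orderings; in the paper they
# would be the numbers p^(m)(j_1,...,j_m) of the model LS(eps, sigma).
p = {(1, 2, 3): Fraction(5, 12), (2, 3, 1): Fraction(1, 3),
     (3, 1, 2): Fraction(1, 4)}
assert sum(p.values()) == 1

n = lcm(*(q.denominator for q in p.values()))    # a suitable number of voters
N = {perm: int(q * n) for perm, q in p.items()}  # the voting situation N^(m)
print("n =", n, "  N^(m) =", N)
\end{verbatim}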

As mentioned, our method can also provide an alternative proof of the classic theorem by McGarvey [Reference McGarvey21] (see also [Reference Erdős and Moser12], [Reference Alon1], [Reference Shelah35]). In fact, along the same lines as Theorem 2, one can construct voting situations able to produce arbitrary ‘preference patterns’, in place of non-weak ranking patterns.

For Theorem 1, some direct applications can be found in other fields of probability. In particular, in the field of reliability theory, dealing with the lifetimes $X_{1},\ldots,X_{m}$ of the m components of a binary (‘on–off’) system S, it can be applied to construct load-sharing models giving rise to an assigned probability signature. The latter object is a probability distribution $\mathbf{p}\equiv\left( p_{1},\ldots,p_{m}\right) $ over $\left[ m\right] $ which can emerge in the computation of the survival function of S (see [Reference Samaniego30], [Reference Navarro, Spizzichino and Balakrishnan24], [Reference Marichal and Mathonet20]). By its very definition, $\mathbf{p}$ generally depends both on the structure function $\phi$ of S and on the joint probability law of $\left( X_{1},\ldots,X_{m}\right) $ . Concerning the potential interest of Theorem 1, we notice that when $\phi$ is given, the vector $\mathbf{p}$ is determined solely by the set $\mathbf{P}_{\mathbf{J}}$ of probabilities in (6).
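As a hint of how this dependence on $\mathbf{P}_{\mathbf{J}}$ works in practice, the following Python sketch computes the probability signature from a distribution over the failure orderings and a structure function. Both the 2-out-of-3 structure and the uniform (exchangeable) distribution over orderings are hypothetical choices; the description of $p_{k}$ as the probability that the system fails exactly at the k-th component failure is the standard one from the signature literature, not a construction taken from the present paper.

\begin{verbatim}
from fractions import Fraction
from itertools import permutations

m = 3

def phi(working):
    # hypothetical structure function: a 2-out-of-3 system
    return len(working) >= 2

# Hypothetical distribution P_J over the failure orderings (J_1,...,J_m);
# here the uniform (exchangeable) case.
P_J = {perm: Fraction(1, 6) for perm in permutations(range(1, m + 1))}

signature = [Fraction(0)] * m
for perm, prob in P_J.items():
    working = set(range(1, m + 1))
    for k, comp in enumerate(perm, start=1):
        working.discard(comp)
        if not phi(working):            # the system fails at the k-th failure
            signature[k - 1] += prob
            break
print(signature)   # equals (0, 1, 0) for the 2-out-of-3 system, as expected
\end{verbatim}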

Acknowledgements

We thank two anonymous referees for their useful comments, and our colleague Giovanna Nappo for her helpful comments on a preliminary version. The referees of a previous work also provided us with comments concerning the terminology of voting theory.

Funding information

This research was partially supported by the Ateneo Sapienza project Simmetrie e Disuguaglianze in Modelli Stocastici (2018).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Alon, N. (2002). Voting paradoxes and digraphs realizations. Adv. Appl. Math. 29, 126–135.
Arcones, M. A. and Samaniego, F. J. (2000). On the asymptotic distribution theory of a class of consistent estimators of a distribution satisfying a uniform stochastic ordering constraint. Ann. Statist. 28, 116–150.
Bachmeier, G. et al. (2019). k-Majority digraphs and the hardness of voting with a constant number of voters. J. Comput. System Sci. 105, 130–157.
Blyth, C. R. (1972). Some probability paradoxes in choice from among random alternatives. J. Amer. Statist. Assoc. 67, 366–373.
Blom, G. and Thorburn, D. (1982). How many random digits are required until given sequences are obtained? J. Appl. Prob. 19, 518–531.
Boland, P. J., Singh, H. and Cukic, B. (2004). The stochastic precedence ordering with applications in sampling and testing. J. Appl. Prob. 41, 73–82.
De Santis, E. (2021). Ranking graphs through hitting times of Markov chains. Random Structures Algorithms 59, 189–203.
De Santis, E., Fantozzi, F. and Spizzichino, F. (2015). Relations between stochastic orderings and generalized stochastic precedence. Prob. Eng. Inf. Sci. 29, 329–343.
De Santis, E., Malinovsky, Y. and Spizzichino, F. (2021). Stochastic precedence and minima among dependent variables. Methodology Comput. Appl. Prob. 23, 187–205.
De Santis, E. and Spizzichino, F. (2012). First occurrence of a word among the elements of a finite dictionary in random sequences of letters. Electron. J. Prob. 17, 19.
De Santis, E. and Spizzichino, F. (2016). Some sufficient conditions for stochastic comparisons between hitting times for skip-free Markov chains. Methodology Comput. Appl. Prob. 18, 1021–1034.
Erdős, P. and Moser, L. (1964). On the representation of directed graphs as unions of orderings. Publ. Math. Inst. Hung. Acad. Sci. A 9, 125–132.
Finkelstein, M. and Hazra, N. K. (2021). Generalization of the pairwise stochastic precedence order to the sequence of random variables. Prob. Eng. Inf. Sci. 35, 699–707.
Fishburn, P. C. (1981). Inverted orders for monotone scoring rules. Discrete Appl. Math. 3, 27–36.
Foschi, R., Nappo, G. and Spizzichino, F. (2021). Diagonal sections of copulas, multivariate conditional hazard rates and distributions of order statistics for minimally stable lifetimes. Dependence Modeling 9, 394–423.
Gehrlein, W. V. and Lepelley, D. (2017). Elections, Voting Rules and Paradoxical Outcomes. Springer, Cham.
Guibas, L. J. and Odlyzko, A. M. (1981). String overlaps, pattern matching, and nontransitive games. J. Combinatorial Theory A 30, 183–208.
Hazla, J., Mossel, E., Ross, N. and Zheng, G. (2020). The probability of intransitivity in dice and close elections. Prob. Theory Relat. Fields 178, 951–1009.
Li, S.-Y. R. (1980). A martingale approach to the study of occurrence of sequence patterns in repeated experiments. Ann. Prob. 8, 1171–1176.
Marichal, J.-L. and Mathonet, P. (2011). Extensions of system signatures to dependent lifetimes: explicit expressions and interpretations. J. Multivariate Anal. 102, 931–936.
McGarvey, D. C. (1953). A theorem on the construction of voting paradoxes. Econometrica 21, 608–610.
Montes, I., Rademaker, M., Perez-Fernandez, R. and De Baets, B. (2020). A correspondence between voting procedures and stochastic orderings. Europ. J. Operat. Res. 285, 977–987.
Navarro, J. and Rubio, R. (2010). Comparisons of coherent systems using stochastic precedence. Test 19, 469–486.
Navarro, J., Spizzichino, F. and Balakrishnan, N. (2010). The role of average and projected systems in the study of coherent systems. J. Multivariate Anal. 101, 1471–1482.
Nurmi, H. (1999). Voting Paradoxes and How to Deal with Them. Springer, Berlin.
Saari, D. G. (1989). A dictionary for voting paradoxes. J. Econom. Theory 48, 443–475.
Saari, D. G. (1990). The Borda dictionary. Social Choice Welfare 7, 279–317.
Saari, D. G. (1995). A chaotic exploration of aggregation paradoxes. SIAM Rev. 37, 37–52.
Saari, D. G. (2018). Discovering aggregation properties via voting. In New Handbook of Mathematical Psychology, Vol. 2, Cambridge University Press, pp. 271–321.
Samaniego, F. J. (2007). System Signatures and Their Applications in Engineering Reliability. Springer, New York.
Savage, R. P., Jr. (1994). The paradox of nontransitive dice. Amer. Math. Monthly 101, 429–436.
Shaked, M. and Shanthikumar, J. G. (1990). Dynamic construction and simulation of random vectors. In Topics in Statistical Dependence (IMS Lecture Notes—Monogr. Ser. 16), Institute of Mathematical Statistics, Hayward, CA, pp. 415–433.
Shaked, M. and Shanthikumar, J. G. (1994). Stochastic Orders and Their Applications. Academic Press, Boston.
Shaked, M. and Shanthikumar, J. G. (2015). Multivariate conditional hazard rate functions—an overview. Appl. Stoch. Models Business Industry 31, 285–296.
Shelah, S. (2009). What majority decisions are possible. Discrete Math. 309, 2349–2364.
Spizzichino, F. (2019). Reliability, signature, and relative quality functions of systems under time-homogeneous load-sharing models. Appl. Stoch. Models Business Industry 35, 158–176.
Steinhaus, H. and Trybula, S. (1959). On a paradox in applied probabilities. Bull. Acad. Pol. Sci. Ser. Math. 7, 67–69.
Trybula, S. (1969). Cyclic random inequalities. Zastos. Mat. 10, 123–127.