
Method-of-moments estimators of a scale parameter based on samples from a coherent system

Published online by Cambridge University Press:  16 February 2023

Claudio Macci
Affiliation:
Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, I-00133 Rome, Italy
Jorge Navarro*
Affiliation:
Facultad de Matemáticas, Universidad de Murcia, 30100 Murcia, Spain
*Corresponding author. E-mail: [email protected]

Abstract

In this paper, we study the estimation of a scale parameter from a sample of lifetimes of coherent systems with a fixed structure. We assume that the components are independent and identically distributed having a common distribution which belongs to a scale parameter family. Some results are obtained as well for dependent (exchangeable) components. To this end, we will use the representations for the distribution function of a coherent system based on signatures. We prove that the efficiency of the estimators depends on the structure of the system and on the scale parameter family. In the dependence case, it also depends on the baseline copula function.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

In several practical situations, when one studies (non-repairable) coherent systems, the information about the component lifetimes is not available. In these cases, one just has information about the system lifetimes. If we assume that the component lifetimes are independent and identically distributed (i.i.d.) with a common distribution in a scale parameter family with an unknown parameter $\theta$, the purpose is to estimate this parameter from the system sample.

Of course, to this end, we have to take into account the system structure. Thus, the procedure is not the same if we have information about series systems (i.e. the first component failures in groups of size $h$) or lifetimes from any other system structures. Several results for this kind of data have been obtained in the literature under different assumptions/models. For example, in [Reference Fallah, Asgharzadeh and Ng8], a parametric proportional reversed hazard rate model is assumed for the common distribution of the components while a proportional hazard rate model is assumed in [Reference Ng, Navarro and Balakrishnan16]. A load-sharing model with active redundancy is analyzed in [Reference Ling, Ng, Chan and Balakrishnan10]. The best linear unbiased estimator (BLUE) is obtained in [Reference Balakrishnan, Ng and Navarro4] under a scale parameter model. A numerical method is used in [Reference Yang, Ng and Balakrishnan20] to get the maximum likelihood estimator (MLE) in a general parametric model with i.i.d. components. A nonparametric approach was developed in [Reference Balakrishnan, Ng and Navarro3].

In this paper, we consider method-of-moments estimators of $\theta$ and we study the rate of convergence to $\theta$ by referring to (the square of) the coefficient of variation in (3.3) which can be seen as a suitable asymptotic variance. In our analysis, we will use the representations based on signatures for coherent systems. The concept of signature was introduced in 1985 by F.J. Samaniego (see [Reference Samaniego18] or Section 2). It can be used to represent the system distribution function as a mixture of the distribution functions of the $k$-out-of-$h$ systems (or the order statistics). In the i.i.d. case, the signature only depends on the system structure and will allow us to get the estimators for $\theta$. In other cases, it is better to use the concept of minimal signature (see [Reference Navarro, Ruiz and Sandoval14] or Section 2) which can be used to get a similar representation based on series systems. Sometimes, this representation is more convenient since it simplifies the calculations (see the illustrative examples presented in Section 4).

Both representations can be extended to the case of exchangeable (i.e. permutation symmetric) components. So we can also obtain similar results for this case. The dependence structure between the components in the same system is represented by a given copula function with a dependence parameter.

We will prove that the rate of convergence of the estimator will depend on the system structure (signature) and on the scale parameter family. In the illustrative examples, we analyze all the system structures with four or less components, and the exponential and Pareto scale parameter families. In the final Examples 4.3 and 4.4, we consider cases in which the dependence structure of the component lifetimes is modeled with a Farlie-Gumbel-Morgenstern (FGM for short) copula or a Clayton copula.

We shall see that the performance of the estimators can be related to the Lorenz order. This stochastic order is based on the well-known Lorenz curve, which is used to measure inequality in several economic scenarios; see, e.g., [Reference Arnold and Sarabia1]. The main properties of the Lorenz order can be found in, e.g., [Reference Arnold and Sarabia1,Reference Belzunce, Martínez-Riquelme and Mulero5]. In our context, this order can be used to measure the dispersion of the data obtained from the different system structures and to determine which cases provide better results (i.e. which cases allow us to use estimators with faster convergence).

The rest of the paper is organized as follows. The notation, basic definitions, and some preliminary results are given in Section 2. In Section 3, we present the estimation problem (based on a method-of-moments estimator) and the results on the efficiency of such estimators. Some illustrative examples are shown in Section 4. The conclusions and open tasks for future research are given in Section 5.

Throughout the paper, the terms increasing and decreasing are used to represent nondecreasing and nonincreasing, respectively. Whenever we use an expectation we are tacitly assuming that it exists.

2. Preliminaries

In this section, we recall some preliminaries on coherent systems and signatures. A (binary) system with $h$ components is a Boolean function

$$\varphi:\{0,1\}^h\to\{0,1\}$$

where $\varphi (x_1,\ldots,x_h)=1$ (resp. $0$) indicates that the system works (fails) when the components have fixed states represented by $x_1,\ldots,x_h\in \{0,1\}$ ($x_i=1$ means that the $i$th component works). A system $\varphi$ is semi-coherent if it is increasing and satisfies $\varphi (0,\ldots,0)=0$ and $\varphi (1,\ldots,1)=1$. A semi-coherent system might contain irrelevant components that do not affect the system performance. If this is not the case, it is called coherent. This property is equivalent to assuming that $\varphi$ is strictly increasing in each variable in at least one point (for each variable).
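These conditions can be checked mechanically by enumerating $\{0,1\}^h$. The sketch below (the helper `is_coherent` is our own, not from the paper) tests the boundary conditions, monotonicity, and the relevance of every component:

```python
from itertools import product

def is_coherent(phi, h):
    """Sketch: check phi(0,...,0)=0, phi(1,...,1)=1, that phi is increasing,
    and that phi is strictly increasing in each variable at some point."""
    if phi((0,) * h) != 0 or phi((1,) * h) != 1:
        return False
    relevant = [False] * h
    for x in product((0, 1), repeat=h):
        for j in range(h):
            if x[j] == 0:
                y = x[:j] + (1,) + x[j + 1:]
                if phi(y) < phi(x):      # flipping 0 -> 1 decreased phi
                    return False
                if phi(y) > phi(x):      # component j is relevant here
                    relevant[j] = True
    return all(relevant)

# component 1 in series with a parallel pair of components 2 and 3: coherent
print(is_coherent(lambda x: x[0] * max(x[1], x[2]), 3))  # True
# component 2 is irrelevant here: semi-coherent but not coherent
print(is_coherent(lambda x: x[0], 2))                    # False
```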

The lifetime of the coherent system will be represented by $T$. It depends on the system structure $\varphi$ and on the component lifetimes $X_1,\ldots,X_h$. In this paper, we assume that they are exchangeable (EXC for short), that is, that the random vector $(X_1,\ldots,X_h)$ is permutation invariant in distribution. Thus, for any permutation $\sigma$ of $\{1,\ldots,h\}$, the random vector $(X_{\sigma (1)},\ldots,X_{\sigma (h)})$ is distributed as $(X_1,\ldots,X_h)$. Then the random variables $X_1,\ldots,X_h$ are identically distributed (i.d.) and, of course, a particular case in which the EXC condition holds is when $X_1,\ldots,X_h$ are independent and identically distributed (i.i.d.) random variables.

The first signature representation was obtained in Samaniego [Reference Samaniego18]. In that reference, it is proved that if the components are i.i.d. with a common continuous distribution function, then the system reliability function $\bar F_T(t)=P(T \gt t)$ can be written as

(2.1) \begin{equation} \bar F_T(t) =\sum_{i=1}^h s_i \bar F_{i:h}(t)\quad (\text{for all}\ t \gt 0), \end{equation}

where $\bar F_{1:h},\ldots, \bar F_{h:h}$ are the reliability functions of the ordered component lifetimes (order statistics) $X_{1:h},\ldots, X_{h:h}$; here $X_{i:h}$ represents the lifetime of an $(h-i+1)$-out-of-$h$ system (i.e. a system that works when at least $h-i+1$ of its $h$ components work). In particular, $X_{1:h}$ and $X_{h:h}$ represent the lifetimes of series and parallel systems with $h$ components, respectively.

The vector $\underline {s}=(s_1,\ldots,s_h)$ with the coefficients in that representation is known as the (Samaniego) signature of the system. These coefficients are nonnegative values that only depend on the system structure $\varphi$. They can be computed from

(2.2) \begin{equation} s_i=\frac 1 {\binom {h}{i-1} }\sum_{x_1+\cdots+x_h=h-i+1}\varphi(x_1,\ldots, x_h)- \frac 1 {\binom {h}{i} }\sum_{x_1+\dots+x_h=h-i}\varphi(x_1,\ldots, x_h). \end{equation}
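As an illustrative sketch, Eq. (2.2) can be evaluated directly with exact rational arithmetic by enumerating all state vectors (the helper names are ours):

```python
from fractions import Fraction
from itertools import product
from math import comb

def signature(phi, h):
    """Samaniego signature (s_1, ..., s_h) of a structure phi via Eq. (2.2)."""
    def level_sum(m):
        # sum of phi over the state vectors with exactly m working components
        return sum(phi(x) for x in product((0, 1), repeat=h) if sum(x) == m)
    return [Fraction(level_sum(h - i + 1), comb(h, i - 1))
            - Fraction(level_sum(h - i), comb(h, i))
            for i in range(1, h + 1)]

# T = min(X1, max(X2, X3)): signature (1/3, 2/3, 0)
print(signature(lambda x: x[0] * max(x[1], x[2]), 3))
```

For instance, the 2-out-of-3 system (which fails at the second component failure, so $T=X_{2:3}$) gives `signature(lambda x: int(sum(x) >= 2), 3) == [0, 1, 0]`.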

Samaniego's representation (2.1) can be extended to the case of EXC components whenever we use this formula to compute the signature (see, e.g., [Reference Navarro12]). For some properties and applications of signatures, see [Reference Izadkhah, Amini-Seresht and Balakrishnan9,Reference Navarro12,Reference Ross, Shahshahani and Weiss17,Reference Samaniego18,Reference Yi, Balakrishnan and Li21] and the references therein.

An alternative representation under the EXC condition was introduced in [Reference Navarro, Ruiz and Sandoval14] showing that the system reliability function can also be written as

$$\bar F_T(t) =\sum_{i=1}^h a_i \bar F_{1:i}(t)\quad (\text{for all}\ t\geq 0),$$

where $\bar F_{1:i}(t)=P(X_{1:i} \gt t)$ is the reliability function of the series system with $i$ components for $i=1,\ldots,h$ and $\underline {a}=(a_1,\ldots,a_h)$ is the minimal signature of the system. The coefficients in $\underline {a}$ are integers that only depend on the system structure. Note that some of them can be negative. Unfortunately, we do not have an explicit expression similar to (2.2) to compute $\underline {a}$ from $\varphi$. However, $\underline {a}$ can be computed from $\underline {s}$ and vice versa (see Remark 2.2 in [Reference Navarro12, p. 45]). For some explicit computations of $\underline {s}$ and $\underline {a}$, see [Reference Navarro12,Reference Navarro and Rubio13,Reference Shaked and Suárez-Llorens19].

Note that if we consider coherent systems with $k$ EXC components labeled from $1$ to $k$ for some $k \gt h$, then this representation can be extended as

$$\bar F_T(t) =\sum_{i=1}^k a_i \bar F_{1:i}(t)\quad (\text{for all}\ t\geq 0),$$

where $a_i=0$ for $i=h+1,\ldots,k$. Then we can write the minimal signatures of all these systems as numerical vectors $\underline {a}=(a_1,\ldots,a_k)$ of dimension $k \gt h$.

The reliability functions $\bar F_{1:1},\ldots,\bar F_{1:k}$ can be computed from the survival copula representation for the joint reliability function of $(X_1,\ldots,X_k)$ (see, e.g., [Reference Nelsen15])

$$P(X_1 \gt x_1, \ldots, X_k \gt x_k)=\widehat C(\bar G(x_1),\ldots,\bar G(x_k) )$$

as

$$\bar F_{1:i}(t)=P(X_1 \gt t, \ldots, X_i \gt t)=\widehat C(\underbrace{\bar G(t),\ldots,\bar G(t)}_{i\ \mathrm{times}},1,\ldots,1)$$

for $i=1,\ldots,k$, where $\bar G=1-G$ is the common reliability function of the components and $\widehat C$ is the survival copula of $(X_1,\ldots,X_k)$. In particular, if the components are i.i.d., then $\bar F_{1:i}(t)=\bar G^i(t)$ for $i=1,\ldots,k$.
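As a concrete sketch, under an assumed multivariate Clayton survival copula $\widehat C(u_1,\ldots,u_k)=(u_1^{-\delta}+\cdots+u_k^{-\delta}-(k-1))^{-1/\delta}$ with $\delta \gt 0$, setting the last $k-i$ arguments to $1$ gives a one-line formula for $\bar F_{1:i}$ (the function name is ours; the i.i.d. case $\bar G^i$ is recovered in the limit $\delta \to 0$):

```python
import math

def fbar_series(i, t, gbar, delta):
    """Sketch: Fbar_{1:i}(t) = C_hat(Gbar(t), ..., Gbar(t), 1, ..., 1)
    under an assumed Clayton survival copula with parameter delta > 0."""
    u = gbar(t)
    return (i * u ** (-delta) - (i - 1)) ** (-1.0 / delta)

gbar = lambda t: math.exp(-t)   # exponential baseline
# near-independence (very small delta) is close to the i.i.d. value Gbar^2:
print(fbar_series(2, 1.0, gbar, delta=1e-6), gbar(1.0) ** 2)
```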

By using this representation, the expected lifetime of the system $T$ can be computed as

$$\mathbb{E}[T]=\sum_{i=1}^k a_i \mathbb{E}[X_{1:i}].$$

Analogously, the variance of $T$ can be computed as $\mathrm {Var}[T]=\mathbb {E}[T^2]-\mathbb {E}^2[T]$, where

$$\mathbb{E}[T^2]=\sum_{i=1}^k a_i\mathbb{E}[X_{1:i}^2].$$

As we will see later (see in particular some examples in Section 4), this representation in terms of the minimal signature is more convenient for the computation of the mean and the variance of $T$ because the moments of $X_{1:i}$ are (usually) easier to compute than the moments of $X_{i:k}$.
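As a sketch of this computation for i.i.d. components (so that $\bar F_{1:i}=\bar G^i$), the moments $\mathbb{E}[X_{1:i}^r]=\int_0^\infty r\,t^{r-1}\,\bar G^i(t)\,dt$ can be approximated numerically; the helper names and the integration parameters below are our own choices:

```python
import math

def moment_series(i, r, gbar, upper=60.0, steps=60_000):
    """E[X_{1:i}^r] = int_0^inf r t^(r-1) gbar(t)^i dt for i.i.d.
    components, approximated by the trapezoidal rule (sketch)."""
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * r * t ** (r - 1) * gbar(t) ** i
    return total * h

def system_moments(a, gbar):
    """Mean and variance of T from the minimal signature a = (a_1, ..., a_k)."""
    mean = sum(ai * moment_series(i, 1, gbar) for i, ai in enumerate(a, 1))
    m2 = sum(ai * moment_series(i, 2, gbar) for i, ai in enumerate(a, 1))
    return mean, m2 - mean ** 2

gbar = lambda t: math.exp(-t)              # exponential baseline
mean, var = system_moments((2, -1), gbar)  # two-component parallel system
print(mean, var)   # approx 3/2 and 5/4
```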

3. The method-of-moments estimator and results

In this section, we present the estimation problem based on a method-of-moments estimator, together with some properties of this estimator. We also discuss some connections with the theory of large deviations. Finally, we prove a result related with the Lorenz order and we obtain confidence intervals based on an asymptotic normality result.

3.1. The method-of-moments estimator

Let $T_1,\ldots,T_n$ be $n$ i.i.d. replications of the lifetime $T$ of a coherent system with $h$ EXC components. In this paper, we assume that, given a continuous baseline distribution function $G$ with support on $(0,\infty )$ (thus $G(0)=0$), the common distribution function $G_\theta$ of the component lifetimes $X_1,\ldots,X_h$ can be written as

$$G_\theta(x):=G\left(\frac{x}{\theta}\right),$$

where $\theta \gt 0$ is an unknown scale parameter and $G$ is a known baseline distribution function. Thus, we assume that the i.d. component lifetimes belong to a (unidimensional) scale parameter family.

Several items depend on $\theta$ (probabilities $P_\theta (\cdot )$, expected values $\mathbb {E}_\theta [\cdot ]$, etc.), and some formulas can be expressed in terms of the case $\theta =1$; indeed, for all $\theta \gt 0$, we have

$$P_\theta(X\in A)=P_1(\theta X\in A)\quad \text{for all measurable sets $A$}.$$

Then, we recall the following formulas in terms of the signature $\underline {s}=(s_1,\ldots,s_h)$. Similar expressions can be obtained for the minimal signature. Thus, we get

$$\bar{F}_{T;\theta}(t):=P_\theta(T \gt t)=\sum_{i=1}^h s_iP_\theta(X_{i:h} \gt t) =\sum_{i=1}^h s_iP_1(\theta X_{i:h} \gt t)\quad \text{for all}\ t \gt 0$$

and

(3.1) \begin{equation} \mathbb{E}_\theta[e^{\gamma T}]=\sum_{i=1}^h s_i\mathbb{E}_\theta[e^{\gamma X_{i:h}}] =\sum_{i=1}^h s_i\mathbb{E}_1[e^{\gamma\theta X_{i:h}}] \end{equation}

for all $\gamma \in \mathbb {R}$ such that these expectations exist. Moreover, if we consider the notation

$$\mu_1(\underline{s},G):=\mathbb{E}_1[T]=\sum_{i=1}^h s_i\mathbb{E}_1[X_{i:h}]$$

and

$$\sigma_1^2(\underline{s},G):=\mathrm{Var}_1[T]=\mathbb{E}_1[T^2]-\mathbb{E}_1^2[T] =\sum_{i=1}^h s_i\mathbb{E}_1[X_{i:h}^2]-\mu_1^2(\underline{s},G),$$

we have

$$\mathbb{E}_\theta[T]=\sum_{i=1}^h s_i\mathbb{E}_\theta[X_{i:h}]=\theta\mu_1(\underline{s},G)$$

and

$$\mathrm{Var}_\theta[T]=\mathbb{E}_\theta[T^2]-\mathbb{E}_\theta^2[T] =\sum_{i=1}^h s_i\mathbb{E}_\theta[X_{i:h}^2]-(\theta\mu_1(\underline{s},G))^2 =\theta^2\sigma_1^2(\underline{s},G).$$

Now, we recall the method-of-moments estimator of $\theta$. We have to consider the solution of the equation

$$\mathbb{E}_\theta[T]=\frac{T_1+\cdots+T_n}{n}$$

with unknown quantity $\theta$. By taking into account the equality $\mathbb {E}_\theta [T]=\theta \mu _1(\underline {s},G)$ shown above, this equation can be immediately solved; indeed the solution, which is a random variable $\hat {\Theta }_n$ depending on the sample mean $(T_1+\cdots +T_n)/n$, is given by

(3.2) \begin{equation} \hat{\Theta}_n=\frac{T_1+\cdots+T_n}{n\mu_1(\underline{s},G)}. \end{equation}

In view of what follows we also introduce the notation $\sigma _\bullet ^2(\underline {s},G)$ for the square of the coefficient of variation under $P_1$ of $T$, that is,

(3.3) \begin{equation} \sigma_\bullet^2(\underline{s},G):=\frac{\sigma_1^2(\underline{s},G)}{\mu_1^2(\underline{s},G)}. \end{equation}

Then, for every fixed $\theta \gt 0$, we can immediately check that

$$\mathbb{E}_\theta[\hat{\Theta}_n]=\theta$$

(i.e. $\hat {\Theta }_n$ is an unbiased estimator of $\theta$) and

(3.4) \begin{equation} \mathrm{Var}_\theta[\hat{\Theta}_n]=\frac{\theta^2}{n}\sigma_\bullet^2(\underline{s},G). \end{equation}

Moreover, as an immediate consequence of the law of large numbers, $\hat {\Theta }_n\to \theta$ almost surely under $P_\theta$ (i.e. $\hat {\Theta }_n$ is a consistent estimator of $\theta$).
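A small Monte Carlo sketch (the setup and helper name are ours) illustrates these properties for a two-component parallel system with exponential components, for which $\mu_1(\underline{s},G)=\mathbb{E}_1[X_{2:2}]=3/2$ (see Example 4.1):

```python
import random

def theta_hat(theta, n, seed=0):
    """Estimator (3.2) from n simulated parallel-system lifetimes
    T = max(X1, X2), with X1, X2 i.i.d. exponential with scale theta."""
    rng = random.Random(seed)
    sample = [max(rng.expovariate(1 / theta), rng.expovariate(1 / theta))
              for _ in range(n)]
    mu1 = 3 / 2                     # mu_1(s, G) for this structure
    return sum(sample) / (n * mu1)  # Eq. (3.2)

print(theta_hat(theta=2.0, n=100_000))   # close to 2.0
```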

We can also say that the smaller is the coefficient of variation of $T$ (under $P_1$), the faster is the convergence of $\hat {\Theta }_n$ to $\theta$. So we would like to find conditions on the distribution of the random variable $T$ in order to find inequalities between the corresponding values of $\sigma _\bullet ^2(\underline {s},G)$. This will be done in Section 3.3 (see Proposition 3.1) by referring to a condition in terms of the Lorenz order recalled in the next definition (see, e.g., [Reference Arnold and Sarabia1,Reference Belzunce, Martínez-Riquelme and Mulero5]).

Definition 3.1. Let $Z$ be a nonnegative random variable with mean $\mathbb {E}[Z] \gt 0$ and distribution function $F_Z$. Then, the Lorenz curve of $Z$ is defined by

$$L_Z(u):=\frac{1}{\mathbb{E}[Z]}\int_0^{F_Z^{{-}1}(u)}zF_Z(dz)\quad \text{for}\ u\in(0,1),$$

where $F_Z^{-1}$ is the quantile function of $Z$. Then we say that $Z_1$ is smaller than $Z_2$ in the Lorenz order (and we write $Z_1\leq _LZ_2$ for short) if $L_{Z_1}(u)\leq L_{Z_2}(u)$ for every $u\in (0,1)$.
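For intuition, the empirical analogue of the Lorenz curve in Definition 3.1 can be computed from a sample as follows (`lorenz_curve` is a hypothetical helper):

```python
def lorenz_curve(sample):
    """Empirical Lorenz curve: points (j/m, (z_1+...+z_j)/(z_1+...+z_m))
    for the ordered sample z_1 <= ... <= z_m (sketch)."""
    z = sorted(sample)
    total, cum, points = sum(z), 0.0, []
    for j, zj in enumerate(z, start=1):
        cum += zj
        points.append((j / len(z), cum / total))
    return points

print(lorenz_curve([1, 1, 2, 4]))
# [(0.25, 0.125), (0.5, 0.25), (0.75, 0.5), (1.0, 1.0)]
```

Comparing two such curves pointwise is the empirical counterpart of the order $\leq_L$.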

Finally, in view of the confidence intervals presented in Section 3.3 (see Eq. (3.8) and Remark 3.4), we obtain an asymptotic normality result for $\hat {\Theta }_n$.

Lemma 3.1. Under $P_\theta$, $\sqrt {n}(\hat {\Theta }_n-\theta )/(\theta \sigma _\bullet (\underline {s},G))$ converges weakly to the standard Normal distribution or, equivalently, $\sqrt {n}(\hat {\Theta }_n-\theta )$ converges weakly to a centered Normal distribution with variance $\theta ^2\sigma _\bullet ^2(\underline {s},G)$.

Proof. We remark that

$$\frac{\sqrt{n}}{\theta\sigma_1(\underline{s},G)}\left(\frac{T_1+\cdots+T_n}{n}-\theta\mu_1(\underline{s},G)\right) =\frac{\sqrt{n}}{\theta\sigma_\bullet(\underline{s},G)}(\hat{\Theta}_n-\theta);$$

so the asymptotic normality result in the statement is an immediate consequence of the Central Limit Theorem (together with the definition of $\sigma _\bullet (\underline {s},G)$ in Eq. (3.3)).  □

3.2. Some connections with the theory of large deviations

Here, we assume that the moment generating function of $T$, given in Eq. (3.1), is finite in a neighborhood of the origin $\gamma =0$. In view of what follows we consider the function $\kappa _{T;\theta }$ defined by

$$\kappa_{T;\theta}(\gamma):=\log\mathbb{E}_\theta[e^{\gamma T}],$$

and its Legendre transform $\kappa _{T;\theta }^*$ defined by

$$\kappa_{T;\theta}^*(t):=\sup_{\gamma\in\mathbb{R}}\{t\gamma-\kappa_{T;\theta}(\gamma)\}.$$

Actually, we can refer to the case $\theta =1$; indeed, we can easily check that

$$\kappa_{T;\theta}(\gamma)=\kappa_{T;1}(\theta\gamma)\quad \text{and}\quad \kappa_{T;\theta}^*(t)=\kappa_{T;1}^*\left(\frac{t}{\theta}\right).$$

The functions $\kappa _{T;\theta }$ and $\kappa _{T;\theta }^*$ have some well-known properties. Here, we recall some of them for $\theta =1$: $\kappa _{T;1}^*$ is a nonnegative convex function (regular in the interior of the set in which it is finite), $\kappa _{T;1}^*(t)=\infty$ if $t\leq 0$, $\kappa _{T;1}^*(t)=0$ if and only if $t=\mu _1(\underline {s},G)$, $(\kappa _{T;1}^*)^\prime (\mu _1(\underline {s},G))=0$ and $(\kappa _{T;1}^*)^{\prime \prime }(\mu _1(\underline {s},G))=1/\sigma _1^2(\underline {s},G)$.
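These properties can be verified numerically in a toy case. For $T\sim\operatorname{Exp}(1)$ under $P_1$ (a single exponential component), $\kappa_{T;1}(\gamma)=-\log(1-\gamma)$ for $\gamma \lt 1$ and the Legendre transform has the closed form $\kappa_{T;1}^*(t)=t-1-\log t$; the crude grid search below is only an illustrative sketch:

```python
import math

def kappa_star_exp(t, lo=-30.0, hi=0.9999, steps=300_000):
    """Numerical Legendre transform sup_gamma {t*gamma - kappa(gamma)}
    for kappa(gamma) = -log(1 - gamma), i.e. T ~ Exp(1) (sketch)."""
    best = -math.inf
    for k in range(steps + 1):
        g = lo + (hi - lo) * k / steps
        best = max(best, t * g + math.log(1.0 - g))
    return best

for t in (0.5, 1.0, 2.0):
    print(t, kappa_star_exp(t), t - 1.0 - math.log(t))  # grid vs closed form
```

In particular, $\kappa_{T;1}^*(1)=0$, in agreement with the property $\kappa_{T;1}^*(\mu_1(\underline{s},G))=0$, since $\mu_1=1$ here.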

Then, as a consequence of the Cramér theorem on $\mathbb {R}$ (see, e.g., Theorem 2.2.3 in [Reference Dembo and Zeitouni7]), we can say that the sequence $\{\hat {\Theta }_n:n\geq 1\}$ defined by (3.2) satisfies the large deviation principle with a good rate function $I_{\hat {\Theta },\theta }$ defined by

$$I_{\hat{\Theta},\theta}(\hat{\theta}):= \sup_{\gamma\in\mathbb{R}}\{\gamma\hat{\theta}\mu_1(\underline{s},G)-\kappa_{T;\theta}(\gamma)\},$$

and we easily get

$$I_{\hat{\Theta},\theta}(\hat{\theta}) =\sup_{\gamma\in\mathbb{R}}\{\gamma\hat{\theta}\mu_1(\underline{s},G)-\kappa_{T;1}(\theta\gamma)\} =\kappa_{T;1}^*\left(\frac{\hat{\theta}}{\theta}\mu_1(\underline{s},G)\right).$$

This means that we have

$$\limsup_{n\to\infty}\frac{1}{n}\log P_\theta(\hat{\Theta}_n\in C)\leq{-}\inf_{\hat{\theta}\in C}I_{\hat{\Theta},\theta}(\hat{\theta}) \quad \text{for all closed sets}\ C$$

and

$$\liminf_{n\to\infty}\frac{1}{n}\log P_\theta(\hat{\Theta}_n\in O)\geq{-}\inf_{\hat{\theta}\in O}I_{\hat{\Theta},\theta}(\hat{\theta}) \quad \text{for all open sets}\ O.$$

Some properties of the rate function $I_{\hat {\Theta },\theta }$ can be obtained as consequences of the properties of the function $\kappa _{T;1}^*$ cited above. In particular we have $I_{\hat {\Theta },\theta }(\hat {\theta })=0$ if and only if $\hat {\theta }=\theta$ (this is not surprising because, as we said above, $\hat {\Theta }_n$ is a consistent estimator of $\theta$). Moreover, with some easy computations, one can check that

(3.5) \begin{equation} I_{\hat{\Theta},\theta}^{\prime\prime}(\theta)=\frac{\mu_1^2(\underline{s},G)}{\theta^2}(\kappa_{T;1}^*)^{\prime\prime}(\mu_1(\underline{s},G)) =\frac{1}{\theta^2\sigma_\bullet^2(\underline{s},G)} \end{equation}

because $(\kappa _{T;1}^*)^{\prime \prime }(\mu _1(\underline {s},G))=1/\sigma _1^2(\underline {s},G)$ as we said above, and by taking into account (3.3).

In particular, if we take $B_\varepsilon (\theta )=(\theta -\varepsilon,\theta +\varepsilon )$ with $\varepsilon \gt 0$ small enough, we have

$$\lim_{n\to\infty}\frac{1}{n}\log P_\theta(|\hat{\Theta}_n-\theta|\geq\varepsilon)={-}I_{\hat{\Theta},\theta}(B_\varepsilon^c(\theta)),$$

where $I_{\hat {\Theta },\theta }(B_\varepsilon ^c(\theta )):=\inf _{\hat {\theta }\in B_\varepsilon ^c(\theta )}I_{\hat {\Theta },\theta }(\hat {\theta }) \gt 0.$ Thus, roughly speaking, $P_\theta (|\hat {\Theta }_n-\theta |\geq \varepsilon )$ tends to 0 as $\exp (-nI_{\hat {\Theta },\theta }(B_\varepsilon ^c(\theta )))$ when $n\to \infty$. So we can say that the larger is $I_{\hat {\Theta },\theta }(\hat {\theta })$ around $\hat {\theta }=\theta$, the faster is the convergence of $\hat {\Theta }_n$ to $\theta$. Moreover, this fact agrees with what we said above, i.e., $\hat {\Theta }_n$ converges faster to $\theta$ when we have a smaller $\sigma _\bullet ^2(\underline {s},G)$ because, for $\hat {\theta }$ near to $\theta$, $I_{\hat {\Theta },\theta }(\hat {\theta })$ behaves like the parabola $\hat {\theta }\mapsto {(\hat {\theta }-\theta )^2}/{(2\theta ^2\sigma _\bullet ^2(\underline {s},G))}$.

Finally, we can also provide the asymptotic decay of probabilities of other rare events. For instance, for $\alpha \gt 1$, we have

$$\inf_{\hat{\theta}\geq\alpha\theta}I_{\hat{\Theta},\theta}(\hat{\theta}) =\kappa_{T;1}^*(\alpha\mu_1(\underline{s},G))$$

and

$$\lim_{n\to\infty}\frac{1}{n}\log P_\theta(\hat{\Theta}_n\geq\alpha\theta) ={-}\kappa_{T;1}^*(\alpha\mu_1(\underline{s},G));$$

thus, roughly speaking, $P_\theta (\hat {\Theta }_n\geq \alpha \theta )$ tends to 0 as $\exp (-n\kappa _{T;1}^*(\alpha \mu _1(\underline {s},G)))$ when $n\to \infty$. However, we must note that it is not easy to compare $\kappa _{T;1}^*(\alpha \mu _1(\underline {s},G))$ and $\kappa _{T;1}^*(\alpha \mu _1(\underline {s}^\diamond,G^\diamond ))$ (for two signatures $\underline {s}$ and $\underline {s}^\diamond$ and for two distribution functions $G$ and $G^\diamond$) because we do not have an explicit expression of the rate function $\kappa _{T;1}^*$.

3.3. A result on Lorenz order and confidence intervals

We start by showing that, if the lifetimes of two coherent systems are ordered with respect to the Lorenz order $\leq _L$, then the same inequality holds for the respective coefficients of variation. The result was given in [Reference Marshall and Olkin11, p. 69], and can also be proved from Theorem 2.7.16 in [Reference Belzunce, Martínez-Riquelme and Mulero5]. It can be stated as follows.

Proposition 3.1. Let $T(\underline {s},G)$ and $T(\underline {s}^\diamond,G^\diamond )$ be the lifetimes of coherent systems associated with $(\underline {s},G)$ and $(\underline {s}^\diamond,G^\diamond )$, respectively. Then, $T(\underline {s},G)\leq _LT(\underline {s}^\diamond,G^\diamond )$ yields $\sigma _\bullet ^2(\underline {s},G)\leq \sigma _\bullet ^2(\underline {s}^\diamond,G^\diamond )$.

Remark 3.1. The inequality $\sigma _\bullet ^2(\underline {s},G)\leq \sigma _\bullet ^2(\underline {s}^\diamond,G^\diamond )$ is equivalent to

$$\frac{\sum_{i=1}^hs_i\mathbb{E}_1[X_{i:h}^2]-\mu_1^2(\underline{s},G)}{\mu_1^2(\underline{s},G)} \leq\frac{\sum_{i=1}^hs_i^\diamond \mathbb{E}^\diamond_1 [X_{i:h}^2]-\mu_1^2(\underline{s}^\diamond,G^\diamond)}{\mu_1^2(\underline{s}^\diamond,G^\diamond)},$$

and therefore, it is also equivalent to

(3.6) \begin{equation} \frac{\sum_{i=1}^hs_i\mathbb{E}_1[X_{i:h}^2]}{(\sum_{i=1}^hs_i\mathbb{E}_1[X_{i:h}])^2} \leq\frac{\sum_{i=1}^hs_i^\diamond {\mathbb{E}^\diamond_1}[X_{i:h}^2]}{(\sum_{i=1}^hs_i^\diamond {\mathbb{E}^\diamond_1}[X_{i:h}])^2}, \end{equation}

where $\mathbb {E}^\diamond _1$ refers to expectations for the baseline distribution function $G^\diamond$.

Thus, Proposition 3.1 shows that the Lorenz ordering between the lifetimes of coherent systems is enough to determine which estimators converge faster. Lorenz comparisons of the order statistics $X_{1:h},\ldots,X_{h:h}$ were studied in [Reference Arnold and Villaseñor2,Reference Da, Xu and Balakrishnan6]. In those references, it is shown that this ordering is not easy to obtain for order statistics. Actually, it depends on the baseline distribution $G$ (which is not the case for some other stochastic orders). Of course, it is also not easy to determine the Lorenz ordering between general coherent systems with i.d. components by using signatures, since the signature representation is a mixture of the distribution functions of order statistics.

Remark 3.2. Note that the order statistics also represent the ordered data in a sample from the components. Hence, the results in this paper can also be used to determine which ordered data provide better (faster) estimators for $\theta$. For example, if $h=3$, we can compare the estimators obtained from $X_{1:3}$ (the minimum value), $X_{2:3}$ (the median) or $X_{3:3}$ (the maximum value). As shown in [Reference Arnold and Villaseñor2], the answers to these questions depend on $G$. Also, note that in practice the minimum value $X_{1:3}$ is available early, so in a finite-time sampling procedure, we will have more (uncensored) data from it than from $X_{2:3}$ or $X_{3:3}$.

Remark 3.3. By taking into account what we said in Section 2 on minimal signatures, we can say that (3.6) is equivalent to

(3.7) \begin{equation} \frac{\sum_{i=1}^k a_i\mathbb{E}_1[X_{1:i}^2]}{(\sum_{i=1}^k a_i\mathbb{E}_1[X_{1:i}])^2}\leq \frac{\sum_{i=1}^k a_i^\diamond\mathbb{E}_1[X_{1:i}^2]}{(\sum_{i=1}^k a_i^\diamond\mathbb{E}_1[X_{1:i}])^2} \end{equation}

for two systems with common EXC (or i.i.d.) components and minimal signatures $\underline {a}$ and $\underline {a}^\diamond$. Note that, in particular, in these cases, we assume $G=G^\diamond$.

We conclude with the construction of some confidence intervals that can be derived from Lemma 3.1. Let $\ell \in (0,1)$ be an arbitrarily fixed confidence level, let $\Phi$ be the standard Normal distribution function, and therefore, let $\Phi ^{-1}({(1+\ell )}/{2})$ be the quantile of order ${(1+\ell )}/{2}$. Then,

$$\lim_{n\to\infty}P_\theta\left(\frac{\sqrt{n}}{\theta\sigma_\bullet(\underline{s},G)}|\hat{\Theta}_n-\theta| \leq\Phi^{{-}1}\left(\frac{1+\ell}{2}\right)\right)=\ell.$$

Moreover, since

$$\left\{\frac{\sqrt{n}}{\theta\sigma_\bullet(\underline{s},G)}|\hat{\Theta}_n-\theta|\leq\Phi^{{-}1}\left(\frac{1+\ell}{2}\right)\right\} =\left\{\left|\frac{\hat{\Theta}_n}{\theta}-1\right|\leq\frac{\sigma_\bullet(\underline{s},G)}{\sqrt{n}}\Phi^{{-}1}\left(\frac{1+\ell}{2}\right)\right\}$$

and $\hat {\Theta }_n$ is $P_\theta$ almost surely positive, we can easily obtain the following approximate confidence interval for $1/\theta$ at the level $\ell \in (0,1)$:

(3.8) \begin{equation} \left(\frac{1}{\hat{\Theta}_n}\left(1-\frac{\sigma_\bullet(\underline{s},G)}{\sqrt{n}}\Phi^{{-}1}\left(\frac{1+\ell}{2}\right)\right), \frac{1}{\hat{\Theta}_n}\left(1+\frac{\sigma_\bullet(\underline{s},G)}{\sqrt{n}}\Phi^{{-}1}\left(\frac{1+\ell}{2}\right)\right)\right). \end{equation}

Remark 3.4. Note that the length of the interval tends to zero as $n\to \infty$. Moreover, as $\theta \gt 0$, if the left endpoint of the interval is negative, it can be replaced with zero. By taking reciprocals, we can then obtain a confidence interval for $\theta$, whose right endpoint could be infinite. Note that if $G$ is known (e.g. exponential), then we can compute the endpoints of the interval for each system structure. Again, we note that in many models, it is better to use the minimal signature $\underline {a}$ than the signature $\underline {s}$ to compute the mean and the variance that we need to get the coefficient of variation $\sigma _\bullet (\underline {s},G)$ and the confidence interval in (3.8).
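Plugging numbers into (3.8) is straightforward. In the sketch below, the estimate, sample size, and confidence level are hypothetical; the value $\sigma_\bullet^2=5/9$ corresponds to a two-component parallel system with exponential baseline (see Example 4.1, where $\phi(0)=14/9$, so $\sigma_\bullet^2=\phi(0)-1=5/9$):

```python
from statistics import NormalDist

def ci_inverse_theta(theta_hat, n, sigma_bullet, ell=0.95):
    """Approximate confidence interval (3.8) for 1/theta at level ell."""
    q = NormalDist().inv_cdf((1.0 + ell) / 2.0)  # quantile of order (1+ell)/2
    half = sigma_bullet / n ** 0.5 * q
    return (1.0 / theta_hat) * (1.0 - half), (1.0 / theta_hat) * (1.0 + half)

# hypothetical estimate theta_hat = 2 from n = 400 system lifetimes
lo, hi = ci_inverse_theta(theta_hat=2.0, n=400, sigma_bullet=(5 / 9) ** 0.5)
print(lo, hi)   # an interval around 1/theta_hat = 0.5
```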

4. Examples

In this section, we analyze several baseline distribution functions $G$, and we find the best systems (we also use the terms best samples and faster samples) for the estimation of $\theta$, that is, the cases with a smaller $\sigma _\bullet ^2(\underline {s},G)$. We analyze i.i.d. cases in Examples 4.1, 4.2, and 4.5, and two EXC cases with dependence in Examples 4.3 and 4.4 (in these cases, $\sigma _\bullet ^2(\underline {s},G)$ also depends on the copula $C$).

In Example 4.1, we consider the main distribution in this field: the exponential model. The scale parameter family associated with this distribution has been widely studied in the literature. By taking into account (3.6) in Remark 3.1 (and also (3.7) in Remark 3.3), we introduce the notation

(4.1) \begin{equation} \phi(\underline{s}):=\frac{\sum_{i=1}^hs_i\mathbb{E}_1[X_{i:h}^2]}{(\sum_{i=1}^hs_i\mathbb{E}_1[X_{i:h}])^2}. \end{equation}

We will use the same notation for the expression based on the minimal signature

(4.2) \begin{equation} \phi(\underline{a}):= \frac{\sum_{i=1}^h a_i\mathbb{E}_1[X_{1:i}^2]}{(\sum_{i=1}^h a_i\mathbb{E}_1[X_{1:i}])^2} \end{equation}

(but note that the functions for $\underline {s}$ and $\underline {a}$ are different). Moreover, if $h=2$ (as happens at the beginning of Example 4.1), in (4.1), we have $\phi (s_1,s_2)=\phi (s_1,1-s_1)$, and so we simply write $\phi (s_1)$. It is easy to check that

$$\sigma_\bullet^2(\underline{s},G)=\phi(\underline{s})-1$$

and, by taking into account (3.5), we consider the function

(4.3) \begin{equation} \psi(\underline{s}):=\frac{1}{\sigma_\bullet^2(\underline{s},G)}=\frac{1}{\phi(\underline{s})-1}. \end{equation}

A similar notation is used for the minimal signature.

Example 4.1. Let us consider $G(t)=1-\exp (-t)$ for $t\geq 0$, and i.i.d. component lifetimes. If $h=2$, a straightforward calculation shows that $\mathbb {E}_1[X_{1:2}]=1/2$, $\mathbb {E}_1[X_{2:2}]=3/2$, $\mathbb {E}_1[X^2_{1:2}]=1/2$, and $\mathbb {E}_1[X^2_{2:2}]=7/2$. Hence,

$$\phi(s_1)=\frac{s_1\mathbb{E}_1[X_{1:2}^2]+s_2\mathbb{E}_1[X_{2:2}^2]}{(s_1\mathbb{E}_1[X_{1:2}]+s_2\mathbb{E}_1[X_{2:2}])^2} =2\frac{s_1+7s_2}{(s_1+3s_2)^2}=2\frac {7-6s_1}{(3-2s_1)^2}.$$

By plotting this function (see Figure 1), we see that the minimum value is attained at $s_1=0$ (and $s_2=1$) getting $\phi (0)=2(7/9)=1.555556$, that is, the best samples to estimate $\theta$ are those from $X_{2:2}$ (parallel systems). This result can also be obtained from the results for the Lorenz order given in [Reference Arnold and Villaseñor2] since $X_{2:2}\leq _L X_{1:2}$.

Figure 1. Function $\phi (s_1)$ in Example 4.1.

Note that the samples from $X_{2:2}$ (maximum values) are also better than the samples from the components ($X_1$ or $X_2$), which are represented by the mixed system with signature $s_1=s_2=1/2$ (see, e.g., [Reference Navarro12, p. 50]) and which leads to the value $\phi (1/2)=2$. This value coincides with the one obtained for the samples from the series system $X_{1:2}$, with $\phi (1)=2$. This is an expected property since both $X_1$ and $X_{1:2}$ have exponential distributions. However, we must note that, when working with lifetimes in practice, at any given time of a time-dependent experiment we always have more data from the series system $X_{1:2}$ than from $X_1$ or $X_{2:2}$ (since the series systems fail first). Actually, $X_{1:2}$ can be seen as an accelerated life test for $X_1$ (with twice the hazard rate) and with the same rate of convergence in the respective estimators. However, if we consider all the mixed systems with $h=2$, that is, $s_1\in [0,1]$, then the worst value is obtained at $s_1=5/6=0.833333$, yielding $\phi (0.833333)=2.25$ (see Figure 1).
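For $h=2$, the closed form of $\phi(s_1)$ is simple enough to verify directly; the following short numerical sketch (not part of the original derivation) evaluates it at the values discussed above.

```python
# Check of phi(s1) = 2*(7 - 6*s1)/(3 - 2*s1)**2 from Example 4.1
# (h = 2, i.i.d. exponential components).
def phi(s1):
    return 2 * (7 - 6 * s1) / (3 - 2 * s1) ** 2

print(phi(0.0))      # 14/9 = 1.5555...: parallel system X_{2:2}, the minimum
print(phi(0.5))      # 2: components X_1, X_2 (mixed system with s1 = s2 = 1/2)
print(phi(1.0))      # 2: series system X_{1:2}
print(phi(5 / 6))    # 2.25: the worst mixed system
```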

From now on we consider the minimal signature $\underline {a}$ in place of $\underline {s}$. In order to study the cases $h=1,2,3,4$ in coherent systems with i.i.d. components, we note that

$$\mathbb{E}_1[X_{1:i}]=\int_0^\infty \bar G^i(t)\,dt=\frac 1 i$$

and

$$\mathbb{E}_1[X^2_{1:i}]=\int_0^\infty 2t \bar G^i(t)\,dt=\frac 2 {i^2}.$$

So the method-of-moments estimator of $\theta$ is

$$\widehat{\Theta}_n:=\frac{\bar{T}_n}{\mu_1(\underline{a},G)}=\frac{(T_1+\cdots+T_n)/n}{ a_1+(1/2)a_2+(1/3)a_3+(1/4)a_4 }.$$
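As an illustration, the consistency of $\widehat{\Theta}_n$ can be checked by a small Monte Carlo experiment; this is a simulation sketch under assumed values $\theta=2$ and $n=10^5$ (not from the paper), for the parallel system $X_{2:2}$, whose minimal signature is $(2,-1)$ so that $\mu_1(\underline{a},G)=2-1/2=3/2$.

```python
import numpy as np

rng = np.random.default_rng(12345)
theta, n = 2.0, 100_000   # assumed true scale parameter and sample size

# Observed lifetimes of n parallel systems X_{2:2} built from
# i.i.d. exponential components with mean theta.
T = rng.exponential(scale=theta, size=(n, 2)).max(axis=1)

mu1 = 2 - 1 / 2           # mu_1(a, G) for the minimal signature (2, -1)
theta_hat = T.mean() / mu1
print(theta_hat)          # close to the true value theta = 2
```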

Hence, from (3.7), to determine the fastest estimator, we look for the minimum of the function

$$\phi(a_1,a_2,a_3,a_4)= \frac{\sum_{i=1}^h a_i\mathbb{E}_1[X_{1:i}^2]}{(\sum_{i=1}^h a_i\mathbb{E}_1[X_{1:i}])^2} =\frac{2a_1+(1/2)a_2+(2/9)a_3+(1/8)a_4}{(a_1+(1/2)a_2+(1/3)a_3+(1/4)a_4 )^2}.$$

Now we use the minimal signatures given in Table 2.2 of [Reference Navarro12, p. 43], obtaining the results for $\phi$ given in Table 1. There we also provide the value of $\psi (\underline {a})$ (see (4.3)) which, in some sense, measures the rate of convergence of the estimator $\widehat {\Theta }_n$ for $\theta$ (see (3.5)). The system structures that provide the best (fastest) estimators for $h=2,3,4$ are highlighted in boldface.
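The table entries can be reproduced from the minimal signatures alone; as a sketch (not part of the paper), the following code computes $\phi(\underline{a})$ and $\psi(\underline{a})$ exactly for the three parallel systems, whose minimal signatures are $(2,-1)$, $(3,-3,1)$, and $(4,-6,4,-1)$.

```python
from fractions import Fraction as F

def phi_psi(a):
    """phi(a) and psi(a) in (4.2)-(4.3) for the exponential baseline,
    where E_1[X_{1:i}] = 1/i and E_1[X_{1:i}^2] = 2/i^2."""
    m1 = sum(F(ai, i) for i, ai in enumerate(a, start=1))
    m2 = sum(F(2 * ai, i * i) for i, ai in enumerate(a, start=1))
    phi = m2 / m1 ** 2
    return phi, 1 / (phi - 1)

for a in [(2, -1), (3, -3, 1), (4, -6, 4, -1)]:   # X_{2:2}, X_{3:3}, X_{4:4}
    phi, psi = phi_psi(a)
    print(a, float(phi), float(psi))
```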

Table 1. Minimal signatures $\underline {a}$ and values of the system expected lifetime $\mu _1(\underline {a},G)=\mathbb {E}_1[T_i]$ and the functions $\phi (\underline {a})$ and $\psi (\underline {a})$ for the exponential distribution $G$ and for all the coherent systems with $1$–$4$ i.i.d. components.

Again we see that the best samples are those from the parallel systems $X_{2:2}$, $X_{3:3}$, and $X_{4:4}$ (lines $3$, $8$, and $28$, respectively). The samples from $X_{3:4}$ (line $23$) are also good. In [Reference Arnold and Villaseñor2], it is noted that $X_{3:4}$ and $X_{4:4}$ are not ordered in the Lorenz order for the exponential distribution, so we cannot use Proposition 3.1 here to compare the samples from these systems. Note that all the series systems have the same behavior (since all of them have exponential distributions), which is actually the worst one among all the coherent systems with $1$–$4$ components. This is also the behavior of the usual samples (i.e., the samples from the components). Therefore, in this case (exponential distribution and i.i.d. samples), any coherent system with $h\leq k=4$ provides estimators that are at least as fast and, except for the series structures, strictly faster. We conjecture that this is a general property for this case and for any order $k$. The next example shows that this property does not hold for other distributions $G$.

Example 4.2. Let us consider $\bar G(t)=(1+t)^{-\alpha }$ for $t\geq 0$ and $\alpha \gt 2$, that is, a Pareto type II distribution. Moreover, we still assume i.i.d. component lifetimes as in Example 4.1. Here, we use the symbol $\phi _\alpha$ for the function $\phi$ in (4.2). To study the cases $h=1,2,3,4$ in coherent systems, we note that

$$\mathbb{E}_1[X_{1:i}]=\int_0^\infty \bar G^i(t)\,dt=\frac 1 {i\alpha-1}$$

and

$$\mathbb{E}_1[X^2_{1:i}]=\int_0^\infty 2t \bar G^i(t)\,dt=\frac 2 {(i\alpha-1)(i\alpha-2)}$$

for $i=1,2,\ldots$ and $\alpha \gt 2$. Hence, by taking into account (3.7), we look for the minimum of the function

$$\phi_\alpha(a_1,a_2,a_3,a_4)= \frac{\frac 2{(\alpha-1)(\alpha-2)}a_1+\frac 1{(2\alpha-1)(\alpha-1)}a_2+\frac 2{(3\alpha-1)(3\alpha-2)}a_3+\frac 1{(4\alpha-1)(2\alpha-1)}a_4}{\left(\frac 1{\alpha-1}a_1+\frac 1{2\alpha-1}a_2+\frac 1{3\alpha-1}a_3+\frac 1{4\alpha-1}a_4 \right)^2}$$

for a fixed $\alpha \gt 2$. The results for $\alpha =3,4,5$ can be seen in Table 2. Note that, for $\alpha =3$, the samples from $X_{1:2}$ (line $2$) are two times faster than the samples from the components $X_i$ (line $1$). Moreover, in lifetime tests, the data from $X_{1:2}$ are available earlier in time. The samples from $X_{2:2}$ (line $3$) are also faster than the samples from the components $X_i$ (line $1$). However, for $h\leq 3$, the best samples are those from $X_{2:3}$ (line $6$) and, for $h\leq 4$, the ones from $X_{3:4}$ (line $23$). The best systems for $\alpha =4,5$ can be seen in Table 2, where they are highlighted in boldface. The only change is that, for $h=2$, the best samples are those from $X_{2:2}$ (line $3$). In all the cases, we can see that the worst samples are those from the components (line $1$).
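Using the closed-form moments above, $\phi_\alpha$ is straightforward to evaluate; the following sketch (not part of the paper) checks, for $\alpha=3$, that the rate $\psi$ for $X_{1:2}$ is twice that of the components and that $X_{2:3}$ (minimal signature $(0,3,-2)$) is the best structure with at most three components.

```python
def phi_pareto(a, alpha):
    """phi_alpha(a) in (4.2) for the Pareto type II baseline, using
    E_1[X_{1:i}] = 1/(i*alpha - 1) and
    E_1[X_{1:i}^2] = 2/((i*alpha - 1)*(i*alpha - 2))."""
    m1 = sum(ai / (i * alpha - 1) for i, ai in enumerate(a, start=1))
    m2 = sum(2 * ai / ((i * alpha - 1) * (i * alpha - 2))
             for i, ai in enumerate(a, start=1))
    return m2 / m1 ** 2

def psi(phi):
    return 1 / (phi - 1)

alpha = 3
rate_components = psi(phi_pareto((1,), alpha))     # phi = 4, psi = 1/3
rate_series12 = psi(phi_pareto((0, 1), alpha))     # phi = 2.5, psi = 2/3
print(rate_series12 / rate_components)             # ratio of rates, equal to 2

# X_{2:3} beats every other structure with h <= 3:
others = [(1,), (0, 1), (2, -1), (0, 0, 1), (3, -3, 1)]
print(all(phi_pareto((0, 3, -2), alpha) < phi_pareto(a, alpha) for a in others))
```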

Table 2. Functions $\phi _\alpha (\underline {a})$ and $\psi _\alpha (\underline {a})$ for the Pareto distribution in Example 4.2 with $\alpha =3,4,5$ and for all the coherent systems with $1$–$4$ i.i.d. components given in Table 1.

Now, we present some examples with exchangeable (EXC) dependent components by assuming that the survival copula is completely known. In practice, the copula (dependence structure) should be checked with the data and, if it contains a dependence parameter, this parameter should be estimated as well (e.g., by using Kendall's tau coefficient; see [Reference Nelsen15]). We start with a weak dependence case (Example 4.3) and later present a case with a strong positive dependence (Example 4.4).

Example 4.3. Let us consider the inference problem described in Section 3.1, with $G(t)=1-\exp (-t)$ for $t\geq 0$. Moreover, we assume that the component lifetimes have the following FGM survival copula

$$\widehat C(u_1,u_2,u_3,u_4)=u_1u_2u_3u_4+\alpha u_1u_2u_3u_4(1-u_1)(1-u_2)(1-u_3)(1-u_4),$$

where $\alpha \in [-1,1]$ is a dependence parameter (see, e.g., [Reference Nelsen15, p. 77]). Here, we use the symbol $\phi _\alpha$ for the function $\phi$ in (4.2). Note that we recover the i.i.d. case (studied in Example 4.1) for $\alpha =0$.

A straightforward calculation shows that $\mathbb {E}_1[X_{1:i}]=1/i$ for $i=1,2,3$ and

$$\mathbb{E}_1[X_{1:4}]=\frac 1 4 +\alpha \left(\frac 1 4 -\frac 4 5 + 1-\frac 47+\frac 1 8\right)=0.25+0.003571429\alpha .$$

Analogously, we get $\mathbb {E}_1[X^2_{1:i}]=1/i^2$ for $i=1,2,3$ and

$$\mathbb{E}_1[X^2_{1:4}]=0.125 +0.006318027 \alpha .$$

These moments are used as in the preceding examples to compute $\phi _\alpha (\underline {a})$. The values obtained for $\alpha =-1,-0.5,0,0.5,1$ are given in Table 3. The fastest estimators for $h=2,3,4$ are highlighted in boldface. Note that the changes due to the dependence parameter $\alpha$ are small. Moreover, for some systems (those with $a_4=0$, lines 1–8), $\phi _\alpha (\underline {a})$ does not depend on $\alpha$. In particular, we obtain exponential distributions for $X_{1:i}$, $i=1,2,3$. However, now $X_{1:4}$ (line $9$) does not have an exponential distribution, and $\phi _\alpha (\underline {a})$ changes with $\alpha$. For this system, the best results (i.e., the fastest estimators) are obtained with $\alpha =-1$ (negative correlation), but this is not the case for other systems. In all the cases, the best samples are those obtained from parallel systems (as in the i.i.d. case). The best result is obtained for $X_{4:4}$ and $\alpha =-1$ (line $28$, column $2$), with $\phi _{-1}(\underline {a})= 1.324909$ and $\psi _{-1}(\underline {a})= 3.077783$.
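The linear-in-$\alpha$ expressions above can be verified by numerically integrating the survival function of the minimum, $P(X_{1:4}>t)=\bar G^4(t)+\alpha \bar G^4(t)(1-\bar G(t))^4$; the following sketch assumes scipy is available.

```python
import math
from scipy.integrate import quad

def min4_moments(alpha):
    """First two moments of X_{1:4} under theta = 1 for the FGM survival
    copula with parameter alpha and standard exponential margins."""
    surv = lambda t: math.exp(-4 * t) * (1 + alpha * (1 - math.exp(-t)) ** 4)
    m1, _ = quad(surv, 0, math.inf)
    m2, _ = quad(lambda t: 2 * t * surv(t), 0, math.inf)
    return m1, m2

# m1 = 0.25 + 0.003571429*alpha and m2 = 0.125 + 0.006318027*alpha,
# matching the closed forms reported in the example.
for alpha in (-1.0, 0.0, 1.0):
    print(alpha, min4_moments(alpha))
```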

Table 3. Function $\phi _\alpha (\underline {a})$ for all the coherent systems with $1$–$4$ i.d. components given in Table 1, with a baseline exponential distribution function and the FGM survival copula in Example 4.3 with $\alpha =-1,-0.5,0,0.5,1$.

Now, we present an example with a strong positive dependence and, in particular, we study how this dependence affects the rate of convergence of the method-of-moments estimator.

Example 4.4. Let us consider two components with lifetimes $(X_1,X_2)$ having a common exponential distribution with scale parameter $\theta$ and the following Clayton survival copula (see, e.g., [Reference Nelsen15, p. 116])

$$\widehat C(u,v)=(u^{-\alpha}+v^{-\alpha}-1)^{{-}1/\alpha}$$

for $u,v\in [0,1]$ and $\alpha \gt 0$ (positive dependence). The case of independent components is obtained when $\alpha \to 0$ and the case of comonotonic components (maximum positive dependence) when $\alpha \to \infty$. There are just two coherent systems with two components: the series system $X_{1:2}$ and the parallel system $X_{2:2}$. Let us compare the performance of the method-of-moments estimators from samples of these systems with that of the method-of-moments estimator obtained from the exponential components, assuming that $\alpha$ is known (in practice, we would need a training sample from $(X_1,X_2)$ to estimate $\alpha$ and confirm the copula). To get the method-of-moments estimators from (3.2), we need to compute $\mathbb {E}_1[X_{1:i}]$ for $i=1,2$. The first expectation is immediate since $\mathbb {E}_1[X_{1:1}]= \mathbb {E}_1[X_{1}]=1$. To get the second, we note that the reliability function of $X_{1:2}$ under $\theta =1$ is

$$\bar F_{1:2}(t)=( 2(\bar G(t))^{-\alpha}-1)^{{-}1/\alpha}=(2e^{\alpha t}-1)^{{-}1/\alpha},$$

where $\bar G(t)=\exp (-t)$ for $t\geq 0$. Hence,

$$\mathbb{E}_1[X_{1:2}]= \int_0^\infty (2e^{\alpha t}-1 )^{{-}1/\alpha}\,dt=\frac 1 \alpha \int_1^\infty \frac {u^{{-}1/\alpha}} {1+u}\,du.$$

The values for $\alpha = 0,0.25,0.5,0.75,1,2,3,4,5,10, 50$ can be seen in Table 4. Clearly, $\mathbb {E}_1[X_{1:2}]$ goes to $1$ when $\alpha \to \infty$ (comonotonic case). Analogously, as the minimal signature of the second system is $(2,-1)$, its mean can be obtained as

$$\mathbb{E}_1[X_{2:2}]=2\mathbb{E}_1[X_{1:1}]-\mathbb{E}_1[X_{1:2}]=2-\mathbb{E}_1[X_{1:2}].$$

Some values can be seen in Table 4. Here, we also have $\mathbb {E}_1[X_{2:2}]\to 1$ when $\alpha \to \infty$. So the three estimators coincide in the comonotonic case (as expected).

Table 4. Expectations and function $\phi _\alpha (\underline {a})$ for the systems in Example 4.4.

To determine the faster estimator, from (4.2), we look for the minimum of the function

$$\phi_\alpha(a_1,a_2)= \frac{a_1\mathbb{E}_1[X_{1:1}^2]+a_2\mathbb{E}_1[X_{1:2}^2]}{(a_1\mathbb{E}_1[X_{1:1}] +a_2\mathbb{E}_1[X_{1:2}])^2},$$

where $\mathbb {E}_1[X_{1:1}]=1$, $\mathbb {E}_1[X^2_{1:1}]=2$ and

$$\mathbb{E}_1[X^2_{1:2}]=\int_0^\infty 2t \bar F_{1:2}(t)\,dt=\int_0^\infty 2t (2e^{\alpha t}-1)^{{-}1/\alpha}\,dt .$$

The values of $\phi _\alpha$ for the three systems can be seen in Table 4. Note that their respective minimal signatures are $(1,0)$ ($X_1$), $(0,1)$ ($X_{1:2}$), and $(2,-1)$ ($X_{2:2}$). From the table values, we observe that $\phi _\alpha (0,1)$ is decreasing for $\alpha \gt 1$, that $\phi _\alpha (2,-1)$ is increasing for $\alpha \gt 0$, and that both go to $2$ when $\alpha \to \infty$. The best estimator for the values in the table is the one from the parallel system with $\alpha \to 0$ (i.i.d. case), with $\phi _0(2,-1)=1.555556$ (obtained also in Table 1, line $3$). However, the worst case for the series system is that with $\alpha =1$ since $\phi _\alpha (0,1)$ is increasing in $\alpha$ on the interval $(0,1)$, with $\phi _\alpha (0,1)\to 2$ as $\alpha \to 0$. In the independent case, $X_{1:2}$ has an exponential distribution, and so the estimator is equivalent to the one obtained from the components (which also have exponential distributions).

In general, it is not easy to compare the method-of-moments estimator with other estimators (in terms of rates of convergence). However, this can easily be done in the next example, Example 4.5, where we discuss the comparison with the MLE.

Example 4.5. Let us consider the inference problem described in Section 3.1, with a series system and i.i.d. random variables $X_1,\ldots,X_h$ such that, for some $\alpha \gt 0$, $G(t)=1-\exp (-t^\alpha )$ for $t\geq 0$. So $X_1,\ldots,X_h$ are Weibull $W(\alpha,\theta )$ distributed. Moreover, since we consider a series system, we have

$$\bar{F}_{T;\theta}(t)=P_\theta(T \gt t)=(1-G(t/\theta))^h=\exp({-}h(t/\theta)^\alpha)=\exp(-(t/(\theta/h^{1/\alpha}))^\alpha)\quad \text{for all}\ t \gt 0$$

and therefore, the random variable $T$ is Weibull $W(\alpha,\theta /h^{1/\alpha })$ distributed. It is easy to check with some standard computations (we omit the details) that the MLE for $\theta$ is

$$\widehat{\Theta}_n^{({\rm MLE})}:=\left(\frac{h}{n}\sum_{i=1}^nT_i^\alpha\right)^{1/\alpha};$$

then we have to compare it with the method-of-moments estimator in Eq. (3.2)

$$\widehat{\Theta}_n=\frac{h^{1/\alpha}}{\Gamma(1+1/\alpha)}\frac{T_1+\cdots+T_n}{n}$$

(here, we take into account some well-known formulas for the Weibull distribution). We remark that $\widehat {\Theta }_n$ is unbiased, while $\widehat {\Theta }_n^{({\rm MLE})}$ is unbiased only if $\alpha =1$. Moreover, if $\alpha =1$, the estimators $\widehat {\Theta }_n$ and $\widehat {\Theta }_n^{({\rm MLE})}$ coincide. We know that $\mathrm {Var}_\theta [\hat {\Theta }_n]=({\theta ^2}/{n})\sigma _\bullet ^2(\underline {s},G)$ (see Eq. (3.4)), but we cannot compute $\mathrm {Var}_\theta [\hat {\Theta }_n^{({\rm MLE})}]$ explicitly. However, one can easily check that $hT^\alpha$ is exponentially distributed with mean $\theta ^\alpha$ and, by some standard arguments in large deviations, one can say that $\{\widehat {\Theta }_n^{({\rm MLE})}:n\geq 1\}$ satisfies the large deviation principle with good rate function $I_{\hat {\Theta }^{({\rm MLE})},\theta }$ defined by

$$I_{\hat{\Theta}^{({\rm MLE})},\theta}(\widehat{\theta}):= \frac{\widehat{\theta}^\alpha}{\theta^\alpha}-1-\log\frac{\widehat{\theta}^\alpha}{\theta^\alpha}\quad (\text{for}\ \widehat{\theta} \gt 0)$$

(note that this rate function uniquely vanishes at $\widehat {\theta }=\theta$, because the estimator $\widehat {\Theta }_n^{({\rm MLE})}$ is consistent). So, if we consider some arguments in Section 3.2 and in particular the equality in Eq. (3.5), we can check that

$$(I_{\hat{\Theta}^{({\rm MLE})},\theta}^{\prime\prime}(\widehat{\theta})|_{\widehat{\theta}=\theta})^{{-}1}=\frac{\theta^2}{\alpha^2}.$$

In conclusion, we have to compare the asymptotic variance of the MLE ${1}/{\alpha ^2}$ (that is the coefficient of $\theta ^2$ in the last equality), and the asymptotic variance of the method-of-moments estimator

$$\sigma_\bullet^2(\underline{s},G)=\frac{\Gamma(1+2/\alpha)}{\Gamma^2(1+1/\alpha)}-1$$

(here, again, we take into account some well-known formulas for the Weibull distribution). Note that the two variances coincide for $\alpha =1$ (as expected) and, otherwise (for $\alpha \neq 1$), we have

$$\frac{1}{\alpha^2} \lt \frac{\Gamma(1+2/\alpha)}{\Gamma^2(1+1/\alpha)}-1.$$

So the MLE is better than the method-of-moments estimator. The maximum of the difference

$$D(\alpha)=\frac{\Gamma(1+2/\alpha)}{\Gamma^2(1+1/\alpha)}-1-\frac{1}{\alpha^2}$$

for $\alpha \gt 1$ is attained at $\alpha \simeq 2.2059$, and the difference goes to zero as $\alpha \to \infty$; on the contrary, for $\alpha \in (0,1)$, the difference increases as $\alpha$ decreases to zero (see Figure 2). Note that the two variances are very similar for $\alpha \gt 1$. Recall also that the method-of-moments estimator is unbiased, whereas the MLE is biased for $\theta$ when $\alpha \neq 1$. Moreover, note that, for other (non-series) systems, it is not so easy to get an explicit expression for the MLE, and we have to use numerical procedures to compute it (see, e.g., [Reference Ng, Navarro and Balakrishnan16,Reference Yang, Ng and Balakrishnan20]).
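The stated behavior of $D(\alpha)$ can be confirmed numerically with a short sketch using only the standard library; the crude grid search below is an assumption of this illustration, not part of the paper.

```python
import math

def D(alpha):
    """Difference between the asymptotic variances of the method-of-moments
    estimator and the MLE in Example 4.5."""
    return (math.gamma(1 + 2 / alpha) / math.gamma(1 + 1 / alpha) ** 2
            - 1 - 1 / alpha ** 2)

print(D(1.0))                # 0: the two estimators coincide at alpha = 1
# Grid search for the interior maximum on alpha > 1:
best = max((1 + k / 10_000 for k in range(1, 90_000)), key=D)
print(best, D(best))         # maximizer near the value 2.2059 reported above
print(D(0.5), D(0.2))        # the difference grows as alpha decreases to zero
```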

Figure 2. Difference $D(\alpha )$ between the asymptotic variances of the method-of-moments estimator and the MLE in Example 4.5.

5. Conclusions

We have provided a method-of-moments estimator of the scale parameter in the common distribution of the component lifetimes of a coherent system. We show that the performance of this estimator depends on the baseline distribution $G$ of the scale model, the structure of the system (its signature vectors), and the dependence structure (the copula $C$). We compare this performance with that of the method-of-moments estimator based on samples from the components. The main advantage of the method-of-moments estimator with respect to other estimators in the literature (MLE, BLUE, etc.) is that we have the explicit expression in Eq. (3.2); moreover, this explicit expression only depends on the signature vector $\underline {s}$ (or the minimal signature vector $\underline {a}$) and the means of the order statistics (series systems) from $G$ and $C$.

There are several tasks for future work. Many of them can be obtained by changing or relaxing some of the assumptions made in the paper. For example, we could consider parametric models different from the scale parameter model considered here, such as the proportional hazard rate or proportional reversed hazard rate models considered in other papers. We might also consider components with different distributions that share a common scale parameter. If we are not able to fix a parametric model for the component distribution, we might try to develop semiparametric or nonparametric procedures different from those considered in [Reference Balakrishnan, Ng and Navarro3]. The estimation of the dependence parameters included in the copula $C$ is also a problem of interest in practice.

Acknowledgments

We would like to thank the anonymous reviewers for several helpful suggestions that have served to improve the earlier version of this paper.

Funding statement

C.M. thanks the partial support of MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006), by the University of Rome Tor Vergata (project “Asymptotic Methods in Probability” (CUP E89C20000680005) and project “Asymptotic Properties in Probability” (CUP E83C22001780005)) and by Indam-GNAMPA. J.N. thanks the partial support of Ministerio de Ciencia e Innovación of Spain under grant PID2019-103971GB-I00/AEI/10.13039/501100011033.

Competing interests

The authors declare no conflict of interest.

References

Arnold, B.C. & Sarabia, J.M. (2018). Majorization and the Lorenz order with applications in applied mathematics and economics. Cham, Switzerland: Springer.
Arnold, B.C. & Villaseñor, J.A. (1991). Lorenz ordering of order statistics. In Stochastic orders and decision under risk. Lecture Notes–Monograph Series Vol. 19. Institute of Mathematical Statistics, pp. 38–47.
Balakrishnan, N., Ng, H.K.T., & Navarro, J. (2011). Exact nonparametric inference for component lifetime distribution based on lifetime data from systems with known signatures. Journal of Nonparametric Statistics 23: 741–752.
Balakrishnan, N., Ng, H.K.T., & Navarro, J. (2011). Linear inference for type-II censored lifetime data of reliability systems with known signatures. IEEE Transactions on Reliability 60: 426–440.
Belzunce, F., Martínez-Riquelme, C., & Mulero, J. (2016). An introduction to stochastic orders. London: Elsevier.
Da, G., Xu, M., & Balakrishnan, N. (2014). On the Lorenz ordering of order statistics from exponential populations and some applications. Journal of Multivariate Analysis 127: 88–97.
Dembo, A. & Zeitouni, O. (1998). Large deviations techniques and applications, 2nd ed. New York: Springer.
Fallah, A., Asgharzadeh, A., & Ng, H.K.T. (2021). Statistical inference for component lifetime distribution from coherent system lifetimes under a proportional reversed hazard model. Communications in Statistics – Theory and Methods 50(16): 3809–3833.
Izadkhah, S., Amini-Seresht, E., & Balakrishnan, N. (2022). Preservation properties of some reliability classes by lifetimes of coherent, mixed systems and their signatures. Probability in the Engineering and Informational Sciences, to appear. Published online first 23 September 2022. doi:10.1017/S0269964822000316
Ling, M.H., Ng, H.K.T., Chan, P.S., & Balakrishnan, N. (2016). Autopsy data analysis for a series system with active redundancy under a load-sharing model. IEEE Transactions on Reliability 65(2): 957–968.
Marshall, A.W. & Olkin, I. (2007). Life distributions. New York: Springer.
Navarro, J. (2022). Introduction to system reliability theory. Cham, Switzerland: Springer.
Navarro, J. & Rubio, R. (2010). Computations of signatures of coherent systems with five components. Communications in Statistics – Simulation and Computation 39: 68–84.
Navarro, J., Ruiz, J.M., & Sandoval, C.J. (2007). Properties of coherent systems with dependent components. Communications in Statistics – Theory and Methods 36: 175–191.
Nelsen, R.B. (2006). An introduction to copulas. New York: Springer.
Ng, H.K.T., Navarro, J., & Balakrishnan, N. (2012). Parametric inference from system lifetime data under a proportional hazard rate model. Metrika 75: 367–388.
Ross, S.M., Shahshahani, M., & Weiss, G. (1980). On the number of component failures in systems whose component lives are exchangeable. Mathematics of Operations Research 5: 358–365.
Samaniego, F.J. (1985). On closure of the IFR class under formation of coherent systems. IEEE Transactions on Reliability R-34: 69–72.
Shaked, M. & Suárez-Llorens, A. (2003). On the comparison of reliability experiments based on the convolution order. Journal of the American Statistical Association 98: 693–702.
Yang, Y., Ng, H.K.T., & Balakrishnan, N. (2016). A stochastic expectation-maximization algorithm for the analysis of system lifetime data with known signature. Computational Statistics 31: 609–641.
Yi, H., Balakrishnan, N., & Li, X. (2022). Ordered multi-state system signature, its dynamic version in evaluating used multi-state systems. Probability in the Engineering and Informational Sciences, to appear. Published online first July 2022. doi:10.1017/S0269964822000237