
Quantile-based information generating functions and their properties and uses

Published online by Cambridge University Press:  22 May 2024

Suchandan Kayal
Affiliation:
Department of Mathematics, National Institute of Technology Rourkela, Odisha, India
N. Balakrishnan*
Affiliation:
Department of Mathematics and Statistics, McMaster University, Hamilton, Canada
*
Corresponding author: N. Balakrishnan; Email: [email protected]

Abstract

Information generating functions (IGFs) have been of great interest to researchers due to their ability to generate various information measures. The IGF of an absolutely continuous random variable (see Golomb, S. (1966). The information generating function of a probability distribution. IEEE Transactions on Information Theory, 12(1), 75–77) depends on its density function. However, there are several models with intractable cumulative distribution functions that do have explicit quantile functions. For this reason, in this work, we propose a quantile version of the IGF and then explore some of its properties. The effect of increasing transformations on it is then studied, and bounds are also obtained. The proposed generating function is studied especially for escort and generalized escort distributions. Some connections between the quantile-based IGF (Q-IGF) order and well-known stochastic orders are established. Finally, the proposed Q-IGF is extended to residual and past lifetimes as well. Several examples are presented throughout to illustrate the theoretical results established here. An inferential application of the proposed methodology is also discussed.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

1. Introduction

The notion of entropy, especially the Shannon entropy due to [Reference Elwood Shannon24], has been of great importance in many fields. Shannon entropy quantifies the amount of information needed to accurately send and receive messages over a communication channel. We refer to [Reference Elwood Shannon24] and [Reference Nanda and Shovan21] for several important properties and applications of Shannon entropy and its generalizations. For a random variable (RV) X with mass function $\{p_i\ge0,~i=1,\ldots,n\}$ such that $\sum_{i=1}^{n}p_{i}=1$, the Shannon entropy is given by $ H(X)=-\sum_{i=1}^{n}p_i\log p_i. $ For a nonnegative absolutely continuous RV X with probability density function (PDF) fX, the Shannon entropy (or differential entropy) is analogously given by

(1.1)\begin{eqnarray} S(X)=-\int_{0}^{\infty}f_{X}(x)\log f_{X}(x)dx, \end{eqnarray}

which may take values in $(-\infty,\infty)$. For a uniform RV on $(0,\theta)$, we have $S(X)=\log\theta$, which is negative when θ < 1. In (1.1), we have considered nonnegative RVs, but for an RV with support $(-\infty, \infty)$, the Shannon entropy can be defined analogously by changing the limits of integration.

Motivated by the concepts of moment and probability generating functions, Golomb [Reference Golomb4] proposed the information generating function (IGF) of an RV X. For a nonnegative continuous RV X, the IGF is

(1.2)\begin{eqnarray} I_{\beta}(X)=\int_{0}^{\infty}f_{X}^{\beta}(x)dx,~~\beta\ge 1. \end{eqnarray}

The derivative of the IGF in (1.2), evaluated at $\beta=1,$ yields the negative Shannon entropy, or negentropy. Further, $I_{\beta=1}(X)=1,$ and when $\beta=2,$ we get the informational energy, which has been widely used in physics. Golomb [Reference Golomb4] explained the reason for considering $\beta\ge 1$ in (1.2) instead of $\beta\ge0$; along the same lines, we also consider $\beta\ge1$ throughout this work. Recently, many authors have studied IGFs due to their importance in information theory; one may refer to [Reference Kharazmi and Balakrishnan10], [Reference Kharazmi and Balakrishnan12], [Reference Kharazmi and Balakrishnan11], [Reference Kharazmi and Balakrishnan13], [Reference Zamani, Kharazmi and Balakrishnan29], [Reference Kharazmi, Balakrishnan and Ozonur14], and [Reference Kharazmi, Contreras-Reyes and Balakrishnan15] for some IGFs and their diverse properties and applications.
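These facts are easy to check numerically. The following sketch (in Python, with an arbitrary choice of an Exp(θ) density, θ = 2) recovers $I_{\beta=1}(X)=1$, the informational energy at $\beta=2$, and the negentropy from the derivative at $\beta=1$:

```python
# Numerical check of the IGF facts above for an Exp(theta) density;
# theta = 2 is an arbitrary illustrative choice.
import math
from scipy.integrate import quad

theta = 2.0
f = lambda x: theta * math.exp(-theta * x)  # PDF of Exp(theta)

def igf(beta):
    # I_beta(X) = int_0^inf f(x)^beta dx; closed form is theta^(beta-1)/beta
    return quad(lambda x: f(x) ** beta, 0, math.inf)[0]

i1 = igf(1.0)                  # equals 1 for any density
i2 = igf(2.0)                  # informational energy, here theta/2
h = 1e-4                       # central difference for d/d(beta) at beta = 1
negentropy = (igf(1.0 + h) - igf(1.0 - h)) / (2 * h)
shannon = 1 - math.log(theta)  # differential entropy of Exp(theta)
```

Here the derivative estimate agrees with $-S(X)=\log\theta-1$, as stated above.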

We observe that the IGF in (1.2) depends on the PDF of the distribution. However, there are cases in which the quantile function (QF) is available in an explicit form, but the density function is not. For example, lambda distributions [Reference Ramberg and Schmeiser23], power-Pareto distributions [Reference Hankin and Lee5], and the Govindarajulu distribution [Reference Unnikrishnan Nair, Sankaran and Vinesh Kumar19] do not have closed-form distribution functions. In such cases, it is not possible to generate information measures using (1.2). To overcome this difficulty, the IGF in (1.2) needs to be re-defined. In this regard, we consider here an IGF based on the QF and then explore its properties. We also study the proposed quantile-based IGF (Q-IGF) for residual and past lifetimes, which are important characteristics in reliability theory. For some early developments regarding quantile-based information measures, one may refer to [Reference Sunoj and Sankaran25], [Reference Sunoj, Sankaran and Nanda26], [Reference Baratpour and Khammar1], [Reference Kayal6], [Reference Kayal7], [Reference Kayal and Tripathy8], [Reference Kumar, Taneja and Chhoker17], [Reference Kayal, Moharana and Sunoj9], [Reference Zamania and Madadia30], and the references therein. The QF of an RV $X,$ with cumulative distribution function (CDF) $F_{X},$ is given by

(1.3)\begin{eqnarray} Q_{X}(p)=F^{-1}_{X}(p)=\inf\{x|F_{X}(x)\ge p\},~~0\le p\le 1. \end{eqnarray}

Note that the QF enjoys some properties that the distribution function does not. For example, the sum of two QFs is again a QF, and the product of two positive QFs is also a QF. Interested readers may refer to [Reference Unnikrishnan Nair, Sankaran and Balakrishnan20] for some more features and properties of QFs.

The key contributions of this work are as follows:

  1. (a) There are distributions for which the QF is available in an explicit form, but not the distribution or density function. In order to study the IGF for such distributions, in this work, we propose Q-IGF of a nonnegative absolutely continuous RV. We then show that the Q-IGF can be represented in terms of hazard and reversed hazard QFs;

  2. (b) The effect of monotone increasing transformations on Q-IGF is examined. An order based on Q-IGF is then introduced, and its connection to some existing stochastic orders is established. We have also studied the proposed Q-IGF for quantile-based escort and generalized escort distributions;

  3. (c) Residual lifetime is an important mathematical concept, usually applied in the study of predicting future performance of a working system. The past lifetime is also a useful concept while dealing with the past performance of a failed system. In this work, we finally study Q-IGF for both residual and past lifetimes, and then establish various properties of them. Several examples are presented throughout this work for illustrating the theoretical results established here.

The rest of this paper is as follows. In Section 2, we introduce the IGF based on QF. We show that the Q-IGF can be represented in terms of quantile-based fractional Shannon entropy. Further, the effect of monotone increasing transformations on Q-IGF is studied. Quantile-based escort and generalized escort distributions are also considered. Based on the newly proposed IGF, an ordering is introduced and its connection to hazard and reversed hazard quantile orders is shown. The Q-IGF for residual lifetime is discussed in Section 3, while Section 4 deals with past lifetime. Some of their properties are also established in these sections. Further, an inferential application of the proposed methodology has been illustrated in Section 5. Finally, Section 6 presents some concluding remarks.

We assume throughout the paper that the nonnegative RVs are absolutely continuous, and that all the integrals and derivatives involved exist. Moreover, the terms increasing and decreasing are used in the non-strict sense.

2. Quantile-based IGF

This section discusses briefly the IGF due to [Reference Golomb4] based on the QF. Denote the PDF of X by fX and the CDF by $F_{X}.$ We recall that the quantile density function (QDF) of X, denoted by $q_{X},$ is obtained from the QF QX as $q_{X}(p)=\frac{d}{dp}Q_{X}(p)$. Further, differentiating the identity $F_{X}(Q_{X}(p))=p$ yields $f_{X}(Q_{X}(p))q_{X}(p)=1.$ We then present the following definition.
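The identity $f_{X}(Q_{X}(p))q_{X}(p)=1$ can be verified directly; a small numerical sketch for the Pareto-I$(a,b)$ distribution with $F_{X}(x)=1-(b/x)^{a}$ (the values $a=3$, $b=2$ are arbitrary):

```python
# Check f_X(Q_X(p)) * q_X(p) = 1 on a grid of p values for Pareto-I(a, b),
# where F_X(x) = 1 - (b/x)^a; a = 3, b = 2 are arbitrary choices.
a, b = 3.0, 2.0
f = lambda x: a * b**a / x**(a + 1)            # PDF
Q = lambda p: b * (1 - p)**(-1 / a)            # quantile function
q = lambda p: (b / a) * (1 - p)**(-1 / a - 1)  # quantile density function

products = [f(Q(p)) * q(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
```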

Definition 2.1. Suppose X is an RV with QF QX and QDF $q_{X}.$ Then, for $\beta\ge1$, the Q-IGF of X is given by

(2.1)\begin{eqnarray} I^{Q}_{\beta}(X)=\int_{0}^{1} f_{X}^{\beta}(Q_{X}(p))dQ_{X}(p)=\int_{0}^{1}f_{X}^{\beta-1}(Q_{X}(p))dp=\int_{0}^{1}q_{X}^{1-\beta}(p)dp. \end{eqnarray}

Note that $I^{Q}_{\beta}(X)$ in (2.1) provides a quantile version of the IGF, which measures information contained in a distribution through QDF. From (2.1), the following facts are evident:

  1. (i) $I^{Q}_{\beta}(X)|_{\beta=1}=1;~~I^{Q}_{\beta}(X)|_{\beta=2}=\int_{0}^{1}(q_{X}(p))^{-1}dp=-2J^{Q}(X)$,

  2. (ii) $\frac{\partial}{\partial \beta}I^{Q}_{\beta}(X)|_{\beta=1}=-\int_{0}^{1} \log q_{X}(p)dp=-S^{Q}(X),$

where $J^{Q}(X)$ and $S^{Q}(X)$ are, respectively, the quantile-based extropy (see (6) of [Reference Krishnan, Sunoj and Unnikrishnan Nair16]) and quantile-based Shannon entropy (see (7) of [Reference Sunoj and Sankaran25]). Using the hazard QF given by $H_{X}(p)=((1-p)q_{X}(p))^{-1}$, and the reversed hazard QF given by $\tilde{H}_{X}(p)=(pq_{X}(p))^{-1},$ the Q-IGF in (2.1) can be rewritten as

(2.2)\begin{eqnarray} I^{Q}_{\beta}(X)=\int_{0}^{1}\big\{(1-p)H_{X}(p)\big\}^{\beta-1}dp~~\mbox{and}~~I^{Q}_{\beta}(X)=\int_{0}^{1}\big\{p\tilde{H}_{X}(p)\big\}^{\beta-1}dp,~~\beta\ge1. \end{eqnarray}

The following example gives closed-form expressions of the Q-IGF for several distributions; for the exponential case, plots of the Q-IGF are presented in Figure 1.

Example 2.2.

  • For an RV X following Exponential$(\theta)$ distribution, $Q_{X}(p)=-\frac{\log(1-p)}{\theta}$, and so $q_{X}(p)=\frac{1}{\theta(1-p)}$. Thus, from (2.1), we obtain $I_{\beta}^{Q}(X)=\frac{\theta^{\beta-1}}{\beta}$. The plots of the Q-IGF of exponential distribution are presented in Figure 1 for different choices of $\theta$.

  • For an RV X following Uniform(a, b) distribution, $Q_{X}(p)=a+(b-a)p$, and so $q_{X}(p)=b-a.$ From (2.1), we get $I_{\beta}^{Q}(X)=(b-a)^{1-\beta}$;

  • For an RV X having Pareto-I(a, b) distribution, $Q_{X}(p)=\frac{b}{(1-p)^{\frac{1}{a}}},$ and so $q_{X}(p)=\frac{b}{a(1-p)^{\frac{1}{a}+1}}.$ From (2.1), we obtain $I_{\beta}^{Q}(X)=\frac{(a/b)^{\beta-1}}{(\frac{1}{a}+1)(\beta-1)+1}$;

  • Let X follow the inverted exponential distribution with $Q_{X}(p)=-\frac{\lambda}{\log p},$ and so $q_{X}(p)=\frac{\lambda}{p(\log p)^{2}}$. From (2.1), we find the Q-IGF as $I_{\beta}^{Q}(X)=\frac{\Gamma(2\beta-1)}{\lambda^{\beta-1}\beta^{2\beta-1}}$.
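The closed forms above can be spot-checked by numerically integrating $q_{X}^{1-\beta}(p)$ over $(0,1)$, as in (2.1); a sketch with arbitrary parameter values:

```python
# Spot-check of the Q-IGFs in Example 2.2 by numerically integrating
# q_X(p)^(1 - beta) over (0, 1); parameter values are arbitrary.
import math
from scipy.integrate import quad

def qigf(q, beta):
    return quad(lambda p: q(p) ** (1 - beta), 0, 1)[0]

beta, theta, a, b, lam = 1.5, 2.0, 3.0, 2.0, 1.5

# Exponential(theta): closed form theta^(beta-1)/beta
exp_num = qigf(lambda p: 1 / (theta * (1 - p)), beta)
exp_cf = theta ** (beta - 1) / beta

# Pareto-I(a, b)
par_num = qigf(lambda p: (b / a) * (1 - p) ** (-1 / a - 1), beta)
par_cf = (a / b) ** (beta - 1) / ((1 / a + 1) * (beta - 1) + 1)

# Inverted exponential(lambda)
inv_num = qigf(lambda p: lam / (p * math.log(p) ** 2), beta)
inv_cf = math.gamma(2 * beta - 1) / (lam ** (beta - 1) * beta ** (2 * beta - 1))
```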

Figure 1. Plots of Q-IGF for the exponential distribution considered in Example 2.2, for (a) $\theta=0.1,0.6,0.8,1$ (presented from below) and (b) $\theta=2.7,3.5,4,4.5$ (presented from below). Along the x-axis, we have taken the values of β.

Next, we consider some distributions which do not have closed-form distribution functions, and then discuss their Q-IGFs.

Example 2.3.

  • Consider Davies distribution with $Q_{X}(p)=c p^{\lambda_{1}}(1-p)^{-\lambda_2},~c \gt 0,~\lambda_1,~\lambda_2 \gt 0.$ Here,

    (2.3)\begin{eqnarray} q_{X}(p)=c\lambda_1\frac{p^{\lambda_1-1}}{(1-p)^{\lambda_2}}+c\lambda_2\frac{p^{\lambda_1}}{(1-p)^{\lambda_2+1}}. \end{eqnarray}

    Thus, the Q-IGF is given by

    (2.4)\begin{eqnarray} I_{\beta}^{Q}(X)=c^{1-\beta}\int_{0}^{1}\left[\lambda_1\frac{p^{\lambda_1-1}}{(1-p)^{\lambda_2}}+\lambda_2\frac{p^{\lambda_1}}{(1-p)^{\lambda_2+1}}\right]^{1-\beta}dp, \end{eqnarray}

    which is difficult to obtain in closed-form. Suppose $\lambda_1=1$ and $\lambda_2=1$. Then, from (2.4), we obtain

    (2.5)\begin{eqnarray} I_{\beta}^{Q}(X)=c^{1-\beta}\int_{0}^{1}(1-p)^{2(\beta-1)}dp=\frac{c^{1-\beta}}{2\beta-1},~\beta \gt 1. \end{eqnarray}
  • Next, consider Govindarajulu’s distribution with QF $Q_{X}(p)=a\{(b+1)p^{b}-bp^{b+1}\},$ $0\le p\le 1,~a,~b \gt 0.$ The QDF is given by

    (2.6)\begin{eqnarray} q_{X}(p)=ab(b+1)(1-p)p^{b-1}. \end{eqnarray}

    Thus, the Q-IGF is obtained as

    (2.7)\begin{eqnarray} I_{\beta}^{Q}(X)&=&\int_{0}^{1}\left[ab(b+1)(1-p)p^{b-1}\right]^{1-\beta}dp\nonumber\\ &=&\{ab(b+1)\}^{1-\beta}\int_{0}^{1}(1-p)^{1-\beta}p^{(b-1)(1-\beta)}dp\nonumber\\ &=&\{ab(b+1)\}^{1-\beta} B(2-\beta,(b-1)(1-\beta)+1), \end{eqnarray}

    provided $2-\beta \gt 0$ and $(b-1)(1-\beta)+1 \gt 0,$ where $B(\cdot,\cdot)$ is the complete beta function.

  • Consider the QF (see [Reference Midhu, Sankaran and Unnikrishnan Nair18]) $Q_{X}(p)=-(c+\mu)\log(1-p)-2cp$, where µ > 0 and $-\mu\le c \lt \mu$, corresponding to the linear mean residual QF. In this case,

    (2.8)\begin{eqnarray} q_{X}(p)=\frac{c+\mu}{1-p}-2c. \end{eqnarray}

    Thus, the Q-IGF is obtained as

    (2.9)\begin{eqnarray} I_{\beta}^{Q}(X)=\int_{0}^{1}\left[\frac{c+\mu}{1-p}-2c\right]^{1-\beta}dp,~\beta \gt 1. \end{eqnarray}

    Note that it is difficult to evaluate the integral in (2.9) in an explicit form. Thus, to have a rough idea about the behavior of the Q-IGF in (2.9), we have plotted the function in Figure 2 with respect to β for c = 1 and µ = 1.5.
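Although (2.9) has no explicit form, it is straightforward to evaluate numerically. A sketch for $c=1$ and $\mu=1.5$ (the values used in Figure 2); at $\beta=2$ the integral can in fact be computed directly, giving $-\tfrac12+\tfrac58\log 5$, which serves as a check:

```python
# Numerical evaluation of the Q-IGF (2.9) for the linear mean residual QF
# family, with c = 1 and mu = 1.5 as in Figure 2.
import math
from scipy.integrate import quad

c, mu = 1.0, 1.5
q = lambda p: (c + mu) / (1 - p) - 2 * c   # QDF from (2.8)

def qigf(beta):
    return quad(lambda p: q(p) ** (1 - beta), 0, 1)[0]

i1 = qigf(1.0)   # any Q-IGF equals 1 at beta = 1
i2 = qigf(2.0)   # exact value here: -1/2 + (5/8) log 5
```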

Figure 2. Plot of Q-IGF for the QDF given by (2.8) considered in Example 2.3, for c = 1 and µ = 1.5. Along the x-axis, we have taken the values of $\beta.$

The fractional order Shannon entropy (FSE) was introduced by [Reference Ubriaco27] and subsequently extended by [Reference Xiong, Shang and Zhang28] and [Reference Di Crescenzo, Kayal and Meoli3] to residual and past lifetimes. We now provide a new representation for the Q-IGF.

Proposition 2.4. Suppose the QDF of X is denoted by $q_{X}.$ Then,

(2.10)\begin{eqnarray} I_{\beta}^{Q}(X)=\sum_{k=0}^{\infty}\frac{(1-\beta)^{k}}{k!}S^{Q}_{k}(X), \end{eqnarray}

where $S^{Q}_{k}(X)=\int_{0}^{1}\left\{\log q_{X}(p)\right\}^{k}dp$ is the quantile-based FSE of order k.

Proof. Using Maclaurin’s theorem, we obtain from (2.1) that

\begin{eqnarray*} I_{\beta}^{Q}(X)=\int_{0}^{1}e^{(1-\beta)\log q_{X}(p)}dp&=&\int_{0}^{1}\sum_{k=0}^{\infty}\frac{(1-\beta)^{k}}{k!}\{\log q_{X}(p)\}^{k}dp\\ &=&\sum_{k=0}^{\infty}\frac{(1-\beta)^{k}}{k!} \int_{0}^{1}\{\log q_{X}(p)\}^{k}dp=\sum_{k=0}^{\infty}\frac{(1-\beta)^{k}}{k!}S^{Q}_{k}(X), \end{eqnarray*}

as required.
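For Exp(1), the representation (2.10) becomes fully explicit: $q_{X}(p)=(1-p)^{-1}$ gives $S^{Q}_{k}(X)=\int_{0}^{1}\{-\log(1-p)\}^{k}dp=k!$, so the series sums to $\sum_{k}(1-\beta)^{k}=1/\beta$ whenever $|1-\beta|<1$, matching $I_{\beta}^{Q}(X)=\theta^{\beta-1}/\beta$ at $\theta=1$. A sketch:

```python
# Truncated series (2.10) for Exp(1): S_k = k!, so the k-th term is
# (1 - beta)^k * S_k / k! = (1 - beta)^k, and the sum tends to 1/beta.
import math

beta = 1.5
terms = [(1 - beta) ** k / math.factorial(k) * math.factorial(k)
         for k in range(60)]
partial = sum(terms)
exact = 1 / beta   # Q-IGF of Exp(1)
```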

In the following, we present lower and upper bounds for Q-IGF in terms of the quantile-based Shannon entropy and the hazard QF.

Proposition 2.5. For an RV X with QDF qX, we have

(2.11)\begin{eqnarray} L(\beta)\le I_{\beta}^{Q}(X)\le U(\beta), \end{eqnarray}

where $L(\beta)=\max\{0,(1-\beta)S^{Q}(X)\}$ and $U(\beta)=\int_{0}^{1}\{H_{X}(p)\}^{\beta-1}dp.$

Proof. Making use of the inequality $x^{1-\beta}\ge (1-\beta)\log x +1,$ the lower bound is easily obtained. The upper bound can be obtained from (2.2) by using the fact that $1-p\le 1.$
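A quick numerical illustration of these bounds, for a Uniform$(0,\tfrac12)$ RV (so that $q_{X}(p)=\tfrac12$) at $\beta=1.5$:

```python
# Check L(beta) <= I^Q_beta(X) <= U(beta) from Proposition 2.5 for
# Uniform(0, 1/2), whose QDF is constant: q_X(p) = 1/2; beta = 1.5.
import math
from scipy.integrate import quad

beta = 1.5
q = lambda p: 0.5
H = lambda p: 1 / ((1 - p) * q(p))                  # hazard quantile function

I = quad(lambda p: q(p) ** (1 - beta), 0, 1)[0]     # Q-IGF: (1/2)^(1-beta)
SQ = quad(lambda p: math.log(q(p)), 0, 1)[0]        # quantile Shannon entropy
L = max(0.0, (1 - beta) * SQ)                       # lower bound
U = quad(lambda p: H(p) ** (beta - 1), 0, 1)[0]     # upper bound
```

Here $L=\tfrac12\log 2$, $I=\sqrt2$, and $U=2\sqrt2$, so the inequalities hold strictly.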

Next, we consider monotone transformations of RVs to see their effect on the Q-IGF. Suppose ψ is an increasing function and $Y=\psi(X)$, where X is an RV with PDF fX and QF $Q_{X}.$ Then, it is known that the PDF of Y is $f_{Y}(y)=\frac{f_{X}(\psi^{-1}(y))}{\psi'(\psi^{-1}(y))}$. Moreover, $F_{Y}(y)=F_{X}(\psi^{-1}(y))$, so that $F_{Y}(Q_{Y}(p))=F_{X}(\psi^{-1}(Q_{Y}(p)))=p$, implying $\psi^{-1}(Q_{Y}(p))=F_{X}^{-1}(p)=Q_{X}(p).$ Upon using this, the PDF of Y can be expressed as

(2.12)\begin{eqnarray} f_{Y}(Q_{Y}(p))=\frac{f_{X}(\psi^{-1}(Q_{Y}(p)))}{\psi'(\psi^{-1}(Q_{Y}(p)))}=\frac{f_{X}(Q_{X}(p))}{\psi'(Q_{X}(p))}=\frac{1}{q_{X}(p)\psi'(Q_{X}(p))}=\frac{1}{q_{Y}(p)}. \end{eqnarray}

Theorem 2.6. Suppose X is an RV with QF QX and QDF qX. Further, suppose ψ is a positive-valued increasing function. Then,

(2.13)\begin{eqnarray} I_{\beta}^{Q}(\psi(X))=\int_{0}^{1}\frac{q_{X}^{1-\beta}(p)}{\{\psi'(Q_{X}(p))\}^{\beta-1}}dp. \end{eqnarray}

Proof. The proof follows readily from (2.1) upon using (2.12).

The following example provides an illustration for the result in Theorem 2.6.

Example 2.7. Consider exponentially distributed RV $X,$ as in Example 2.2. Further, consider an increasing transformation $\psi(X)=X^{\alpha}$, α > 0. Then, it is known that $Y=\psi(X)$ follows a Weibull distribution with QF $Q_{\psi(X)}(p)=\theta^{-\alpha}\{-\log(1-p)\}^{\alpha}.$ Now, by using (2.13), we obtain the Q-IGF for the Weibull distribution as

(2.14)\begin{equation} I_{\beta}^{Q}(\psi(X))=\int_{0}^{1}\frac{(\theta(1-p))^{\beta-1}}{\{\alpha(-\log(1-p)/\theta)^{\alpha-1}\}^{\beta-1}}dp=\frac{\theta^{\alpha(\beta-1)}}{\alpha^{\beta-1}}\int_{0}^{1}\frac{(1-p)^{\beta-1}}{(-\log(1-p))^{(\alpha-1)(\beta-1)}}dp=\frac{\theta^{\alpha(\beta-1)}\Gamma((1-\alpha)(\beta-1)+1)}{\alpha^{\beta-1}\beta^{(1-\alpha)(\beta-1)+1}}, \end{equation}

provided $(1-\alpha)(\beta-1)+1 \gt 0$; when $\alpha=2$, this condition reduces to $1\le\beta \lt 2.$
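The transformation formula (2.13) for this example can be checked numerically. In the sketch below, the comparison value is the gamma-integral evaluation $\theta^{\alpha(\beta-1)}\Gamma((1-\alpha)(\beta-1)+1)/\{\alpha^{\beta-1}\beta^{(1-\alpha)(\beta-1)+1}\}$, which is obtained by the substitution $u=-\log(1-p)$ and is valid while $(1-\alpha)(\beta-1)+1 \gt 0$; the parameter values $\theta=2$, $\alpha=2$, $\beta=1.5$ are arbitrary admissible choices:

```python
# Numerical check of (2.13) for psi(x) = x^alpha applied to Exp(theta);
# theta = 2, alpha = 2, beta = 1.5 are arbitrary admissible choices.
import math
from scipy.integrate import quad

theta, alpha, beta = 2.0, 2.0, 1.5
qX = lambda p: 1 / (theta * (1 - p))        # QDF of Exp(theta)
QX = lambda p: -math.log(1 - p) / theta     # QF of Exp(theta)
dpsi = lambda x: alpha * x ** (alpha - 1)   # psi'(x)

# Right-hand side of (2.13), integrated numerically:
num = quad(lambda p: qX(p) ** (1 - beta) / dpsi(QX(p)) ** (beta - 1), 0, 1)[0]

# Gamma-integral evaluation of the same quantity (requires s > 0):
s = (1 - alpha) * (beta - 1) + 1
cf = (theta ** (alpha * (beta - 1)) * math.gamma(s)
      / (alpha ** (beta - 1) * beta ** s))
```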

Weighted distributions are useful in many areas, such as renewal theory, reliability theory, and ecology. For an RV X with PDF fX, the PDF of the weighted RV Xω is $f_{\omega}(x)=\frac{\omega(x)f_{X}(x)}{E(\omega(X))},$ where $\omega(x)$ is a positive-valued weight function having finite expectation. We now consider a particular case of the weighted RV, known as an escort RV. Associated with X, the PDF of the escort distribution is given by

(2.15)\begin{eqnarray} f_{X_{e,c}}(x)=\frac{f_{X}^{c}(x)}{\int_{0}^{\infty}f_{X}^{c}(x)dx},~~c \gt 0, \end{eqnarray}

provided the involved integral exists. Observe that the escort distribution can be obtained as a weighted distribution with a suitable weight function. We now study the Q-IGF for the escort distribution in (2.15). Using the QF QX in (2.15), we obtain the density QF of $X_{e,c}$ as

(2.16)\begin{equation} f_{X_{e,c}}(Q_{X}(p))=\frac{f_{X}^{c}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{c}(Q_{X}(p))dQ_{X}(p)}=\frac{f_{X}^{c}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{c-1}(Q_{X}(p))dp} =\frac{1}{q^{c}_{X}(p)\int_{0}^{1}q_{X}^{1-c}(p)dp}=\frac{1}{q_{X_{e,c}}(p)}, \end{equation}

where $q_{X_{e,c}}(p)$ is the QDF of $X_{e,c}.$

Proposition 2.8. Suppose $X_{e,c}$ is an escort RV with QDF $q_{e,c}$ corresponding to an RV X with QDF qX and PDF $f_{X}.$ Then,

(2.17)\begin{eqnarray} I^{Q}_{c\beta}(X)=I_{\beta}^{Q}(X_{e,c})(I_{c}^{Q}(X))^{\beta}. \end{eqnarray}

Proof. From (2.1), the Q-IGF of $X_{e,c}$ is given by

(2.18)\begin{eqnarray} I^{Q}_{\beta}(X_{e,c})=\int_{0}^{1} f_{X_{e,c}}^{\beta}(Q_{X}(p))dQ_{X}(p). \end{eqnarray}

Now, using $f_{X_{e,c}}(Q_{X}(p))=\frac{f_{X}^{c}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{c}(Q_{X}(p))dQ_{X}(p)}$ from (2.16) into (2.18) we obtain

\begin{eqnarray*} I_{\beta}^{Q}(X_{e,c})=\frac{\int_{0}^{1}f_{X}^{c\beta}(Q_{X}(p))dQ_{X}(p)}{\{\int_{0}^{1}f_{X}^{c}(Q_{X}(p))dQ_{X}(p)\}^{\beta}}=\frac{I_{c\beta}^{Q}(X)}{\{I_{c}^{Q}(X)\}^{\beta}}, \end{eqnarray*}

as required.
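Proposition 2.8 can be illustrated numerically for the exponential distribution, with every quantity computed from the QDF alone (a sketch; $\theta=2$, $c=1.5$, $\beta=1.7$ are arbitrary choices):

```python
# Numerical check of Proposition 2.8 for Exp(theta): compare I^Q_{c beta}(X)
# with I^Q_beta(X_{e,c}) {I^Q_c(X)}^beta; theta = 2, c = 1.5, beta = 1.7.
from scipy.integrate import quad

theta, c, beta = 2.0, 1.5, 1.7
q = lambda p: 1 / (theta * (1 - p))   # QDF of Exp(theta)

def qigf(gamma):
    return quad(lambda p: q(p) ** (1 - gamma), 0, 1)[0]

Ic = qigf(c)
# Q-IGF of the escort X_{e,c}, with f_{X_{e,c}}(Q_X(p)) taken from (2.16):
escort = quad(lambda p: (q(p) ** (-c) / Ic) ** beta * q(p), 0, 1)[0]

lhs = qigf(c * beta)       # I^Q_{c beta}(X); closed form theta^(c beta - 1)/(c beta)
rhs = escort * Ic ** beta  # I^Q_beta(X_{e,c}) {I^Q_c(X)}^beta
```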

For two RVs X and Y, with respective PDFs fX and fY, the PDF of a generalized escort distribution is given by

(2.19)\begin{eqnarray} f_{Z_{ge,c}}(x)=\frac{f_{X}^{c}(x) f_{Y}^{1-c}(x)}{\int_{0}^{\infty}f_{X}^{c}(x) f_{Y}^{1-c}(x)dx},~~c \gt 0, \end{eqnarray}

provided the involved integral exists. Like escort distributions, generalized escort distributions can also be derived as weighted distributions with a suitable weight function. The quantile version of the generalized escort distribution is given by

(2.20)\begin{equation} f_{Z_{ge,c}}(Q_{X}(p))=\frac{f_{X}^{c}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{c}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))dQ_{X}(p)}=\frac{f_{X}^{c}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{c-1}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))dp}. \end{equation}

Theorem 2.9. Let $X_{e,\beta}$ and $Y_{e,\beta}$ be two escort RVs associated with X and Y, respectively. Further, let $Z_{ge,c}$ be the generalized escort RV. Then,

\begin{equation*}I_{\beta}^{Q}(Z_{ge,c})=\frac{(I_{\beta}^{Q}(X))^{c}(I_{\beta}^{Q}(Y))^{1-c}}{\{R_{c}^{Q}(X,Y)\}^{\beta}}R_{c}^{Q}(X_{e,\beta},Y_{e,\beta}).\end{equation*}

Proof. From (2.1), the Q-IGF of $Z_{ge,c}$ is given by

(2.21)\begin{eqnarray} I^{Q}_{\beta}(Z_{ge,c})=\int_{0}^{1} f_{Z_{ge,c}}^{\beta}(Q_{X}(p))dQ_{X}(p). \end{eqnarray}

Further, using (2.20) in (2.21), we obtain

\begin{eqnarray*} I_{\beta}^{Q}(Z_{ge,c})&=&\frac{\int_{0}^{1}\{f_{X}^{c}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))\}^{\beta}dQ_{X}(p)}{\{\int_{0}^{1}f_{X}^{c-1}(Q_{X}(p)) f_{Y}^{1-c}(Q_{X}(p))dp\}^{\beta}}\\ &=&\frac{1}{\{R_{c}^{Q}(X,Y)\}^{\beta}}\int_{0}^{1}\left(\frac{f_{X}^{\beta}(Q_{X}(p))}{\int_{0}^{1}f_{X}^{\beta}(Q_{X}(p))dQ_{X}(p)}\right)^{c}\left(\frac{f_{Y}^{\beta}(Q_{X}(p))}{\int_{0}^{1}f_{Y}^{\beta}(Q_{X}(p))dQ_{X}(p)}\right)^{1-c}dQ_{X}(p)\\ &~&\times \left(\int_{0}^{1}f_{X}^{\beta}(Q_{X}(p))dQ_{X}(p)\right)^{c}\left(\int_{0}^{1}f_{Y}^{\beta}(Q_{X}(p))dQ_{X}(p)\right)^{1-c}\\ &=&\frac{(I_{\beta}^{Q}(X))^{c}(I_{\beta}^{Q}(Y))^{1-c}}{\{R_{c}^{Q}(X,Y)\}^{\beta}}\int_{0}^{1}f_{X_{e,\beta}}^{c}(Q_{X}(p))f_{Y_{e,\beta}}^{1-c}(Q_{X}(p))dQ_{X}(p)\\ &=&\frac{(I_{\beta}^{Q}(X))^{c}(I_{\beta}^{Q}(Y))^{1-c}}{\{R_{c}^{Q}(X,Y)\}^{\beta}}R_{c}^{Q}(X_{e,\beta},Y_{e,\beta}), \end{eqnarray*}

as required.

We now introduce Q-IGF order between two RVs X and $Y.$

Definition 2.10. An RV X is said to be smaller than Y in the sense of the Q-IGF order, denoted by $X\le_{qgf}Y,$ if $I_{\beta}^{Q}(X)\le I_{\beta}^{Q}(Y),$ for all $\beta\ge1.$

Example 2.11. Let X and Y have QFs $Q_{X}(p)=-\frac{\log(1-p)}{\theta_1}$ and $Q_{Y}(p)=-\frac{\log(1-p)}{\theta_2}$, with $\theta_1\le \theta_2.$ Now, for $\beta\ge 1$, it can be easily shown that $I_{\beta}^{Q}(X)=\frac{\theta_1^{\beta-1}}{\beta}\le \frac{\theta_2^{\beta-1}}{\beta}=I_{\beta}^{Q}(Y).$ Hence, $X\le_{qgf}Y,$ thus providing an example for Q-IGF order.

We now establish a relation between the hazard QF order, denoted by $\le_{hq},$ and the Q-IGF order. For hazard QF order, one may refer to Definition $2.1(v)$ of [Reference Krishnan, Sunoj and Unnikrishnan Nair16]. The following example is presented to illustrate the hazard QF order.

Example 2.12. Consider two inverted exponential RVs X and Y with QDFs $q_{X}(p)=\frac{\lambda_1}{p(\log p)^{2}}$ and $q_{Y}(p)=\frac{\lambda_2}{p(\log p)^{2}},$ respectively, with $\lambda_1\le \lambda_2.$ Then,

(2.22)\begin{eqnarray} H_{X}(p)=\frac{1}{(1-p)q_{X}(p)}=\frac{p(\log p)^{2}}{\lambda_1(1-p)}\ge \frac{p(\log p)^{2}}{\lambda_2(1-p)}=\frac{1}{(1-p)q_{Y}(p)}=H_{Y}(p), \end{eqnarray}

implying that $X\le_{hq}Y.$

Theorem 2.13. We have $X\le_{hq}Y\Rightarrow X\ge_{qgf}Y.$

Proof. From Definition $2.1(v)$ of [Reference Krishnan, Sunoj and Unnikrishnan Nair16], we obtain

\begin{eqnarray*} X\le_{hq}Y\Rightarrow \frac{1}{(1-p)q_{X}(p)}\ge \frac{1}{(1-p)q_{Y}(p)}\Rightarrow \left(\frac{1}{q_{X}(p)}\right)^{\beta-1}\ge \left(\frac{1}{q_{Y}(p)}\right)^{\beta-1}\\ \Rightarrow \int_{0}^{1}\left(\frac{1}{q_{X}(p)}\right)^{\beta-1}dp\ge \int_{0}^{1}\left(\frac{1}{q_{Y}(p)}\right)^{\beta-1}dp\Rightarrow I_{\beta}^{Q}(X)\ge I_{\beta}^{Q}(Y)\Rightarrow X\ge_{qgf}Y, \end{eqnarray*}

as required.

It can be easily proved that the hazard QF order and reversed hazard QF order are equivalent, that is $X\le_{hq}Y\Leftrightarrow X\ge_{rhq}Y$. One may refer to Definition $2.1(vi)$ of [Reference Krishnan, Sunoj and Unnikrishnan Nair16] for the definition of reversed hazard QF order. Hence, the condition “$X\le_{hq}Y$” in Theorem 2.13 can be replaced by “$X\ge_{rhq}Y$” in order to get $X\ge_{qgf}Y$.

Definition 2.14. An RV X is said to be smaller than Y in the sense of dispersive ordering, denoted by $X\le_{disp}Y,$ if $Q_{Y}(p)-Q_{X}(p)$ is increasing in $p\in(0,1).$

Theorem 2.15. Let X and Y be two RVs with QFs QX and QY, respectively.

  1. (i) Let $X\le_{rhq}Y$ and ψ be increasing and concave. Then, $ X\le_{disp}Y\Rightarrow \psi(X)\le_{qgf} \psi(Y); $

  2. (ii) Let $X\le_{hq}Y$ and ψ be increasing and convex. Then, $ X\le_{disp}Y\Rightarrow \psi(X)\ge_{qgf} \psi(Y). $

Proof. We prove the first part of the theorem, while the second part can be proved in an analogous manner. First, we have $X\le_{disp}Y$ implying $Q_{X}(p)\le Q_{Y}(p)$. Then, as ψ is increasing and concave, we have

(2.23)\begin{eqnarray} \psi'(Q_{X}(p))\ge \psi'(Q_{Y}(p))\Rightarrow 0\le \frac{1}{\psi'(Q_{X}(p))}\le \frac{1}{\psi'(Q_{Y}(p))}. \end{eqnarray}

Further,

(2.24)\begin{eqnarray} X\le_{rhq}Y\Rightarrow \frac{1}{q_{X}(p)}\le \frac{1}{q_{Y}(p)}. \end{eqnarray}

Upon combining (2.23) and (2.24), we obtain

(2.25)\begin{equation} \frac{1}{q_{X}(p) \psi'(Q_{X}(p))}\le \frac{1}{q_{Y}(p) \psi'(Q_{Y}(p))}\Rightarrow \int_{0}^{1}\frac{dp}{\left\{q_{X}(p) \psi'(Q_{X}(p))\right\}^{\beta-1}}\le \int_{0}^{1}\frac{dp}{\left\{q_{Y}(p) \psi'(Q_{Y}(p))\right\}^{\beta-1}}. \end{equation}

Hence, the required result follows from (2.25) and (2.13).

3. Quantile-based residual IGF

The residual lifetime of a system with lifetime X, given that the system is working at time $t \gt 0,$ is defined as $X_{t}=[X-t|X \gt t].$ The IGF has been studied for residual lifetimes by [Reference Kharazmi and Balakrishnan12]. We now consider the residual IGF based on the QF, starting with the definition of the quantile-based residual IGF (Q-RIGF).

Definition 3.1. Let QX and qX be the QF and QDF of an RV X. Then, the Q-RIGF of X is defined as

(3.1)\begin{eqnarray} I_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}f_{X}^{\beta}(Q_{X}(p))dQ_{X}(p)=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{X}^{1-\beta}(p)dp,~\beta\ge1. \end{eqnarray}

The Q-RIGF in (3.1) can be expressed in terms of hazard and reversed hazard QFs as

(3.2)\begin{eqnarray} I_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}\{(1-p)H_{X}(p)\}^{\beta-1}dp=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}(p\tilde{H}_{X}(p))^{\beta-1}dp. \end{eqnarray}

Upon taking the derivative of (3.1) with respect to β, we get

(3.3)\begin{eqnarray} \frac{\partial}{\partial\beta}I_{\beta}^{Q}(X;Q_{X}(u))=-\left[\frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{X}^{1-\beta}(p)\log q_{X}(p)dp+\log(1-u)I_{\beta}^{Q}(X;Q_{X}(u))\right], \end{eqnarray}

from which the following observations can be readily made:

  1. (i) $I_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=1}=1;$   $I_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=2}=-2J^{Q}(X;Q_{X}(u)),$

  2. (ii) $I_{\beta}^{Q}(X;Q_{X}(u))|_{u=0}=I_{\beta}^{Q}(X);$  $\frac{\partial}{\partial\beta}I_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=1}=-S^{Q}(X;Q_{X}(u)),$

where $J^{Q}(X;Q_{X}(u))=-\frac{1}{2(1-u)^{2}}\int_{u}^{1}\frac{dp}{q_{X}(p)}$ is the quantile-based residual extropy (see Eq. (4.1) of [Reference Krishnan, Sunoj and Unnikrishnan Nair16]) and $S^{Q}(X;Q_{X}(u))=\log(1-u)+\frac{1}{1-u}\int_{u}^{1}\log q_{X}(p)dp$ is the quantile-based residual Shannon entropy (see Eq. (8) of [Reference Sunoj and Sankaran25]). The following example presents closed-form expressions for the Q-RIGF for some distributions.

Example 3.2.

  • Consider exponential distribution with hazard rate $\theta.$ Then, from (3.1), we obtain $I_{\beta}^{Q}(X;Q_{X}(u))=\frac{\theta^{\beta-1}}{\beta}$;

  • For the power distribution with QF $Q_{X}(p)=\alpha p^{\delta},~\alpha,\delta \gt 0$, we obtain from (3.1) that $I_{\beta}^{Q}(X;Q_{X}(u))=\frac{(\alpha\delta)^{1-\beta}}{\left\{(\delta-1)(1-\beta)+1\right\}(1-u)^{\beta}}[1-u^{(\delta-1)(1-\beta)+1}];$

  • For Davies distribution with QF $Q_{X}(p)=\frac{cp}{1-p},~c \gt 0$, from (3.1), we obtain $I_{\beta}^{Q}(X;Q_{X}(u))=\frac{c^{1-\beta}(1-u)^{\beta-1}}{2(\beta-1)+1};$

  • For the re-scaled beta distribution with $Q_{X}(p)=r[1-(1-p)^{\frac{1}{c}}],~c,r \gt 0$, from (3.1), we obtain $I_{\beta}^{Q}(X;Q_{X}(u))=(\frac{r}{c})^{1-\beta} \frac{(1-u)^{\frac{1-\beta}{c}}}{(\frac{1}{c}-1)(1-\beta)+1}$, provided $(\frac{1}{c}-1)(1-\beta)+1 \gt 0.$
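These closed forms are easy to spot-check numerically from (3.1); a sketch for the power distribution (the values $\alpha=2$, $\delta=2$, $\beta=1.5$, $u=0.3$ are arbitrary):

```python
# Spot-check of the Q-RIGF closed form for the power distribution in
# Example 3.2; alpha = 2, delta = 2, beta = 1.5, u = 0.3 are arbitrary.
from scipy.integrate import quad

alpha, delta, beta, u = 2.0, 2.0, 1.5, 0.3
q = lambda p: alpha * delta * p ** (delta - 1)  # QDF of Q_X(p) = alpha p^delta

# Definition (3.1), integrated numerically:
num = quad(lambda p: q(p) ** (1 - beta), u, 1)[0] / (1 - u) ** beta

# Closed form from Example 3.2:
e = (delta - 1) * (1 - beta) + 1
cf = (alpha * delta) ** (1 - beta) * (1 - u ** e) / (e * (1 - u) ** beta)
```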

In Figure 3, we have plotted the Q-RIGF of the power and Davies distributions to show that it is, in general, not monotone with respect to $u.$

Figure 3. (a) Plot of Q-RIGF for power distribution considered in Example 3.2, for α = 0.1, β = 2.2, and δ = 2.3; (b) Plot of Q-RIGF for Davies distribution considered in Example 3.2 for $\beta=1.2,1.25,1.5,1.75,2.$ Here, along the x-axis, we take the values of $u\in(0,1)$.

In the following example, we consider the linear mean residual QF family of distributions, which has no tractable distribution function (see [Reference Midhu, Sankaran and Unnikrishnan Nair18]) and includes the exponential and uniform distributions as special cases.

Example 3.3. Let $Q_{X}(p)=-(c+\mu)\log(1-p)-2cp,~\mu \gt 0,~-\mu \le c \lt \mu,~0 \lt p \lt 1.$ Then, the Q-RIGF is given by

(3.4)\begin{eqnarray} I_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}\left\{\frac{c+\mu}{1-p}-2c\right\}^{1-\beta}dp. \end{eqnarray}

Note that it is not possible to obtain a closed-form expression for the integral in (3.4). However, in order to get an idea regarding its behavior, the graph of $I_{\beta}^{Q}(X;Q_{X}(u))$ in (3.4) has been plotted in Figure 4 for some specific values of $\beta,$ c, and $\mu.$

Figure 4. Plot of Q-RIGF for the distribution with QF in Example 3.3 with c = 1 and µ = 2. Here, along the x-axis, we take the values of $u\in(0,1).$ Three values of β have been considered, viz., $\beta=1.2, 1.4$, and 1.7 (presented from above).

Further, by differentiating (3.1) with respect to u, we get

(3.5)\begin{eqnarray} q_{X}(u)=\left[\beta (1-u)^{\beta-1}I_{\beta}^{Q}(X;Q_{X}(u))-(1-u)^{\beta}\frac{\partial}{\partial u}I_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{1-\beta}}. \end{eqnarray}

The expression in (3.5) can be utilized in two ways. First, it shows that the distribution of X is characterized by $I_{\beta}^{Q}(X;Q_{X}(u))$. Second, it shows that a new QF can be generated from an assumed functional form of $I_{\beta}^{Q}(X;Q_{X}(u))$. From the first equality in (3.2), we obtain

(3.6)\begin{eqnarray} H_{X}(u)=\left[(u-1)\frac{\partial }{\partial u}I_{\beta}^{Q}(X;Q_{X}(u))+\beta I_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{\beta-1}}, \end{eqnarray}

which is useful in determining $H_{X}(u)$ based on $I_{\beta}^{Q}(X;Q(u)).$ Further, from the expression in the second equality in (3.2), we obtain

(3.7)\begin{eqnarray} \tilde{H}_{X}(u)=\frac{1-u}{u}\left[(u-1)\frac{\partial }{\partial u}I_{\beta}^{Q}(X;Q_{X}(u))+\beta I_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{\beta-1}}. \end{eqnarray}

Analogous to (3.6), the expression in (3.7) can be utilized for determining the reversed hazard QF $\tilde{H}_{X}(u)$ of $X.$ We now introduce two nonparametric classes of life distributions based on Q-RIGF.
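The characterization via (3.5) can be illustrated numerically: treating the Q-RIGF of the power distribution of Example 3.2 as known only through numerical integration, the QDF is recovered at a chosen point by numerical differencing (a sketch; the parameter values are arbitrary):

```python
# Recover the QDF at u = 0.4 via (3.5), using a numerically computed Q-RIGF
# for the power distribution; alpha = 2, delta = 2, beta = 1.5 arbitrary.
from scipy.integrate import quad

alpha, delta, beta = 2.0, 2.0, 1.5
q = lambda p: alpha * delta * p ** (delta - 1)

def I(u):  # Q-RIGF (3.1)
    return quad(lambda p: q(p) ** (1 - beta), u, 1)[0] / (1 - u) ** beta

u, h = 0.4, 1e-4
dI = (I(u + h) - I(u - h)) / (2 * h)        # numerical derivative in u
# Formula (3.5):
q_rec = (beta * (1 - u) ** (beta - 1) * I(u)
         - (1 - u) ** beta * dI) ** (1 / (1 - beta))
```

The recovered value agrees with $q_{X}(0.4)=\alpha\delta\,(0.4)^{\delta-1}=1.6$ up to discretization error.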

Definition 3.4. An RV X is said to have increasing (decreasing) Q-RIGF, that is, IQ-RIGF (DQ-RIGF), if $I_{\beta}^{Q}(X;Q_{X}(u))$ is increasing (decreasing) with respect to $u.$

Based on the proposed nonparametric classes, the following bounds can be provided in terms of the hazard and reversed hazard QFs:

(3.8)\begin{equation} I_{\beta}^{Q}(X;Q_{X}(u)) \begin{cases} \ge (\le) \displaystyle\frac{(\frac{u}{1-u})^{\beta-1}}{\beta}\tilde{H}_{X}^{\beta-1}(u) & \text{if}\ X \ \text{is IQ-RIGF (DQ-RIGF)},\\ \ge (\le)\displaystyle\frac{1}{\beta}H_{X}^{\beta-1}(u) & \text{if}\ X \ \text{is IQ-RIGF (DQ-RIGF)}, \end{cases} \end{equation}

both of which follow from (3.6) and (3.7), since $(u-1)\frac{\partial }{\partial u}I_{\beta}^{Q}(X;Q_{X}(u))\le 0~(\ge 0)$ for $u\in(0,1)$ when X is IQ-RIGF (DQ-RIGF).

Further, from (3.5), we obtain bounds for Q-IGF as

(3.9)\begin{equation} I_{\beta}^{Q}(X;Q_{X}(u)) \begin{cases} \ge \displaystyle\frac{((1-u)q_{X}(u))^{1-\beta}}{\beta} & \text{if}\ X \ \text{is IQ-RIGF},\\ \le \displaystyle\frac{((1-u)q_{X}(u))^{1-\beta}}{\beta} & \text{if}\ X \ \text{is DQ-RIGF}. \end{cases} \end{equation}

For the exponential distribution, the Q-RIGF is independent of u (see Example 3.2), implying that this distribution forms the boundary between the IQ-RIGF and DQ-RIGF classes. Similar to Theorem 2.6, the following result can be established, providing the effect of increasing transformations on the Q-RIGF.

Theorem 3.5. Suppose X is an RV with QF QX and QDF qX. Further, suppose ψ is a positive-valued, differentiable and increasing function. Then,

(3.10)\begin{eqnarray} I_{\beta}^{Q}(\psi(X);Q_{X}(u))=\frac{1}{(1-u)^{\beta}}\int_{u}^{1}\frac{q_{X}^{1-\beta}(p)}{\{\psi'(Q_{X}(p))\}^{\beta-1}}dp. \end{eqnarray}

Proof. The proof is similar to that of Theorem 2.6, and is therefore omitted.
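A small numerical check of (3.10) can be carried out as follows (a sketch with illustrative choices of our own: ψ(x) = x² and X a unit exponential). The left-hand side is computed from the definition, using a central-difference approximation to the QDF of ψ(X) obtained from $Q_{\psi(X)}(p)=\psi(Q_{X}(p))$, and compared with the right-hand side of (3.10).

```python
import math

def q_rigf(qdf, u, beta, n=20000):
    # (1-u)^(-beta) * int_u^1 qdf(p)^(1-beta) dp, via the midpoint rule
    h = (1.0 - u) / n
    s = sum(qdf(u + (i + 0.5) * h) ** (1.0 - beta) for i in range(n)) * h
    return s / (1.0 - u) ** beta

Qx = lambda p: -math.log(1.0 - p)      # QF of Exp(1)
qx = lambda p: 1.0 / (1.0 - p)         # QDF of Exp(1)
psi = lambda x: x * x                  # increasing on the support of X
dpsi = lambda x: 2.0 * x

def q_psi_numeric(p, h=1e-6):
    # central-difference QDF of psi(X), from Q_{psi(X)}(p) = psi(Q_X(p))
    return (psi(Qx(p + h)) - psi(Qx(p - h))) / (2.0 * h)

u, beta = 0.2, 1.5
lhs = q_rigf(q_psi_numeric, u, beta)                        # definition applied to psi(X)
rhs = q_rigf(lambda p: qx(p) * dpsi(Qx(p)), u, beta)        # integrand of (3.10)
```

The two quantities agree to the accuracy of the numerical differentiation, as the chain rule $q_{\psi(X)}(p)=\psi'(Q_{X}(p))q_{X}(p)$ predicts.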

Similar to Definition 2.10, we now present the definition of Q-RIGF order for two RVs X and Y.

Definition 3.6. An RV X is said to be smaller than Y in the sense of the Q-RIGF order, denoted by $X\le_{q-ri}Y,$ if $I_{\beta}^{Q}(X;Q_{X}(u))\le I_{\beta}^{Q}(Y;Q_{Y}(u)),$ for all $u\in(0,1)$ and $\beta\ge1.$

Next, we obtain a relation between dispersive and Q-RIGF orders.

Theorem 3.7. We have $X\le_{disp}Y\Rightarrow X\ge_{q-ri}Y.$

Proof. Note that $X\le_{disp}Y$ implies $Q_{Y}(p)-Q_{X}(p)$ is increasing with respect to $p\in(0,1)$. So, $\frac{d}{dp}(Q_{Y}(p)-Q_{X}(p))\ge0$, implying $q_{Y}(p)\ge q_{X}(p)$. Hence, we have

(3.11)\begin{eqnarray} \frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{Y}^{1-\beta}(p)dp\le \frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{X}^{1-\beta}(p)dp\Rightarrow X\ge_{q-ri}Y, \end{eqnarray}

as required.

Next, we discuss stochastic orders connecting two random lifetimes X and Y with Q-RIGFs $I_{\beta}^{Q}(X;Q_{X}(u))$ and $I_{\beta}^{Q}(Y;Q_{Y}(u)),$ respectively.

Theorem 3.8. For two RVs X and Y with QFs $Q_{X}(.)$ and $Q_{Y}(.)$ and QDFs $q_{X}(.)$ and $q_{Y}(.)$, respectively, we have $ X\le_{hq}Y\Rightarrow X\ge_{q-ri}Y. $

Proof. Under the assumptions made, we have

\begin{eqnarray*} X\le_{hq}Y&\Rightarrow &\frac{1}{(1-p)q_{X}(p)}\ge \frac{1}{(1-p)q_{Y}(p)}\nonumber\\ &\Rightarrow& q_{X}^{1-\beta}(p)\ge q_{Y}^{1-\beta}(p)\nonumber\\ &\Rightarrow& \frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{X}^{1-\beta}(p)dp\ge \frac{1}{(1-u)^{\beta}}\int_{u}^{1}q_{Y}^{1-\beta}(p)dp\nonumber\\ &\Rightarrow&I_{\beta}^{Q}(X;Q_{X}(u))\ge I_{\beta}^{Q}(Y;Q_{Y}(u)), \end{eqnarray*}

as required.

Moreover, since the hazard QF order and reversed hazard QF order are equivalent, in Theorem 3.8, we can also consider $\le_{rhq}$ instead of $\le_{hq}$. Note that the reverse implication in Theorem 3.8 may not hold.

Theorem 3.9. Let $\frac{I_{\beta}^{Q}(X;Q_{X}(u))}{I_{\beta}^{Q}(Y;Q_{Y}(u))}$ be increasing with respect to $u\in(0,1)$. Then, $X\le_{q-ri}Y\Rightarrow X\ge_{hq}Y.$

Proof. Under the assumptions made, we have

(3.12)\begin{eqnarray} \frac{q_{X}^{1-\beta}(u)}{q_{Y}^{1-\beta}(u)}\le\frac{\int_{u}^{1}q_{X}^{1-\beta}(p)dp}{\int_{u}^{1}q_{Y}^{1-\beta}(p)dp}\le1. \end{eqnarray}

Thus, since $\beta\ge1$, we get $q_{X}(u)\ge q_{Y}(u)$, and so

(3.13)\begin{eqnarray} \frac{1}{q_{X}(u)}\le \frac{1}{q_{Y}(u)}\Rightarrow \frac{1}{(1-u)q_{X}(u)}\le \frac{1}{(1-u)q_{Y}(u)}\Rightarrow X\ge_{hq} Y, \end{eqnarray}

as required.

We end this section with a characterization result of the exponential distribution in connection with the Q-RIGF.

Theorem 3.10. The Q-RIGF of a nonnegative RV is constant (independent of u) if and only if it is exponentially distributed.

Proof. The “if” part is clear from Example 3.2. To establish the “only if” part, we consider the Q-RIGF to be constant, that is, $I_{\beta}^{Q}(X;Q_{X}(u))=k,$ where k is a constant (here, independent of u). Further, differentiating (3.1) with respect to u, and then substituting $I_{\beta}^{Q}(X;Q_{X}(u))=k,$ we obtain after some simplification

(3.14)\begin{eqnarray} q_{X}(u)=\frac{(k\beta)^{\frac{1}{1-\beta}}}{1-u}=\frac{k^*}{1-u}, \end{eqnarray}

which is indeed the QDF of the exponential distribution. This completes the proof of the theorem.

4. Quantile-based past IGF

Just as the residual lifetime plays an important role in reliability, the past lifetime of a failed system is useful in studying its history. The past lifetime of a system with lifetime X is given by $\widetilde{X}_{t}=[t-X|X \lt t]$, where t is the prefixed inspection time. We now define the quantile-based past IGF (Q-PIGF).

Definition 4.1. Let QX and qX be the QF and QDF of X. Then, the Q-PIGF of X is defined as

(4.1)\begin{eqnarray} \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{u^{\beta}}\int_{0}^{u}f_{X}^{\beta}(Q_{X}(p))dQ_{X}(p)=\frac{1}{u^{\beta}}\int_{0}^{u}q_{X}^{1-\beta}(p)dp,~\beta\ge1. \end{eqnarray}

Similar to (3.2), the Q-PIGF can be expressed in terms of hazard and reversed hazard QFs as follows:

(4.2)\begin{eqnarray} \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{u^\beta}\int_{0}^{u}\left\{(1-p)H_{X}(p)\right\}^{\beta-1}dp=\frac{1}{u^\beta}\int_{0}^{u}\left\{p\widetilde{H}_{X}(p)\right\}^{\beta-1}dp. \end{eqnarray}

By differentiating (4.1) with respect to β, we obtain

(4.3)\begin{eqnarray} \frac{\partial}{\partial\beta}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=-\left[\frac{1}{u^\beta}\int_{0}^{u}q_{X}^{1-\beta}(p)\log q_{X}(p)dp+ \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u)) \log u\right], \end{eqnarray}

from which the following observations can be easily made:

  (i) $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=1}=1;$   $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=2}=-2\widetilde{J}^{Q}(X;Q_{X}(u)),$

  (ii) $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))|_{u=1}=I_{\beta}^{Q}(X);$  $\frac{\partial}{\partial\beta}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))|_{\beta=1}=-\widetilde{S}^{Q}(X;Q_{X}(u)),$

where $\widetilde{J}^{Q}(X;Q_{X}(u))=-\frac{1}{2u^{2}}\int_{0}^{u}\frac{dp}{q_{X}(p)}$ is the quantile-based past extropy (see Eq. (5.5) of [Reference Krishnan, Sunoj and Unnikrishnan Nair16]) and $\widetilde{S}^{Q}(X;Q_{X}(u))=\log u+\frac{1}{u}\int_{0}^{u}\log q_{X}(p)dp$ is the quantile-based past Shannon entropy (see Eq. (8) of [Reference Sunoj, Sankaran and Nanda26]). In the following example, we present closed-form expressions for the Q-PIGF for some distributions.

Example 4.2.

  • Consider power distribution with QF $Q_{X}(p)=\alpha p^{\delta},~\alpha,\delta \gt 0$. Then, the Q-PIGF can be obtained as $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\frac{(\alpha\delta)^{1-\beta}}{u^{\beta}}\frac{u^{(\delta-1)(1-\beta)+1}}{(\delta-1)(1-\beta)+1},$ provided $(\delta-1)(1-\beta)+1 \gt 0;$

  • For exponential distribution with hazard rate $\theta,$ the Q-PIGF is obtained from (4.1) as $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\frac{\theta^{\beta-1}}{\beta u^\beta}\left\{1-(1-u)^{\beta}\right\}.$

  • For the uniform distribution with QF $Q_{X}(p)=a+(b-a)p$, $0 \lt a \lt b$, the Q-PIGF can be obtained as $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\left\{u(b-a)\right\}^{1-\beta};$

  • For the half-logistic distribution with QF $Q_{X}(p)=\sigma \log (\frac{1+p}{1-p})$, $\sigma \gt 0,$ the Q-PIGF can be obtained as $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))=\frac{1}{u^{\beta}}\int_{0}^{u}(\frac{2\sigma}{1-p^2} )^{1-\beta}dp.$
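The closed forms above are easy to validate numerically (a sketch of our own, with illustrative parameter values): we integrate (4.1) by the midpoint rule for the power and exponential cases and compare with the stated expressions.

```python
def q_pigf(qdf, u, beta, n=40000):
    # Q-PIGF (4.1): u^(-beta) * int_0^u qdf(p)^(1-beta) dp, via the midpoint rule
    h = u / n
    return sum(qdf((i + 0.5) * h) ** (1.0 - beta) for i in range(n)) * h / u ** beta

u, beta = 0.6, 1.5

# power distribution: Q(p) = alpha * p^delta, so q(p) = alpha * delta * p^(delta-1)
alpha, delta = 2.0, 1.3
num_pow = q_pigf(lambda p: alpha * delta * p ** (delta - 1.0), u, beta)
e = (delta - 1.0) * (1.0 - beta) + 1.0        # requires e > 0, as stated above
cf_pow = (alpha * delta) ** (1.0 - beta) * u ** e / (e * u ** beta)

# exponential with hazard rate theta: q(p) = 1/(theta*(1-p))
theta = 2.0
num_exp = q_pigf(lambda p: 1.0 / (theta * (1.0 - p)), u, beta)
cf_exp = theta ** (beta - 1.0) * (1.0 - (1.0 - u) ** beta) / (beta * u ** beta)
```

In both cases the numerical integral matches the closed form to several decimal places.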

Further, differentiating (4.1) with respect to u, we obtain

(4.4)\begin{eqnarray} q_{X}(u)=\left[\beta u^{\beta-1}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))+u^\beta \frac{\partial}{\partial u}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{1-\beta}}, \end{eqnarray}

which is useful in obtaining a characterization of a distribution based on the Q-PIGF. Similar to Q-RIGF, (4.4) can be used to produce a new QF. The following two relations that have been derived from (4.2) are useful in determining hazard QF and reversed hazard QF, respectively:

(4.5)\begin{eqnarray} H_{X}(u)&=&\frac{1}{1-u}\left[\beta u^{\beta-1}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))+u^{\beta}\frac{\partial}{\partial u}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{\beta-1}}, \end{eqnarray}
(4.6)\begin{eqnarray} \widetilde{H}_{X}(u)&=&\frac{1}{u}\left[\beta u^{\beta-1}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))+u^{\beta}\frac{\partial}{\partial u}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))\right]^{\frac{1}{\beta-1}}. \end{eqnarray}

Now, two nonparametric classes of distributions based on the Q-PIGF can be constructed, analogous to Definition 3.4.

Definition 4.3. An RV X is said to have increasing (decreasing) Q-PIGF, that is, IQ-PIGF (DQ-PIGF), if $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))$ is increasing (decreasing) with respect to $u.$

Similar to (3.8) and (3.9), we can provide bounds for Q-PIGF as follows:

(4.7)\begin{equation} \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u)) \begin{cases} \le (\ge) \displaystyle\frac{(\frac{1-u}{u})^{\beta-1}}{\beta}H_{X}^{\beta-1}(u) & \text{if}\ X\ \text{is IQ-PIGF (DQ-PIGF)},\\ \le (\ge)\displaystyle\frac{1}{\beta}\tilde{H}_{X}^{\beta-1}(u) & \text{if}\ X\ \text{is IQ-PIGF (DQ-PIGF)}, \end{cases} \end{equation}

and

(4.8)\begin{equation} \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u)) \begin{cases} \le \displaystyle\frac{(uq_{X}(u))^{1-\beta}}{\beta} & \text{if}\ X\ \text{is IQ-PIGF},\\ \ge \displaystyle\frac{(uq_{X}(u))^{1-\beta}}{\beta} & \text{if}\ X\ \text{is DQ-PIGF}. \end{cases} \end{equation}

Definition 4.4. An RV X is said to be smaller than Y in the sense of the Q-PIGF order, denoted by $X\le_{q-pi}Y,$ if $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))\le \widetilde{I}_{\beta}^{Q}(Y;Q_{Y}(u)),$ for all $u\in(0,1)$ and $\beta\ge1.$

Theorem 4.5. We have $X\le_{disp}Y\Rightarrow X\ge_{q-pi}Y.$

Proof. The proof is analogous to that of Theorem 3.7, and is therefore omitted.

Theorem 4.6. For two RVs X and Y with QFs $Q_{X}(.)$ and $Q_{Y}(.)$ and QDFs $q_{X}(.)$ and $q_{Y}(.)$, respectively, we have $ X\le_{rq}Y\Rightarrow X\ge_{q-pi}Y. $

Proof. The proof is similar to that of Theorem 3.8, and is therefore omitted.

Theorem 4.7. Let $\frac{\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))}{\widetilde{I}_{\beta}^{Q}(Y;Q_{Y}(u))}$ be decreasing with respect to $u\in(0,1)$. Then, $X\le_{q-pi}Y\Rightarrow X\ge_{rq}Y.$

Proof. The proof follows along the lines of Theorem 3.9, and is therefore omitted.

We close this section with a result showing that the Q-RIGF and Q-PIGF determine each other, so that knowledge of one of these concepts is sufficient to study the other.

Theorem 4.8. We have

(4.9)\begin{eqnarray} I_{\beta}^{Q}(X;Q_{X}(u))&=&(1-u)^{-\beta}\left[\widetilde{I}_{\beta}^{Q}(X;Q_{X}(1))-u^{\beta}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))\right], \end{eqnarray}
(4.10)\begin{eqnarray} \widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))&=&u^{-\beta}\left[I_{\beta}^{Q}(X;Q_{X}(0))-(1-u)^{\beta}I_{\beta}^{Q}(X;Q_{X}(u))\right], \end{eqnarray}

where $\widetilde{I}_{\beta}^{Q}(X;Q_{X}(1))=I_{\beta}^{Q}(X;Q_{X}(0))=I_{\beta}^{Q}(X).$

Proof. From (3.1) and (4.1), we obtain

\begin{equation*}q_{X}^{1-\beta}(u)=-\frac{d}{du}[(1-u)^{\beta}I_{\beta}^{Q}(X;Q_{X}(u))]~\mbox{and}~q_{X}^{1-\beta}(u)=\frac{d}{du}[u^{\beta}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))].\end{equation*}

Now, equating these two and then integrating, we obtain

(4.11)\begin{eqnarray} (1-u)^{\beta}I_{\beta}^{Q}(X;Q_{X}(u))=-u^{\beta}\widetilde{I}_{\beta}^{Q}(X;Q_{X}(u))+l, \end{eqnarray}

where l is a constant. Further, when u tends to 0, we have $l=I_{\beta}^{Q}(X;Q_{X}(0))$, and when u tends to 1, $l=\widetilde{I}_{\beta}^{Q}(X;Q_{X}(1)).$ Upon using these facts, the desired identities follow.
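The identity (4.11) can be verified directly from closed forms. For the uniform distribution on (a, b) (an illustrative choice of ours), the Q-PIGF is $\{u(b-a)\}^{1-\beta}$ (Example 4.2), and direct integration gives the Q-RIGF as $\{(1-u)(b-a)\}^{1-\beta}$; the two weighted terms then sum to the marginal value $(b-a)^{1-\beta}$ for every u.

```python
def uniform_identity(u, beta, width):
    # width = b - a; closed forms for the uniform distribution on (a, b)
    rigf = ((1.0 - u) * width) ** (1.0 - beta)   # Q-RIGF
    pigf = (u * width) ** (1.0 - beta)           # Q-PIGF (Example 4.2)
    # left side of (4.11): (1-u)^beta * Q-RIGF + u^beta * Q-PIGF
    lhs = (1.0 - u) ** beta * rigf + u ** beta * pigf
    return lhs, width ** (1.0 - beta)            # should coincide with I_beta^Q(X)
```

Algebraically, $(1-u)^{\beta}\{(1-u)w\}^{1-\beta}+u^{\beta}(uw)^{1-\beta}=(1-u)w^{1-\beta}+uw^{1-\beta}=w^{1-\beta}$, confirming that the constant l equals $I_{\beta}^{Q}(X)$.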

5. Application of the Q-IGF

This section focuses on the construction of an empirical estimator of Q-IGF and examines its usefulness using a real-life data set. In this regard, consider a random sample of size n as $X_1,\ldots,X_n.$ Further, let $X_{(1)}\le \cdots\le X_{(n)}$ be the order statistics of this random sample. Then, the empirical QF is given by (see [Reference Parzen22])

(5.1)\begin{eqnarray} \hat{Q}_{X}(v)=n\left(\frac{j}{n}-v\right)X_{(j-1)}+n\left(v-\frac{j-1}{n}\right)X_{(j)}, \end{eqnarray}

where $\frac{j-1}{n}\le v\le \frac{j}{n}$, for $j=1,\ldots,n.$ Thus, the corresponding empirical estimator of the QDF is

(5.2)\begin{eqnarray} \hat{q}_{X}(v)=n(X_{(j)}-X_{(j-1)}), \end{eqnarray}

for $\frac{j-1}{n}\le v\le \frac{j}{n}$, for $j=1,\ldots,n.$ Using (5.2), the empirical Q-IGF estimator is obtained as

(5.3)\begin{eqnarray} \hat{I}_{\beta}^{Q}(X)=\int_{0}^{1}\hat{q}_{X}^{1-\beta}(p)dp, \end{eqnarray}

where $\hat{q}_{X}(p)$ is as given in (5.2). Evaluating the integral piecewise then yields

(5.4)\begin{eqnarray} \hat{I}_{\beta}^{Q}(X)=\frac{1}{n}\sum_{j=1}^{n}\left[n\left(X_{(j)}-X_{(j-1)}\right)\right]^{1-\beta},~\beta\ge1. \end{eqnarray}

Further, in order to see the usefulness of the proposed estimator in (5.4), we compute its value based on a real data set (see [Reference Zimmer, Bert Keats and Wang31]), which represents the time (in months) to first failure of twenty electric carts:

\begin{equation*}0.9,1.5,2.3,3.2,3.9,5.0,6.2,7.5,8.3,10.4,11.1,12.6,15.0,16.3,19.3,22.6,24.8,31.8,38.1,53.0.\end{equation*}
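The estimator in (5.4) is simple to compute on such data. A minimal sketch follows; note that the j = 1 term of (5.4) involves $X_{(0)}$, which we take to be 0 here (a convention for nonnegative lifetime data, and an assumption on our part since (5.2) leaves it unspecified). At β = 1, the estimator equals 1 for any sample, matching $I_{1}^{Q}(X)=1$.

```python
def empirical_q_igf(sample, beta):
    # Empirical Q-IGF (5.4): (1/n) * sum_j [n * (X_(j) - X_(j-1))]^(1 - beta),
    # with X_(0) = 0 (an assumed convention for nonnegative data)
    xs = sorted(sample)
    n = len(xs)
    total, prev = 0.0, 0.0
    for x in xs:
        total += (n * (x - prev)) ** (1.0 - beta)
        prev = x
    return total / n

# time (in months) to first failure of the twenty electric carts
carts = [0.9, 1.5, 2.3, 3.2, 3.9, 5.0, 6.2, 7.5, 8.3, 10.4, 11.1, 12.6,
         15.0, 16.3, 19.3, 22.6, 24.8, 31.8, 38.1, 53.0]

est = empirical_q_igf(carts, 1.0)   # equals 1.0 exactly when beta = 1
```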

Using a chi-square goodness-of-fit test and a Q-Q plot, Krishnan et al. [Reference Krishnan, Sunoj and Unnikrishnan Nair16] showed that the data set is well fitted by the Davies distribution with QF $Q_{X}(p)=\frac{c p^{a}}{(1-p)^{b}},$ $a,b,c \gt 0$. We recall that the Davies distribution does not have a tractable CDF, but has a closed-form QF. Further, equating sample L-moments with population L-moments, Krishnan et al. [Reference Krishnan, Sunoj and Unnikrishnan Nair16] obtained the estimated values of the parameters of the Davies distribution to be

\begin{equation*}\hat{a}=1.1255,~\hat{b}=0.2911,~\hat{c}=18.6139.\end{equation*}

We note that for the Davies distribution, the parametric estimate of the Q-IGF is obtained as

(5.5)\begin{eqnarray} \widehat{I}_{\beta}^{Q}(X)=\hat{c}^{1-\beta}\int_{0}^{1}\left[\frac{\hat{a}p^{\hat{a}-1}}{(1-p)^{\hat{b}}}+\frac{\hat{b}p^{\hat{a}}}{(1-p)^{\hat{b}+1}}\right]^{1-\beta}dp,~~\beta\ge1. \end{eqnarray}
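The integral in (5.5) has no closed form, but it is straightforward to evaluate numerically; the sketch below uses a midpoint rule (our own implementation, not the paper's Mathematica code). As a built-in check, at β = 1 the integrand is identically 1 and the estimate equals 1.

```python
def q_davies(p, a, b, c):
    # QDF of the Davies distribution, the derivative of Q(p) = c * p^a / (1-p)^b
    return c * (a * p ** (a - 1.0) / (1.0 - p) ** b
                + b * p ** a / (1.0 - p) ** (b + 1.0))

def param_q_igf(a, b, c, beta, n=100000):
    # midpoint-rule evaluation of the integral in (5.5)
    h = 1.0 / n
    return sum(q_davies((i + 0.5) * h, a, b, c) ** (1.0 - beta)
               for i in range(n)) * h

a_hat, b_hat, c_hat = 1.1255, 0.2911, 18.6139   # L-moment estimates quoted above
```

The midpoint rule is convenient here because the integrand has (integrable) endpoint singularities at p = 0 and p = 1 for β > 1, and the rule never evaluates the integrand at the endpoints.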

Using Mathematica software, the parametric estimates of the Q-IGF for the Davies family of distributions, with $\hat{a}=1.1255,~\hat{b}=0.2911,$ and $\hat{c}=18.6139,$ for different values of $\beta\ge1$ are plotted in Figure 5. In addition, we have also computed the empirical estimate of the Q-IGF given in (5.4) for some values of β, which are presented in Table 1.

Figure 5. Plot of the parametric estimate of the Q-IGF (given by (5.5)) for Davies distribution with respect to β. Here, we have considered β from 1 to 4.

Table 1. The estimated values of Q-IGF for different values of β.

6. Concluding remarks with discussion on a future problem

The concept of IGF has gained much attention recently, even though it was introduced by Golomb [Reference Golomb4] more than five decades ago. The importance of this function is that it helps to produce various information measures for models having closed-form probability density functions. However, there are many distributions that do not have closed-form distribution or density functions, but do possess explicit quantile functions. In this paper, we have proposed the Q-IGF and studied various properties of it. Some bounds, its connection to reliability theory, and its behavior under monotone transformations have also been discussed. The proposed IGFs have been studied in particular for escort and generalized escort distributions. Stochastic orders based on the newly proposed measure have also been introduced. Finally, we have extended the proposed concept to residual and past lifetimes, and discussed them under different contexts.

Very recently, Capaldo et al. [Reference Capaldo, Di Crescenzo and Meoli2] introduced the cumulative IGF of an RV X with CDF FX and survival function $\bar{F}_{X}$ as

(6.1)\begin{eqnarray} G_{\alpha,\beta}(X)=\int_{l}^{r}\{F_{X}(x)\}^{\alpha}\{1-F_{X}(x)\}^{\beta}dx,~\alpha,\beta\in \mathbb{R}, \end{eqnarray}

where $l=\inf\{x\in\mathbb{R}|F_{X}(x) \gt 0\}$ and $r=\sup\{x\in\mathbb{R}|\bar{F}_{X}(x) \gt 0\}.$ The quantile-based cumulative IGF of X is obtained as

(6.2)\begin{eqnarray} G_{\alpha,\beta}^{Q}(X)=\int_{0}^{1}p^{\alpha}(1-p)^{\beta}q_{X}(p)dp, \end{eqnarray}

where α and β are real numbers. We propose to explore the properties of this quantile-based measure in (6.2) in our future work.
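Substituting $x=Q_{X}(p)$ in (6.1) yields (6.2). As a quick numerical sanity check (a sketch of ours, with illustrative parameters): for the uniform distribution on (0, 1), $q_{X}(p)\equiv1$ and (6.2) reduces to the Beta function $B(\alpha+1,\beta+1)$.

```python
import math

def q_cigf(qdf, alpha, beta, n=20000):
    # quantile-based cumulative IGF (6.2): int_0^1 p^alpha * (1-p)^beta * qdf(p) dp
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        p = (i + 0.5) * h
        s += p ** alpha * (1.0 - p) ** beta * qdf(p)
    return s * h

def beta_fn(x, y):
    # Beta function computed via log-gamma for numerical stability
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

val = q_cigf(lambda p: 1.0, 1.0, 1.0)   # approximates B(2, 2) = 1/6
```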

References

Baratpour, S. & Khammar, A.H. (2018). A quantile-based generalized dynamic cumulative measure of entropy. Communications in Statistics-Theory and Methods, 47(13), 3104–3117.
Capaldo, M., Di Crescenzo, A., & Meoli, A. (2023). Cumulative information generating function and generalized Gini functions. Metrika, 1–29.
Di Crescenzo, A., Kayal, S., & Meoli, A. (2021). Fractional generalized cumulative entropy and its dynamic version. Communications in Nonlinear Science and Numerical Simulation, 102.
Golomb, S. (1966). The information generating function of a probability distribution. IEEE Transactions on Information Theory, 12(1), 75–77.
Hankin, R.K. & Lee, A. (2006). A new family of non-negative distributions. Australian & New Zealand Journal of Statistics, 48(1), 67–78.
Kayal, S. (2018a). Quantile-based Chernoff distance for truncated random variables. Communications in Statistics-Theory and Methods, 47(20), 4938–4957.
Kayal, S. (2018b). Quantile-based cumulative inaccuracy measure. Physica A: Statistical Mechanics and its Applications, 510, 329–344.
Kayal, S. & Tripathy, M.R. (2018). A quantile-based Tsallis-α divergence. Physica A: Statistical Mechanics and its Applications, 492, 496–505.
Kayal, S., Moharana, R., & Sunoj, S.M. (2020). Quantile-based study of (dynamic) inaccuracy measures. Probability in the Engineering and Informational Sciences, 34(2), 183–199.
Kharazmi, O. & Balakrishnan, N. (2021a). Cumulative and relative cumulative residual information generating measures and associated properties. Communications in Statistics-Theory and Methods, 1–14.
Kharazmi, O. & Balakrishnan, N. (2021b). Cumulative residual and relative cumulative residual Fisher information and their properties. IEEE Transactions on Information Theory, 67(10), 6306–6312.
Kharazmi, O. & Balakrishnan, N. (2021c). Jensen-information generating function and its connections to some well-known information measures. Statistics & Probability Letters, 170.
Kharazmi, O. & Balakrishnan, N. (2022). Generating function for generalized Fisher information measure and its application to finite mixture models. Hacettepe Journal of Mathematics and Statistics, 51(5), 1472–1483.
Kharazmi, O., Balakrishnan, N., & Ozonur, D. (2023a). Jensen-discrete information generating function with an application to image processing. Soft Computing, 27(8), 4543–4552.
Kharazmi, O., Contreras-Reyes, J.E., & Balakrishnan, N. (2023b). Optimal information, Jensen-RIG function and α-Onicescu's correlation coefficient in terms of information generating functions. Physica A: Statistical Mechanics and its Applications, 609.
Krishnan, A.S., Sunoj, S.M., & Unnikrishnan Nair, N. (2020). Some reliability properties of extropy for residual and past lifetime random variables. Journal of the Korean Statistical Society, 49, 457–474.
Kumar, V., Taneja, G., & Chhoker, S. (2019). Some results on quantile-based Shannon doubly truncated entropy. Statistical Theory and Related Fields, 3(1), 59–70.
Midhu, N.N., Sankaran, P.G., & Unnikrishnan Nair, N. (2013). A class of distributions with the linear mean residual quantile function and its generalizations. Statistical Methodology, 15, 1–24.
Unnikrishnan Nair, N., Sankaran, P.G., & Vinesh Kumar, B. (2012). Modelling lifetimes by quantile functions using Parzen's score function. Statistics, 46(6), 799–811.
Unnikrishnan Nair, N., Sankaran, P.G., & Balakrishnan, N. (2013). Quantile-based reliability analysis. New York: Springer.
Nanda, A.K. & Chowdhury, S. (2021). Shannon's entropy and its generalisations towards statistical inference in last seven decades. International Statistical Review, 89(1), 167–185.
Parzen, E. (1979). Nonparametric statistical data modeling. Journal of the American Statistical Association, 74(365), 105–121.
Ramberg, J.S. & Schmeiser, B.W. (1974). An approximate method for generating asymmetric random variables. Communications of the ACM, 17(2), 78–82.
Shannon, C.E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.
Sunoj, S.M. & Sankaran, P.G. (2012). Quantile based entropy function. Statistics & Probability Letters, 82(6), 1049–1053.
Sunoj, S.M., Sankaran, P.G., & Nanda, A.K. (2013). Quantile based entropy function in past lifetime. Statistics & Probability Letters, 83(1), 366–372.
Ubriaco, M.R. (2009). Entropies based on fractional calculus. Physics Letters A, 373(30), 2516–2519.
Xiong, H., Shang, P., & Zhang, Y. (2019). Fractional cumulative residual entropy. Communications in Nonlinear Science and Numerical Simulation, 78.
Zamani, Z., Kharazmi, O., & Balakrishnan, N. (2022). Information generating function of record values. Mathematical Methods of Statistics, 31(3), 120–133.
Zamani, Z. & Madadi, M. (2023). Quantile-based entropy function in past lifetime for order statistics and its properties. Filomat, 37(10), 3321–3334.
Zimmer, W.J., Bert Keats, J., & Wang, F.K. (1998). The Burr XII distribution in reliability analysis. Journal of Quality Technology, 30(4), 386–394.
Figure 1. Plots of Q-IGF for exponential distribution considered in Example 2.2, for (a) $\theta=0.1,0.6,0.8,1$ (presented from below) and (b) $\theta=2.7,3.5,4,4.5$ (presented from below). Along the x-axis, we have taken the values of β.

Figure 2. Plot of Q-IGF for the QDF given by (2.8) considered in Example 2.3, for c = 1 and µ = 1.5. Along the x-axis, we have taken the values of $\beta.$

Figure 3. (a) Plot of Q-RIGF for power distribution considered in Example 3.2, for α = 0.1, β = 2.2, and δ = 2.3; (b) Plot of Q-RIGF for Davies distribution considered in Example 3.2 for $\beta=1.2,1.25,1.5,1.75,2.$ Here, along the x-axis, we take the values of $u\in(0,1)$.

Figure 4. Plot of Q-RIGF for the distribution with QF in Example 3.3 with c = 1 and µ = 2. Here, along the x-axis, we take the values of $u\in(0,1).$ Three values of β have been considered, viz., $\beta=1.2, 1.4$, and 1.7 (presented from above).