1. Introduction
Finite-level M/G/1-type Markov chains are used for the stationary analysis of finite semi-Markovian queues (see e.g. [Reference Baiocchi3, Reference Baiocchi and Bléfari-Melazzi4, Reference Cuyt6, Reference Herrmann12]). However, it is not easy to derive a simple analytical expression for the stationary distribution of a finite-level M/G/1-type Markov chain, except in some special cases, such as when the matrix generating functions of the level increments are rational [Reference Akar, Oğuz and Sohraby1, Reference Kim and Kim15].
There are related studies on the asymptotics of finite-level quasi-birth-and-death processes (QBDs) and finite-level GI/M/1-type Markov chains in the infinite-level limit, where the upper boundary level goes to infinity. As is well known, the finite-level QBD is a special case of the finite-level GI/M/1-type Markov chain as well as the finite-level M/G/1-type Markov chain. Miyazawa et al. [Reference Miyazawa, Sakuma and Yamaguchi35] presented an asymptotic formula for the stationary probability of a finite-level QBD being in the upper boundary level. The asymptotic formula is used to study the asymptotic behavior of the loss probability in the MAP/MSP/c/ $K+c$ queue. Kim and Kim [Reference Kim and Kim16] extended the asymptotic formula in [Reference Miyazawa, Sakuma and Yamaguchi35] to the finite-level GI/M/1-type Markov chain.
Some researchers have studied the infinite-level limit of finite-level M/G/1-type Markov chains, focusing on the asymptotics of the loss probability in finite M/G/1-type queues, such as queues with Markovian arrival processes. Ishizaki and Takine [Reference Ishizaki and Takine13] established a direct relationship between the respective stationary distributions of a special finite-level M/G/1-type Markov chain and its infinite-level-limit chain (the corresponding infinite-level M/G/1-type Markov chain). Using this direct relationship, the authors obtained the loss probability in a finite M/G/1-type queue with geometrically distributed off-periods. Baiocchi [Reference Baiocchi3] derived a geometric asymptotic formula for the loss probability in the MAP/G/1/K queue through the asymptotic analysis of a finite-level M/G/1-type Markov chain with light-tailed level increments. Liu and Zhao [Reference Liu and Zhao22] presented power-law asymptotic formulas for the loss probability in an M/G/1/N queue with vacations, where the embedded queue length process is a special finite-level M/G/1-type Markov chain with a single background state.
As the discussion above shows, there are no previous studies on the exact convergence rate of the stationary distribution of a finite-level M/G/1-type Markov chain in the infinite-level limit; however, there are other studies [Reference Masuyama28, Reference Masuyama29, Reference Masuyama31, Reference Masuyama32] with related but different perspectives. These studies are concerned with upper bounds on the error of the last-column-block-augmented (LCBA) truncation approximation of the stationary distribution in block-structured Markov chains, including infinite-level M/G/1-type Markov chains. Note that the finite-level M/G/1-type Markov chain can be viewed as the LCBA truncation approximation to an infinite-level M/G/1-type Markov chain. Thus, using the results of these previous studies, we can obtain upper bounds for the difference between the respective stationary distributions of the finite-level and (corresponding) infinite-level M/G/1-type Markov chains.
The purpose of this study is to present a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain in the infinite-level limit. Subgeometric convergence is much slower than geometric (exponential) convergence; polynomial convergence is a typical example. The subgeometric convergence formula can be used, for example, to derive a subexponential asymptotic formula for the loss probability in the MAP/GI/1/N queues with and without vacations and their batch-arrival models (i.e., those fed by the batch Markovian arrival process (BMAP) [Reference Lucantoni23]). To save space, this paper presents the application only to the standard MAP/GI/1/N queue (without vacations).
The key to this study is a block-decomposition-friendly solution to the Poisson equation of the deviation matrix (see e.g. [Reference Coolen-Schrijner and van Doorn5]). We refer to the block-decomposition-friendly solution as the fundamental deviation matrix. The fundamental deviation matrix yields a difference formula for the respective stationary distributions of the finite-level M/G/1-type Markov chain and its infinite-level-limit chain. Furthermore, using the difference formula, we derive a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain as its upper boundary level goes to infinity. The subgeometric convergence formula requires a technical but not very restrictive condition (Assumption 5.10) for the subexponentiality of the integrated tail distribution of nonnegative level increments in steady state of the infinite-level-limit chain.
The rest of this paper consists of five sections. Section 2 describes infinite-level and finite-level M/G/1-type Markov chains. Section 3 discusses the second moment condition on the level increments of the infinite-level M/G/1-type Markov chain, which leads to the finiteness of the mean of the stationary distribution and to the finiteness of the mean first passage time to level zero. Section 4 introduces the fundamental deviation matrix of the infinite-level M/G/1-type Markov chain, and presents a closed block-decomposition form of the fundamental deviation matrix. Using the results of the previous sections, Section 5 proves the main theorem on the subgeometric convergence of the stationary distribution of the finite-level M/G/1-type Markov chain when we take its infinite-level limit. In addition, Section 5 provides an application of the main theorem to the subexponential asymptotics of the loss probability in the MAP/GI/1/N queue. Finally, Section 6 contains concluding remarks.
2. Model description
This section consists of three subsections. Section 2.1 provides basic definitions and notation. Sections 2.2 and 2.3 describe infinite-level and finite-level M/G/1-type Markov chains, respectively.
2.1. Basic definitions and notation
We begin by introducing symbols and notation for numbers. Let
\begin{equation*}
\mathbb{Z}_+ = \{0,1,2,\dots\},\qquad \mathbb{N} = \{1,2,3,\dots\},\qquad \mathbb{Z}_{\ge k} = \{k,k+1,k+2,\dots\},\qquad \mathbb{Z}_{[k,\ell]} = \{k,k+1,\dots,\ell\},
\end{equation*}
and let $\mathbb{M}_0$ and $\mathbb{M}_1$ denote
\begin{equation*}
\mathbb{M}_0 = \{1,2,\dots,M_0\},\qquad \mathbb{M}_1 = \{1,2,\dots,M_1\},
\end{equation*}
respectively, where $M_0, M_1 \in \mathbb{N}$ . For $x,y \in \mathbb{R}\,:\!=\,({-}\infty,\infty)$ , let $x \wedge y = \min(x,y)$ and let $\delta_{k,\ell}$ , $k,\ell \in \mathbb{R}$ , denote the Kronecker delta; that is, $\delta_{k,k} = 1$ and $\delta_{k,\ell} = 0$ for $k \neq \ell$ . Furthermore, let $\mathbb{1}({\cdot})$ denote the indicator function that takes the value of one if the statement within the parentheses is true; otherwise it takes the value of zero.
Next we describe our notation for vectors and matrices. All matrices are denoted by bold uppercase letters, and in particular, ${\boldsymbol{O}}$ and ${\boldsymbol{I}}$ denote the zero matrix and the identity matrix, respectively, with appropriate sizes (i.e., with appropriate numbers of rows and columns). In general, all row vectors are denoted by bold Greek lowercase letters, except for ${\boldsymbol{g}}$ (this exception follows from the convention in the matrix analytic methods pioneered by Neuts [Reference Neuts36]); and all column vectors are denoted by bold English lowercase letters. In particular, ${\boldsymbol{e}}$ denotes the column vector of 1’s of appropriate size.
In addition, we introduce more definitions for vectors and matrices. For any matrix (or vector), the absolute value operator $|{\cdot}|$ works on it elementwise, and $({\cdot})_{i,j}$ (resp. $({\cdot})_{i}$ ) denotes the (i, j)th (resp. ith) element of the matrix (resp. vector) within the parentheses. We then denote by $\|{\cdot}\|$ the total variation norm for vectors. Thus, $\|{\boldsymbol{s}} \| = \sum_{j} |({\boldsymbol{s}})_j|$ for any vector ${\boldsymbol{s}}$ . Furthermore, for any matrix (or vector) function ${\boldsymbol{Z}}({\cdot})$ and scalar function $f({\cdot})$ on $\mathbb{R}$ , we use the notation ${\boldsymbol{Z}}(x) = \overline{\boldsymbol{\mathcal{O}}}(f(x)) $ and ${\boldsymbol{Z}}(x) = \underline{\boldsymbol{\mathcal{O}}}(f(x))$ :
The symbols $\overline{\boldsymbol{\mathcal{O}}}({\cdot})$ and $\underline{\boldsymbol{\mathcal{O}}}({\cdot})$ are used for matrix- and vector-valued functions; for scalar functions, we use the standard symbols $O({\cdot})$ and $o({\cdot})$ instead. Finally, for any nonnegative matrix ${\boldsymbol{S}} \ge {\boldsymbol{O}}$ (including nonnegative vectors), we write ${\boldsymbol{S}} < \infty$ if every element of ${\boldsymbol{S}}$ is finite.
2.2. The infinite-level M/G/1-type Markov chain
This subsection introduces the infinite-level M/G/1-type Markov chain and the preliminary results concerning the chain which are used later in our analysis. First, we define the infinite-level M/G/1-type Markov chain along with the basic assumption for the existence and uniqueness of its stationary distribution. Next, we introduce the R- and G-matrices, which are helpful for our analysis in this study; for example, they allow us to describe Ramaswami’s recursion for the stationary distribution [Reference Ramaswami38].
We define the infinite-level M/G/1-type Markov chain as follows. Let $\{(X_n, J_n);\, n \in \mathbb{Z}_+\}$ denote a discrete-time Markov chain on state space $\mathbb{S}\,:\!=\, \bigcup_{k=0}^{\infty} \mathbb{L}_k$ , where $\mathbb{L}_k = \{k\} \times \mathbb{M}_{k \wedge 1}$ for $k \in \mathbb{Z}_+$ . The subset $\mathbb{L}_k$ of state space $\mathbb{S}$ is referred to as level k. Let ${\boldsymbol{P}}$ denote the transition probability matrix of the Markov chain $\{(X_n, J_n)\}$ , and assume that ${\boldsymbol{P}}$ is a stochastic matrix such that
Therefore, the component block matrices ${\boldsymbol{A}}(k)$ and ${\boldsymbol{B}}(k)$ satisfy the following:
The Markov chain $\{(X_n, J_n)\}$ is referred to as an M/G/1-type Markov chain (see [Reference Neuts36]). For convenience, we call this Markov chain the infinite-level M/G/1-type Markov chain or infinite-level chain for short, in order to distinguish it from its finite-level version (described later).
Throughout the paper (unless otherwise noted), we proceed under Assumption 2.1, the standard mean drift condition for the existence and uniqueness of the stationary distribution (see [Reference Asmussen2, Chapter XI, Proposition 3.1] and [Reference Zhao, Li and Braun40, Theorem 16]), as our basic assumption.
Assumption 2.1. (Standard mean drift condition.) Let
where ${\boldsymbol{A}}$ is a stochastic matrix owing to (2.2). Suppose that the following conditions are satisfied: (i) the stochastic matrices ${\boldsymbol{A}}$ and ${\boldsymbol{P}}$ (given in (2.1)) are irreducible; (ii) $\overline{{\boldsymbol{m}}}_{B}\,:\!=\,\sum_{k=1}^{\infty} k{\boldsymbol{B}}(k){\boldsymbol{e}} < \infty$ ; and (iii) $\sigma \,:\!=\,\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A} < 0$ , where $\boldsymbol{\varpi}$ denotes the unique stationary distribution vector of ${\boldsymbol{A}}$ .
Remark 2.2. The mean drift conditions [Reference Asmussen2, Chapter XI, Proposition 3.1] and [Reference Zhao, Li and Braun40, Theorem 16] are the same and are established for the positive recurrence (or equivalently, for the existence and uniqueness of the stationary distribution) of the irreducible GI/G/1-type Markov chain, which is a generalization of the M/G/1-type Markov chain considered in this paper. Both imply that if the condition (iii) of Assumption 2.1 is weakened so that $\sigma =\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A} \le 0$ , then the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and recurrent. We note that a stronger mean drift condition (than Assumption 2.1) for the M/G/1-type Markov chain is given in [Reference Neuts36, Theorem 3.2.1], where the G-matrix is assumed to be irreducible, and this assumption is more restrictive than the irreducibility of ${\boldsymbol{A}}$ .
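For illustration, the following minimal numerical sketch (Python with NumPy) checks Assumption 2.1(iii) for a small set of hypothetical blocks $\{{\boldsymbol{A}}(k)\}$; it assumes the usual convention $\overline{{\boldsymbol{m}}}_{A} = \sum_{k=-1}^{\infty} k {\boldsymbol{A}}(k) {\boldsymbol{e}}$, since the displayed definition of $\overline{{\boldsymbol{m}}}_{A}$ is not reproduced here.

```python
import numpy as np

# Hypothetical A-blocks of a small M/G/1-type chain (M_1 = 2 background states).
# The dictionary key k means A(k); the index starts at k = -1.
A_blocks = {
    -1: np.array([[0.3, 0.1], [0.2, 0.2]]),   # one level down
     0: np.array([[0.2, 0.1], [0.1, 0.2]]),   # within the same level
     1: np.array([[0.2, 0.1], [0.2, 0.1]]),   # one level up
}

A = sum(A_blocks.values())                     # A = sum_k A(k), stochastic
assert np.allclose(A.sum(axis=1), 1.0)

# Stationary vector varpi of A: solve varpi A = varpi, varpi e = 1.
M = A.shape[0]
lhs = np.vstack([A.T - np.eye(M), np.ones(M)])
rhs = np.zeros(M + 1); rhs[-1] = 1.0
varpi = np.linalg.lstsq(lhs, rhs, rcond=None)[0]

# Mean drift (assumed convention): m_A = sum_{k >= -1} k A(k) e.
m_A = sum(k * Ak @ np.ones(M) for k, Ak in A_blocks.items())
sigma = varpi @ m_A
print('sigma =', sigma, '(negative, so Assumption 2.1(iii) holds)')
```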
Assumption 2.1 ensures that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and positive recurrent, and therefore this chain has a unique stationary distribution vector, denoted by $\boldsymbol{\pi} = (\pi(k,i))_{(k,i)\in\mathbb{S}}$ . For later convenience, $\boldsymbol{\pi}$ is partitioned level-wise: $\boldsymbol{\pi} = (\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),{\dots})$ , where $\boldsymbol{\pi}(k) = (\pi(k,i))_{i \in \mathbb{M}_{k \wedge 1}}$ for $k \in \mathbb{Z}_+$ .
We conclude this subsection by defining the G- and R-matrices and describing Ramaswami’s recursion for the stationary distribution vector $\boldsymbol{\pi}$ . Let ${\boldsymbol{G}}\,:\!=\,(G_{i,j})_{i,j\in\mathbb{M}_1}$ denote an $M_1 \times M_1$ matrix such that
where $\tau_k = \inf\{n \in \mathbb{N}\,:\, X_n = k\}$ and $\mathbb{L}_{\ge k}= \bigcup_{\ell=k}^{\infty} \mathbb{L}_{\ell}$ for $k \in \mathbb{Z}_+$ . The conditions (i) and (iii) of Assumption 2.1 ensure that ${\boldsymbol{G}}$ is a stochastic matrix with a single closed communicating class [Reference Kimura, Daikoku, Masuyama and Takahashi17, Proposition 2.1] and thus a unique stationary distribution vector, denoted by ${\boldsymbol{g}}$ . In addition, using the G-matrix ${\boldsymbol{G}}$ , we define ${\boldsymbol{R}}_0(k)$ and ${\boldsymbol{R}}(k)$ , $k \in \mathbb{N}$ , as
respectively, where
We then describe Ramaswami’s recursion [Reference Ramaswami38]:
This is used later to prove Theorem 3.4.
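Although the displayed recursion is omitted here, the G-matrix itself can be computed numerically. The following sketch (Python, hypothetical blocks) iterates the fixed-point relation ${\boldsymbol{G}} = \sum_{m=-1}^{\infty} {\boldsymbol{A}}(m) {\boldsymbol{G}}^{m+1}$, i.e., the $z=1$ case of the matrix equation quoted later in the proof of Lemma 3.5; plain successive substitution is used for transparency, rather than the faster algorithms employed in practice.

```python
import numpy as np

# Hypothetical A-blocks (dictionary key k means A(k), k >= -1).
A_blocks = {
    -1: np.array([[0.3, 0.1], [0.2, 0.2]]),
     0: np.array([[0.2, 0.1], [0.1, 0.2]]),
     1: np.array([[0.2, 0.1], [0.2, 0.1]]),
}

def g_matrix(A_blocks, tol=1e-12, max_iter=10000):
    """Successive substitution for G = sum_{m >= -1} A(m) G^{m+1}."""
    M1 = A_blocks[0].shape[0]
    G = np.zeros((M1, M1))
    for _ in range(max_iter):
        G_new = sum(Ak @ np.linalg.matrix_power(G, k + 1)
                    for k, Ak in A_blocks.items())
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G

G = g_matrix(A_blocks)
print(G)
print('row sums:', G.sum(axis=1))   # close to one when the mean drift is negative
```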
2.3. The finite-level M/G/1-type Markov chain
This subsection introduces the finite-level M/G/1-type Markov chain and the fundamental results on its stationary distribution. We begin with the definition of the finite-level M/G/1-type Markov chain and then present a proposition on the uniqueness of its stationary distribution. We close by providing the definitions and convention associated with the finite-level M/G/1-type Markov chain and its stationary distribution. Throughout the paper, the symbol N is assumed to be an arbitrary positive integer, unless otherwise noted.
We provide the definition of the finite-level M/G/1-type Markov chain. For any $N \in \mathbb{N}$ , let $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big);\, n\in\mathbb{Z}_+\Big\}$ denote a discrete-time Markov chain on state space $\mathbb{L}_{\le N}\,:\!=\, \bigcup_{\ell=0}^N \mathbb{L}_{\ell}$ with transition probability matrix ${\boldsymbol{P}}^{(N)}$ given by
where
The Markov chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ is referred to as the finite-level M/G/1-type Markov chain or finite-level chain for short. Without loss of generality, we assume that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ is defined on the same probability space as that of the infinite-level chain $\{(X_n,J_n)\}$ .
Remark 2.3. The stochastic matrix ${\boldsymbol{P}}^{(N)}$ can be considered the last-column-block-augmented (LCBA) truncation of ${\boldsymbol{P}}$ (see [Reference Masuyama28, Reference Masuyama29, Reference Masuyama31, Reference Masuyama32]).
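Remark 2.3 can be mirrored numerically. The sketch below (Python) implements a generic LCBA truncation of a level-structured stochastic matrix; it assumes the usual LCBA convention that the probability mass discarded from each block row (levels above N) is added to the block column of level N, which is consistent with the tail sums $\overline{{\boldsymbol{A}}}(k)$ and $\overline{{\boldsymbol{B}}}(k)$ appearing later in (2.10). The scalar random-walk example at the end is hypothetical.

```python
import numpy as np

def lcba_truncation(P_blocks, levels, N):
    """Last-column-block-augmented (LCBA) truncation.

    P_blocks[(k, l)] is the block of transition probabilities from level k to
    level l (zero blocks may be omitted).  Levels 0..N are kept, and each
    row's discarded mass (levels > N) is added to the block column of level N,
    so every block row of the result is again stochastic.
    """
    out = {}
    for k in range(N + 1):
        for l in range(N):
            if (k, l) in P_blocks:
                out[(k, l)] = P_blocks[(k, l)]
        tail = [P_blocks[(k, l)] for l in levels if l >= N and (k, l) in P_blocks]
        if tail:
            out[(k, N)] = sum(tail)
    return out

# Tiny usage: a scalar (single-background-state) random walk on levels 0..K.
K, p, q = 50, 0.3, 0.5                      # up / down probabilities; 1-p-q = stay
P_blocks = {}
for k in range(K + 1):
    P_blocks[(k, min(k + 1, K))] = np.array([[p]])
    P_blocks[(k, k)] = P_blocks.get((k, k), 0) + np.array([[1.0 - p - q]])
    down = max(k - 1, 0)
    P_blocks[(k, down)] = P_blocks.get((k, down), 0) + np.array([[q]])

PN = lcba_truncation(P_blocks, range(K + 1), N=10)
row_sums = [sum(PN[(k, l)][0, 0] for l in range(11) if (k, l) in PN) for k in range(11)]
print(row_sums)                             # each entry equals 1.0
```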
Proposition 2.4 presents the basic results on the recurrence of $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ and the uniqueness of its stationary distribution.
Proposition 2.4. If ${\boldsymbol{P}}$ is irreducible, then the following statements hold:
-
(i) For all sufficiently large $N \in \mathbb{N}$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ can reach any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ .
-
(ii) If ${\boldsymbol{P}}$ is recurrent, then, for each $N \in \mathbb{N}$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ reaches level zero $\mathbb{L}_0$ from any state in its state space $\mathbb{L}_{\le N}$ with probability one.
-
(iii) If ${\boldsymbol{P}}$ is positive recurrent, then the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has a unique stationary distribution vector for all sufficiently large $N \in \mathbb{N}$ .
Proof. See Appendix A.
Remark 2.5. According to Remark 2.2, ${\boldsymbol{P}}$ is recurrent under the conditions (i) and (ii) of Assumption 2.1 together with $\sigma \le 0$ , and ${\boldsymbol{P}}$ is positive recurrent under Assumption 2.1.
For later use, we introduce some definitions associated with the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ . Let $\boldsymbol{\pi}^{(N)}\,:\!=\,\big(\pi^{(N)}(k,i)\big)_{(k,i)\in\mathbb{L}_{\le N}}$ denote the stationary distribution vector, which is uniquely determined for all sufficiently large $N \in \mathbb{N}$ (see Proposition 2.4). By definition,
For later use, $\boldsymbol{\pi}^{(N)}$ is partitioned as
where $\boldsymbol{\pi}^{(N)}(k) = \big(\pi^{(N)}(k,i)\big)_{i \in \mathbb{M}_{k \wedge 1}}$ for $k \in \mathbb{Z}_{[0,N]}$ .
Finally, we introduce our convention for performing calculations (such as addition and multiplication) over finite- and infinite-dimensional matrices (including vectors). As mentioned in the introduction, the main purpose of this paper is to study the convergence of $\big\{\boldsymbol{\pi}^{(N)};\, N \in\mathbb{N}\big\}$ . Thus, we consider the situation where the probability vectors $\boldsymbol{\pi}^{(N)}$ , $N \in\mathbb{N}$ , of different finite dimensions converge to a certain probability vector of infinite dimension (which is expected to be equal to $\boldsymbol{\pi}$ ). To facilitate this study, the following convention is introduced: a finite-dimensional matrix (or vector) is extended (if necessary) to an infinite-dimensional matrix by appending zeros to it, keeping its original elements in their original positions. Following this convention, for example, $\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}$ and ${\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}$ are well-defined.
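This convention is also convenient in numerical experiments; here is a minimal sketch (Python, hypothetical vectors) of the zero-padding and of the resulting total variation norm.

```python
import numpy as np

def pad(v, length):
    """Extend a finite vector with zeros, keeping the original entries in place."""
    out = np.zeros(length)
    out[:len(v)] = v
    return out

# Hypothetical stationary vectors: pi^{(N)} for a small N, and the
# infinite-level pi truncated far out for numerical purposes only.
pi_N   = np.array([0.50, 0.30, 0.20])
pi_inf = np.array([0.48, 0.28, 0.16, 0.05, 0.02, 0.01])

L = max(len(pi_N), len(pi_inf))
diff = pad(pi_N, L) - pad(pi_inf, L)        # pi^{(N)} - pi under the convention
print('total variation norm:', np.abs(diff).sum())
```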
3. Second-order moment condition on level increments in the infinite-level chain
This section introduces a condition on the level increments of the infinite-level chain (Assumption 3.1 below). For convenience, we call it the second-order moment condition. As shown later, the second-order moment condition ensures that the mean of the stationary distribution is finite and that the mean first passage time to level zero is finite; these facts lead to the proof of the convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ .
Assumption 3.1. (Second-order moment condition on level increments.)
Remark 3.2. Assumption 3.1 implies that
3.1. Finiteness of the mean of the stationary distribution
In this subsection, we first establish a certain Foster–Lyapunov drift condition (called the drift condition for short). Using this drift condition, we show that the second-order moment condition is equivalent to $\sum_{k=1}^{\infty}k\boldsymbol{\pi}(k){\boldsymbol{e}} < \infty$ , and also show that the second-order moment condition implies $\sup_{N \in \mathbb{N}}\sum_{k=1}^N k \boldsymbol{\pi}^{(N)}(k) {\boldsymbol{e}} < \infty$ .
Consider the Poisson equation
to establish the desired drift condition under Assumption 3.1. Let ${\boldsymbol{a}}$ denote
where $c\in \mathbb{R}$ is an arbitrary constant. It then follows from ${\boldsymbol{A}}{\boldsymbol{e}}={\boldsymbol{e}}$ , $\sigma=\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A}$ , and $\boldsymbol{\varpi}({\boldsymbol{I}} - {\boldsymbol{A}} + {\boldsymbol{e}}\boldsymbol{\varpi})^{-1}=\boldsymbol{\varpi}$ that
Therefore, ${\boldsymbol{a}}$ is a solution to the Poisson equation (3.2) (see e.g. [Reference Dendievel, Latouche and Liu7, Reference Makowski and Shwartz25]) and is unique up to constant multiples (see [Reference Glynn and Meyn10, Proposition 1.1]). For later use, fix $c > 0$ sufficiently large so that ${\boldsymbol{a}} \ge {\textbf{0}}$ , and then let ${\boldsymbol{v}}\,:\!=\,(v(k,i))_{(k,i)\in\mathbb{S}}$ and ${\boldsymbol{f}}\,:\!=\,(f(k,i))_{(k,i)\in\mathbb{S}}$ denote nonnegative column vectors such that
respectively, where the size $M_0$ of ${\boldsymbol{v}}(0)$ and ${\boldsymbol{f}}(0)$ is, in general, different from the size $M_1$ of ${\boldsymbol{v}}(k)$ and ${\boldsymbol{f}}(k)$ , $k \in \mathbb{N}$ . Furthermore, let $\textbf{1}_{\mathbb{C}}\,:\!=\,(1_{\mathbb{C}}(k,i))_{(k,i)\in\mathbb{S}}$ , $\mathbb{C} \subseteq \mathbb{S}$ , denote a column vector such that
When $\mathbb{C}$ consists of a single state $(\ell,j) \in \mathbb{S}$ , that is, $\mathbb{C} = \{(\ell,j)\}$ , we write $\textbf{1}_{(\ell,j)}$ for $\textbf{1}_{\{(\ell,j)\}}$ .
The following lemma presents our desired drift condition.
Lemma 3.3. If Assumptions 2.1 and 3.1 hold, then there exist some $b \in (0,\infty)$ and $K \in \mathbb{N}$ such that
where $\mathbb{L}_{\le k} = \bigcup_{\ell=0}^k \mathbb{L}_{\ell}$ for $k \in \mathbb{Z}_+$ .
Proof. To prove this lemma, it suffices to show that
where ${\boldsymbol{P}}(k;\,\ell)$ , $k,\ell \in \mathbb{Z}_+$ , denotes a submatrix of ${\boldsymbol{P}}$ that contains the transition probabilities from level k to level $\ell$ . Indeed, (3.8) implies that there exist some $b \in (0,\infty)$ and $K \in \mathbb{N}$ such that
Furthermore, ${\boldsymbol{P}}^{(N)}{\boldsymbol{v}} \le {\boldsymbol{P}}{\boldsymbol{v}}$ for $N \in \mathbb{N}$ , which follows from (2.1) and (2.9) together with the fact that ${\boldsymbol{v}}(1) \le {\boldsymbol{v}}(2) \le {\boldsymbol{v}}(3) \le \cdots$ owing to (3.5).
First, we prove (3.8a). It follows from (2.1), (2.4), and (3.5) that, for all $k \in \mathbb{Z}_{\ge 2}$ ,
where the last equality holds because $\overline{{\boldsymbol{m}}}_{A} + {\boldsymbol{A}}{\boldsymbol{a}} ={\boldsymbol{a}} + \sigma{\boldsymbol{e}}$ owing to (3.4). It also follows from (3.9) and Assumption 3.1 that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell)$ is finite for each $k \in \mathbb{Z}_{\ge 2}$ . Similarly, we can confirm that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(0;\,\ell){\boldsymbol{v}}(\ell)$ and $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(1;\,\ell){\boldsymbol{v}}(\ell)$ are finite.
Next, we prove (3.8b). Using (3.5) and (3.6), we rewrite (3.9) as
Clearly, there exists some $K \in \mathbb{N}$ such that
Combining this and (3.10) results in (3.8b). The proof is complete.
Lemma 3.3 leads to the results on the finite means of the stationary distribution vectors $\boldsymbol{\pi}$ and $\boldsymbol{\pi}^{(N)}$ of the infinite-level and finite-level chains.
Theorem 3.4. Under Assumption 2.1, the following are true:
-
(i) Assumption 3.1 holds if and only if
(3.11) \begin{eqnarray}\sum_{k=1}^{\infty} k \boldsymbol{\pi}(k) {\boldsymbol{e}} &<& \infty.\end{eqnarray} -
(ii) If Assumption 3.1 holds, then
(3.12) \begin{eqnarray}\sup_{N \in \mathbb{N}}\sum_{k=1}^N k \boldsymbol{\pi}^{(N)}(k) {\boldsymbol{e}} &<& \infty.\end{eqnarray}
Proof. The ‘only if’ part of the statement (i), that is, ‘Assumption 3.1 implies (3.11)’, can be proved in a similar way to the proof of the statement (ii). Thus in what follows, we prove the statement (ii) and the ‘if’ part of the statement (i).
We prove the statement (ii). Suppose that Assumption 3.1 holds (in addition to Assumption 2.1) and thus Lemma 3.3 holds. Pre-multiplying both sides of (3.7) by $\boldsymbol{\pi}^{(N)}$ and using (2.11), we obtain
and thus $\boldsymbol{\pi}^{(N)}{\boldsymbol{f}} \le b$ for all $N \in \mathbb{N}$ . Combining this inequality and (3.6) yields (3.12). The statement (ii) has been proved.
We prove the ‘if’ part of the statement (i). To this end, suppose that (3.11) holds. It then follows from (2.8) that
which yields $\sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}} < \infty$ and $\sum_{k=1}^{\infty} k{\boldsymbol{R}}(k){\boldsymbol{e}}< \infty$ . It also follows from (2.6), ${\boldsymbol{G}}{\boldsymbol{e}}={\boldsymbol{e}}$ , and $({\boldsymbol{I}} - \boldsymbol{\Phi}(0))^{-1}{\boldsymbol{e}} \ge {\boldsymbol{e}}$ that
which leads to
and thus $\sum_{\ell=1}^{\infty} \ell^2{\boldsymbol{A}}(\ell){\boldsymbol{e}} < \infty$ . Similarly, we can prove that $\sum_{\ell=1}^{\infty} \ell^2{\boldsymbol{B}}(\ell){\boldsymbol{e}} < \infty$ by combining (2.5) and $\sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}} < \infty$ . Consequently, the ‘if’ part of the statement (i) has been proved.
3.2. Finiteness of the mean first passage time to level zero
This subsection presents basic results on the mean first passage time to level zero of the infinite-level chain. First, we introduce some definitions needed to derive these results. We then show that the mean first passage time to level zero is essentially linear in the initial level. We also show that the second-order moment condition (Assumption 3.1) is equivalent to the finiteness of the mean first passage time to level zero in steady state. These results are used in the following sections (some of them may already exist in the literature, but we have not been able to find them, and therefore we provide them together with their proofs for the convenience of the reader).
We present some definitions related to the mean first passage time to level zero. Let ${\boldsymbol{u}}(k)\,:\!=\,(u(k,i))_{i\in\mathbb{M}_{k\wedge1}}$ , $k\in\mathbb{Z}_+$ , denote a column vector such that
where $\tau_k = \inf\{n \in \mathbb{N}\,:\, X_n = k\}$ and $\mathbb{E}_{(k,i)}[{\cdot}] = \mathbb{E}[{\cdot}\mid (X_0, J_0)=(k,i)]$ for $k \in \mathbb{Z}_+$ and $i \in \mathbb{M}_{k \wedge 1}$ . Let $\widehat{{\boldsymbol{G}}}_1(z)$ , $z \in [0,1]$ , denote an $M_1 \times M_0$ matrix such that
Furthermore, let $\widehat{{\boldsymbol{G}}}(z)$ , $z \in [0,1]$ , denote an $M_1 \times M_1$ matrix such that
Note that the infinite-level M/G/1-type Markov chain has level homogeneity of transitions except for level zero. Therefore, for all $k\in\mathbb{N}$ ,
and $\widehat{{\boldsymbol{G}}}(1)$ is equal to the G-matrix ${\boldsymbol{G}}$ . In addition, it follows from (3.13)–(3.15) that
The following lemma shows that the mean first passage time to level zero from level k has its dominant term linear in k.
Lemma 3.5. Suppose that Assumption 2.1 holds. We then have
and thus
Proof. First, we prove (3.17b), which leads directly to the proof of (3.18). The matrix generating function $\widehat{{\boldsymbol{G}}}(z)$ is the minimal nonnegative solution to the matrix equation $\widehat{\mathscr{G}}(z)= z\sum_{m=-1}^{\infty}{\boldsymbol{A}}(m) \big\{ \widehat{\mathscr{G}}(z) \big\}^{m+1}$ (see [Reference Neuts36, Theorems 2.2.1 and 2.2.2]), and thus
which is equivalent to [Reference Neuts36, Equation (2.2.9)] with $s=0$ . Equation (3.19) is rewritten as
Similarly, we have
which is equivalent to [Reference Neuts36, Equation (2.4.3)] with $s=0$ . Combining (3.20), (3.21), and (2.3) yields
Substituting (3.22) into (3.16) and using $\widehat{{\boldsymbol{G}}}(1){\boldsymbol{e}}={\boldsymbol{G}}{\boldsymbol{e}}={\boldsymbol{e}}$ , we obtain
Note here (see [Reference Neuts36, Equations (3.1.3), (3.1.12), and (3.1.14)]) that
where the second equality is due to the fact that ${\boldsymbol{g}}({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}})^{-1}=\boldsymbol{\varpi}/({-}\sigma)$ (see [Reference Neuts36, Equation (3.1.15)]). Inserting (3.24) into (3.23) results in
which shows that (3.17b) holds.
Next, we prove (3.17a). Let $\widehat{{\boldsymbol{K}}}(z)$ , $z \in [0,1]$ , denote an $M_0 \times M_0$ matrix such that
From (3.13), we then have
We also have
which is equivalent to [Reference Neuts36, Equation (2.4.8)] with $s=0$ . Combining (3.26) and (3.22) yields
Substituting (3.27) into (3.25) and using (3.23) and $\sum_{m=0}^{\infty}{\boldsymbol{B}}(m){\boldsymbol{e}} = {\boldsymbol{e}}$ , we obtain
Finally, inserting (3.17b) into (3.28) leads to (3.17a). The proof is complete.
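The linear growth stated in Lemma 3.5 is easy to observe numerically, even in the simplest scalar special case. The sketch below (Python) simulates a hypothetical single-background-state chain, namely a random walk on $\mathbb{Z}_+$ that moves up with probability p, down with probability q, and stays put otherwise; for this skip-free-to-the-left walk the mean drift is $p - q < 0$, and the mean first passage time from level k to level zero equals $k/(q-p)$ exactly, which serves as the linear benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.2, 0.4                              # up / down probabilities (drift p - q < 0)

def passage_time_to_zero(k, rng):
    """One simulated first passage time to level 0, starting from level k."""
    x, n = k, 0
    while x > 0:
        u = rng.random()
        x += 1 if u < p else (-1 if u < p + q else 0)
        n += 1
    return n

for k in (5, 20, 80):
    est = np.mean([passage_time_to_zero(k, rng) for _ in range(2000)])
    print(f'k = {k:3d}:  simulated mean = {est:7.1f},  k/(q - p) = {k / (q - p):7.1f}')
```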
Lemma 3.5, together with Theorem 3.4, yields Theorem 3.6 below, which shows that the second-order moment condition (Assumption 3.1) is equivalent to the finiteness of the mean first passage time to level zero in steady state.
Theorem 3.6. Suppose that Assumption 2.1 is satisfied. Assumption 3.1 holds if and only if
Proof. The constant $-\sigma$ in (3.18) is positive and finite. Indeed, it follows from (2.4) and Assumption 2.1 that
Therefore, (3.17) implies that (3.29) is equivalent to (3.11). Equation (3.11) is equivalent to Assumption 3.1 owing to Theorem 3.4(i). The proof is complete.
4. Fundamental deviation matrix of the infinite-level chain
This section has two subsections. Section 4.1 defines the fundamental deviation matrix of the infinite-level chain as a block-decomposition-friendly solution to the Poisson equation
which is also solved by the deviation matrix
if it exists (see [Reference Coolen-Schrijner and van Doorn5] and [Reference Dendievel, Latouche and Liu7, Lemma 2.7]). The deviation matrix ${\boldsymbol{D}}$ satisfies the constraint
while the fundamental deviation matrix satisfies another constraint (shown later). Furthermore, provided that the infinite-level chain is irreducible and positive recurrent, the fundamental deviation matrix always exists, whereas the deviation matrix ${\boldsymbol{D}}$ does not necessarily exist. In this sense, the fundamental deviation matrix is more fundamental than the deviation matrix ${\boldsymbol{D}}$. Section 4.2 presents a closed block-decomposition form of the fundamental deviation matrix, which contributes to the analysis in the next section.
4.1. A block-decomposition-friendly solution to the Poisson equation
In this subsection, after some preparations, we define the fundamental deviation matrix. We then show that the fundamental deviation matrix is a solution to the Poisson equation (4.1). The fundamental deviation matrix has a finite base set (as its parameters) and is suitable for block decomposition by choosing the base set appropriately according to the block structure of ${\boldsymbol{P}}$ . We also establish upper bounds for the fundamental deviation matrix and for any solution to the Poisson equation. Finally, we show a relationship between the fundamental deviation matrix and the deviation matrix.
We make some preparations to describe the fundamental deviation matrix. Let $\mathbb{A}$ denote an arbitrary finite subset of $\mathbb{S}$ , and $\mathbb{B} = \mathbb{S} \setminus \mathbb{A}$ , and partition ${\boldsymbol{P}}$ as
Let $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ denote
which is considered to be the transition probability matrix of a Markov chain obtained by observing the infinite-level chain $\{(X_n,J_n)\}$ when it is in $\mathbb{A}$ (see [Reference Zhao39, Definition 1 and Theorem 2]). Since ${\boldsymbol{P}}$ is an irreducible and positive recurrent stochastic matrix, so is $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ . Therefore, $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ has a unique stationary distribution vector, denoted by $\widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \,:\!=\,(\widetilde{\pi}_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{A}}$ , which is given by
Finally, let $T(\mathbb{A}) = \inf\{n \in \mathbb{N}\,:\, (X_n,J_n) \in \mathbb{A}\}$ and $u_{\mathbb{A}}(k,i) = \mathbb{E}_{(k,i)}[T(\mathbb{A})]$ for $(k,i) \in \mathbb{S}$ .
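For finite examples, the watched (censored) chain can be computed explicitly. The sketch below (Python) assumes the standard censoring formula $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}} = {\boldsymbol{P}}(\mathbb{A};\,\mathbb{A}) + {\boldsymbol{P}}(\mathbb{A};\,\mathbb{B})\,({\boldsymbol{I}} - {\boldsymbol{P}}(\mathbb{B};\,\mathbb{B}))^{-1}\,{\boldsymbol{P}}(\mathbb{B};\,\mathbb{A})$ (cf. [Reference Zhao39]), since the display defining $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ is not reproduced above; the $4 \times 4$ matrix used at the end is hypothetical, and for the infinite-level chain ${\boldsymbol{P}}$ would first have to be replaced by a large finite approximation.

```python
import numpy as np

def censored_matrix(P, A_idx):
    """Transition matrix of the chain watched only on the index set A_idx.

    Assumed (standard) censoring formula:
        P_A = P[A, A] + P[A, B] (I - P[B, B])^{-1} P[B, A],
    where B is the complement of A and P is a finite stochastic matrix.
    """
    n = P.shape[0]
    A = np.array(sorted(A_idx))
    B = np.array([i for i in range(n) if i not in set(A_idx)])
    P_AA, P_AB = P[np.ix_(A, A)], P[np.ix_(A, B)]
    P_BA, P_BB = P[np.ix_(B, A)], P[np.ix_(B, B)]
    return P_AA + P_AB @ np.linalg.solve(np.eye(len(B)) - P_BB, P_BA)

# Tiny usage with a hypothetical 4-state stochastic matrix, censored on {0, 1}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.2, 0.1, 0.5]])
P_tilde = censored_matrix(P, [0, 1])
print(P_tilde)
print('row sums:', P_tilde.sum(axis=1))      # each row sums to one
```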
We define the fundamental deviation matrix of the infinite-level chain. Fix $\boldsymbol{\alpha}\,:\!=\,(\alpha_1,\alpha_2) \in \mathbb{S}$ arbitrarily, and let ${\boldsymbol{H}}_{\mathbb{A}}\,:\!=\,(H_{\mathbb{A}}(k,i;\,\ell,j))_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ denote a matrix such that
where
Note here that $\{(X_n,J_n)\}$ is irreducible and positive recurrent. It then follows that
Note also that $\mathbb{A}$ is a finite subset of $\mathbb{S}$ . Therefore, the matrix ${\boldsymbol{H}}_{\mathbb{A}}$ is well-defined. We refer to this ${\boldsymbol{H}}_{\mathbb{A}}$ as the fundamental deviation matrix with finite base set $\mathbb{A}$ or as the fundamental deviation matrix for short.
The following theorem shows that the fundamental deviation matrix ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1) and has a closed block-decomposition form.
Theorem 4.1. Suppose that Assumption 2.1 holds, and fix $\boldsymbol{\alpha}=(\alpha_1,\alpha_2) \in \mathbb{S}$ arbitrarily. For any finite $\mathbb{A} \subset \mathbb{S}$ , ${\boldsymbol{H}}_{\mathbb{A}}$ is independent of $\boldsymbol{\alpha}$ and is the unique solution to the Poisson equation (4.1) with the constraint
Furthermore, partition ${\boldsymbol{H}}_{\mathbb{A}}$ as
We then have
respectively, where ${\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A}) = (u_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{A}}$ and ${\boldsymbol{u}}_{\mathbb{A}}(\mathbb{B}) = (u_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{B}}$ .
Proof. See Appendix B.
Remark 4.2. Equation (4.7a) is a closed form for ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ , and thus substituting (4.7a) into (4.7b) yields a closed form for ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})$ . Furthermore, if $\mathbb{A} = \mathbb{L}_0$ , then these block-decomposition forms are expressed with probabilistically interpretable matrices (see Corollary 4.7).
Remark 4.3. Theorem 4.1 extends immediately to the general class of Markov chains on a countable state space, since the proof of this theorem does not depend on the M/G/1-type structure. Related results can be found in [Reference Liu, Liu and Zhao19, Theorem 3.2].
The following lemma gives respective upper bounds for $|{\boldsymbol{H}}_{\mathbb{A}}|{\boldsymbol{e}}$ and any solution to the Poisson equation (4.1) by a drift condition different from the one presented in Lemma 3.3.
Lemma 4.4. Suppose that Assumption 2.1 is satisfied. Let ${\boldsymbol{v}}^{\prime}\,:\!=\,(v^{\prime}(k,i))_{(k,i) \in \mathbb{S}}$ denote a column vector such that
where ${\boldsymbol{a}} \ge {\textbf{0}}$ is given in (3.3). The following statements then hold:
-
(i) There exist some $b^{\prime} \in (0,\infty)$ and $K^{\prime} \in \mathbb{N}$ such that
(4.9) \begin{align}{\boldsymbol{P}}^{(N)}{\boldsymbol{v}}^{\prime} \le {\boldsymbol{P}}{\boldsymbol{v}}^{\prime} &\le {\boldsymbol{v}}^{\prime} - {\boldsymbol{e}} + b^{\prime} \textbf{1}_{\mathbb{L}_{\le K^{\prime}}}& \quad &{for\ all\ N\ \in\ \mathbb{N}}.\end{align} -
(ii) Any solution ${\boldsymbol{X}}$ to the Poisson equation (4.1) satisfies $|{\boldsymbol{X}}| \le C_0{\boldsymbol{v}}^{\prime}{\boldsymbol{e}}^{\top}$ for some $C_0 > 0$ . Furthermore, $\boldsymbol{\pi}|{\boldsymbol{X}}| < \infty$ under Assumption 3.1.
-
(iii) For each finite $\mathbb{A} \subset \mathbb{S}$ , there exists some $C_{\mathbb{A}} > 0$ such that $|{\boldsymbol{H}}_{\mathbb{A}}|{\boldsymbol{e}} \le C_{\mathbb{A}}{\boldsymbol{v}}^{\prime}$ .
Proof. See Appendix C.
The following proposition shows that the deviation matrix ${\boldsymbol{D}}$ does not necessarily exist even if ${\boldsymbol{H}}_{\mathbb{A}}$ does, and that ${\boldsymbol{D}}$ (if it exists) is expressed by ${\boldsymbol{H}}_{\mathbb{A}}$ .
Proposition 4.5. Suppose that Assumption 2.1 is satisfied and ${\boldsymbol{P}}$ is aperiodic.
-
(i) Assumption 3.1 holds if and only if ${\boldsymbol{D}}$ exists.
-
(ii) If ${\boldsymbol{D}}$ exists, then it is the unique solution to the Poisson equation (4.1) with the constraint (4.3) such that $\boldsymbol{\pi} |{\boldsymbol{X}}| < \infty$ , and furthermore,
(4.10) \begin{align}{\boldsymbol{D}} = ({\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}){\boldsymbol{H}}_{\mathbb{A}}\quad {for\ any\ finite\ \mathbb{A}\ \subset\ \mathbb{S}}.\end{align}
Proof. See Appendix D.
Remark 4.6. A similar and general version of Proposition 4.5 is presented in [Reference Liu, Liu and Zhao19, Corollary 3.1].
We note that the deviation matrix has been used for perturbation analysis of block-structured Markov chains (see [Reference Dendievel, Latouche and Liu7, Reference Liu, Wang and Xie20, Reference Liu, Wang and Zhao21]). However, the deviation matrix requires the aperiodicity of the transition probability matrix, and such aperiodicity is not necessary for the analysis in this paper. Therefore, we use the fundamental deviation matrix ${\boldsymbol{H}}_{\mathbb{A}}$ instead of the deviation matrix ${\boldsymbol{D}}$ .
4.2. Closed block-decomposition form of the fundamental deviation matrix
Corollary 4.7 (which follows from Theorem 4.1) shows that the fundamental deviation matrix with the base set $\mathbb{L}_0$ , i.e., ${\boldsymbol{H}}_{\mathbb{L}_0}=(H_{\mathbb{L}_0}(k,i;\,\ell,j))_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ , can be expressed in a block-decomposition form with probabilistically interpretable matrices. For simplicity, we omit the subscript ‘ $\mathbb{L}_0$ ’ of ${\boldsymbol{H}}_{\mathbb{L}_0}$ (and thus its element $H_{\mathbb{L}_0}(k,i;\,\ell,j)$ ), since we do not consider any base set other than $\mathbb{L}_0$ in the following.
Corollary 4.7. Suppose that Assumption 2.1 holds. Let
which is the $(k,\ell)$ -block of ${\boldsymbol{H}}\,:\!=\,{\boldsymbol{H}}_{\mathbb{L}_0}$ . We then have, for $\ell \in \mathbb{Z}_+$ ,
where $\boldsymbol{\kappa}$ denotes the unique stationary distribution vector of ${\boldsymbol{K}}\,:\!=\,\widehat{{\boldsymbol{K}}}(1)$ , and where ${\boldsymbol{F}}_+(k;\,\ell)$ , $k,\ell\in\mathbb{Z}_+$ , denotes an $\mathbb{M}_{k \wedge 1} \times \mathbb{M}_{\ell \wedge 1}$ matrix such that
and thus, for all $k,\ell \in \mathbb{Z}_+$ such that $k \wedge \ell = 0$ ,
Proof. We first prove (4.11a). By definition, ${\boldsymbol{K}} = \widetilde{{\boldsymbol{P}}}_{\mathbb{L}_0}$ , and thus $\boldsymbol{\kappa}$ is the stationary probability vector of $\widetilde{{\boldsymbol{P}}}_{\mathbb{L}_0}$ . Therefore, letting $\mathbb{A} = \mathbb{L}_0$ in (4.7a), we have
Combining this equation with (4.13a) leads to
It also follows from (4.12) and [Reference Meyn and Tweedie34, Theorem 10.0.1] that
Substituting (4.15) into (4.14) yields (4.11a).
Next, we prove (4.11b). Letting $\mathbb{A} = \mathbb{L}_0$ in (4.7b) yields
This equation together with (4.13c) leads to
Note here ([Reference Zhao, Li and Braun40, Theorem 9]) that
and thus
where $\boldsymbol{\Phi}(0)$ is given in (2.7). Note also that ${\boldsymbol{F}}_{+}(k;\,0) = {\boldsymbol{O}}$ for $k\in\mathbb{N}$ , which is shown in (4.13c). To emphasize this exceptional case, we write
Substituting (4.18) and (4.19) into (4.16) results in (4.11b). The proof is complete.
Remark. Let $\widetilde{{\boldsymbol{H}}}(k;\,\ell)=\big(\widetilde{H}(k,i;\,\ell,j)\big)_{(i,j) \in \mathbb{M}_{k\wedge1} \times \mathbb{M}_{\ell\wedge1}}$ for $k,\ell \in \mathbb{Z}_+$ . Letting $\mathbb{A} = \mathbb{L}_0$ in (B.6) and following the proof of Corollary 4.7, we obtain equations similar to (4.11a) and (4.11b): for $\ell \in \mathbb{Z}_+$ ,
5. Subgeometric convergence in the infinite-level limit
The main purpose of this section is to prove the subgeometric convergence of the level-wise difference $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$. This section consists of four subsections. Section 5.1 presents a difference formula for $\boldsymbol{\pi}^{(N)}$ and $\boldsymbol{\pi}$, and then Section 5.2 shows the uniform convergence of $\|\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}\|$ under Assumption 3.1. Based on these results, Section 5.3 presents a subgeometric convergence formula for $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ under an additional condition (Assumption 5.10). The subgeometric convergence formula is presented in Theorem 5.12, which is the main theorem of this paper. Section 5.4 applies the main theorem to derive a subexponential asymptotic formula for the loss probability in the MAP/GI/1/N queue.
5.1. A difference formula for the finite- and infinite-level stationary distributions
This subsection presents a difference formula for $\boldsymbol{\pi}^{(N)}$ and $\boldsymbol{\pi}$ with the fundamental deviation matrix ${\boldsymbol{H}}$ . The following two matrices are needed to describe the difference formula:
where $\overline{{\boldsymbol{A}}}(k)$ and $\overline{{\boldsymbol{B}}}(k)$ are given in (2.10a) and (2.10b), respectively.
Lemma 5.1. (Difference formula.) Suppose that Assumption 2.1 holds; then
For $k \in \mathbb{Z}_{[0,N]}$ , we also have
where
Proof. Equation (5.2) is proved as follows. Theorem 4.1 implies that ${\boldsymbol{H}} = {\boldsymbol{H}}_{\mathbb{L}_0}$ is a solution to the Poisson equation (4.1) and thus $({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$ . Combining this equation with $\boldsymbol{\pi}^{(N)} =\boldsymbol{\pi}^{(N)}{\boldsymbol{P}}^{(N)}$ and $\boldsymbol{\pi}{\boldsymbol{e}}=1$ yields
which shows that (5.2) holds.
We start the proof of (5.3) with the level-wise expression for the difference formula (5.2). From (2.1) and (2.9), we have
With this equation, we decompose (5.2) into level-wise expressions: for $k \in \mathbb{Z}_{[0,N]}$ ,
To proceed further, we rewrite the term ${\boldsymbol{H}}(N;\,k) - {\boldsymbol{H}}(n;\,k)$ in (5.5) by using ${\boldsymbol{G}}$ and related matrices (including vectors) introduced in Section 3. Combining (4.11b) and (4.17), we obtain, for $n \in \mathbb{Z}_{\ge N+1}$ and $k \in \mathbb{Z}_{[0,N]}$ ,
Using (3.17b), we rewrite ${\boldsymbol{u}}(n)-{\boldsymbol{u}}(N)$ in (5.6) as
Applying (5.7) to (5.6), we obtain, for $n \in \mathbb{Z}_{\ge N+1}$ and $k \in \mathbb{Z}_{[0,N]}$ ,
where the second equality follows from the fact that the first three terms on the right-hand side are expressed by ${\boldsymbol{S}}^{(N)}(n;\,k)$ defined in (5.4).
We are ready to prove (5.3). Substituting (5.8) into (5.5) results in the following: for $k \in \mathbb{Z}_{[0,N]}$ ,
Note here that, for $\ell\in\mathbb{Z}_{[1,N]}$ ,
and similarly,
Remark 5.2. In the same way as for (5.2), we can derive a similar difference formula for any solution ${\boldsymbol{X}}$ to the Poisson equation (4.1):
Note that (5.12) holds for ${\boldsymbol{X}}=\widetilde{{\boldsymbol{H}}}$ defined in (B.1) and for ${\boldsymbol{X}}={\boldsymbol{D}}$ defined in (4.2) (if it exists); the latter case is presented in [Reference Heidergott11, Equation (3)].
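For the reader's convenience, we record the short calculation behind (5.2) once more; it uses only the Poisson equation $({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$, the stationarity relation $\boldsymbol{\pi}^{(N)} = \boldsymbol{\pi}^{(N)}{\boldsymbol{P}}^{(N)}$, and $\boldsymbol{\pi}^{(N)}{\boldsymbol{e}} = 1$ (interchanges of summation are taken for granted in this sketch):
\begin{align*}
\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}
&= \boldsymbol{\pi}^{(N)}({\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi})
 = \boldsymbol{\pi}^{(N)}({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}} \\
&= \boldsymbol{\pi}^{(N)}\big({\boldsymbol{I}} - {\boldsymbol{P}}^{(N)}\big){\boldsymbol{H}}
 + \boldsymbol{\pi}^{(N)}\big({\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}\big){\boldsymbol{H}}
 = \boldsymbol{\pi}^{(N)}\big({\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}\big){\boldsymbol{H}},
\end{align*}
where the first equality uses $\boldsymbol{\pi}^{(N)}{\boldsymbol{e}}\boldsymbol{\pi} = \boldsymbol{\pi}$ and the last uses $\boldsymbol{\pi}^{(N)}{\boldsymbol{P}}^{(N)} = \boldsymbol{\pi}^{(N)}$; this indicates that (5.2) takes the form $\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi} = \boldsymbol{\pi}^{(N)}\big({\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}\big){\boldsymbol{H}}$.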
5.2. Uniform convergence under the second-order moment condition
This subsection presents two theorems, Theorems 5.3 and 5.4. Theorem 5.3 shows that $\big\{\boldsymbol{\pi}^{(N)}\big\}$ converges uniformly to $\boldsymbol{\pi}$ at a rate of $o\big(N^{-1}\big)$ under the second-order moment condition (Assumption 3.1). Theorem 5.4 shows that the tail probabilities of $\boldsymbol{\pi}^{(N)}=\big(\boldsymbol{\pi}^{(N)}(0),\boldsymbol{\pi}^{(N)}(1),\dots,\boldsymbol{\pi}^{(N)}(N)\big)$ are bounded from above by the corresponding tail probabilities of $\boldsymbol{\pi}=(\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),{\dots})$ and the gap between them is $o\big(N^{-1}\big)$ .
Theorem 5.3. If Assumptions 2.1 and 3.1 hold, then $\| \boldsymbol{\pi}^{(N)} - \boldsymbol{\pi} \| = o\big(N^{-1}\big)$ .
Proof. It suffices to show that
because
where the second equality follows from $\boldsymbol{\pi}(k) =\underline{\boldsymbol{\mathcal{O}}}(k^{-2})$ owing to Theorem 3.4(i).
In what follows, we prove (5.13). Equation (4.8) shows that ${\boldsymbol{v}}^{\prime}(n)> {\boldsymbol{v}}^{\prime}(N)$ for $n \in\mathbb{Z}_{\ge N+1}$ , and thus Lemma 4.4(iii) yields
where $C > 0$ is some constant. Applying (5.14) to (5.5) results in
Furthermore, (4.8) implies that there exists some $C^{\prime} > 0$ such that
Combining (5.15) with (5.16), (3.1), and $\boldsymbol{\pi}^{(N)}(\ell) = \underline{\boldsymbol{\mathcal{O}}}\big(\ell^{-2}\big)$ (owing to Theorem 3.4(ii)), we obtain
which shows that (5.13) holds. The proof is complete.
Theorem 5.4. Suppose that Assumptions 2.1 and 3.1 are satisfied. For $k \in \mathbb{Z}_+$ , let $\overline{\boldsymbol{\pi}}(k)\,:\!=\,(\overline{\pi}(k,i))_{i \in \mathbb{M}_1}$ and $\overline{\boldsymbol{\pi}}^{(N)}(k)\,:\!=\,\big(\overline{\pi}^{(N)}(k,i)\big)_{i \in \mathbb{M}_1}$ denote
respectively, where $\boldsymbol{\pi}^{(N)}(k) = {\textbf{0}}$ for $k \in \mathbb{Z}_{\ge N+1}$ . We then have
Proof. See Appendix E.
5.3. Subgeometric convergence
In this subsection, we first provide preliminaries on heavy-tailed distributions and subgeometric functions, and then present the main theorem and its corollaries on the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ .
5.3.1. Preliminaries: heavy-tailed distributions and subgeometric functions
We introduce the class of heavy-tailed distributions on the domain $\mathbb{Z}_+$ together with two of its subclasses. To this end, let $F\,:\,\mathbb{Z}_+\to [0,1]$ denote a probability distribution (function), and let $\overline{F} = 1 - F$ denote the complementary distribution (function) of F. Furthermore, let $F^{*2}$ denote the twofold convolution of F with itself, i.e., $F^{*2}(k) = \sum_{\ell=0}^k \big(F(\ell) - F(\ell-1)\big) F(k-\ell)$ for $k \in \mathbb{Z}_+$, where $F({-}1) = 0$.
Definition 5.5. (Heavy-tailed, long-tailed, and subexponential distributions.)
-
(i) A distribution F is said to be heavy-tailed if and only if
\begin{equation*}\limsup_{k \to \infty} e^{\theta k} \overline{F}(k) = \infty\quad \text{for any}\ \theta > 0.\end{equation*}The class of heavy-tailed distributions is denoted by $\mathcal{H}$ . -
(ii) A distribution F is said to be long-tailed if and only if
\begin{equation*}\lim_{k \to \infty}{\overline{F}(k+\ell) \over \overline{F}(k)} = 1\quad \text{for any fixed}\ \ell \in \mathbb{N}.\end{equation*}The class of long-tailed distributions is denoted by $\mathcal{L}$ . -
(iii) A distribution F is said to be subexponential if and only if
\begin{equation*}\lim_{k \to \infty}{1 - F^{*2}(k) \over \overline{F}(k)} = 2.\end{equation*}The class of subexponential distributions is denoted by $\mathcal{S}$ .
Remark 5.6. The following inclusion relation holds for the above three classes: $\mathcal{S} \subsetneq \mathcal{L} \subsetneq \mathcal{H}$ (see [Reference Foss, Korshunov and Zachary9, Lemmas 2.17 and 3.2; Section 3.7]).
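Definition 5.5(iii) can be checked numerically for concrete distributions. The following sketch (Python) uses a hypothetical discrete Pareto-type distribution, a standard example of a subexponential distribution, and evaluates the ratio $\big(1 - F^{*2}(k)\big)/\overline{F}(k)$, which should approach 2.

```python
import numpy as np

K = 5000                                     # working range for the numerics
k = np.arange(K + 1)

# Pareto-type tail: F_bar(k) = (1 + k)^(-1.5); f is the corresponding pmf on Z_+.
F_bar = (1.0 + k) ** (-1.5)
f = np.empty(K + 1)
f[0] = 1.0 - F_bar[0]
f[1:] = F_bar[:-1] - F_bar[1:]

# Two-fold convolution F*2 and the ratio in Definition 5.5(iii).
f2 = np.convolve(f, f)[:K + 1]
F2 = np.cumsum(f2)
ratio = (1.0 - F2) / F_bar
for kk in (10, 100, 1000, 5000):
    print(kk, ratio[kk])                     # approaches 2 as kk grows
```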
Next we introduce the class of subgeometric functions.
Definition 5.7. (Subgeometric functions.) A function $r\,:\,\mathbb{Z}_+\to\mathbb{R}_+\,:\!=\,[0,\infty)$ is said to be a subgeometric function if and only if $\log r(k) = o(k)$ as $k \to \infty$ . The class of subgeometric functions is denoted by $\varTheta$ .
Proposition 5.8. If $F \in \mathcal{L}$ , then $\overline{F} \in \varTheta$ .
Proof. Assuming $F \in \mathcal{L}$ , we prove that
By definition,
Therefore, for any $\varepsilon > 0$ , there exists some $k_0 \in \mathbb{N}$ such that $\big|\log \overline{F}(k+1) - \log \overline{F}(k)\big| < \varepsilon$ for all $k \in \mathbb{Z}_{\ge k_0}$ , which yields
Letting $n\to\infty$ and then $\varepsilon \downarrow 0$ in the above inequality, we obtain
which implies that (5.17) holds. The proof is complete.
Remark 5.9. In relation to the class $\varTheta$ , there is a class $\varLambda$ of subgeometric rate functions introduced in [Reference Nummelin and Tuominen37]. A function $\psi\,:\,\mathbb{Z}_+\to\mathbb{R}_+$ belongs to the class $\varLambda$ if and only if
for some $\psi_0\,:\,\mathbb{Z}_+\to \mathbb{R}_+$ such that $\psi_0(1) \ge 2$ and $\psi_0$ is nondecreasing with $\log \psi_0(k)/k \searrow 0$ as $k \to \infty$ . By definition, $\varLambda \subset \varTheta$ .
5.3.2. The main theorem and its corollaries
We now make an additional assumption on $\{{\boldsymbol{A}}(k)\}$ and $\{{\boldsymbol{B}}(k)\}$ to study the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ .
Assumption 5.10. There exists a distribution $F \in \mathcal{S}$ such that
where either ${\boldsymbol{c}}_{A}\neq{\textbf{0}}$ or ${\boldsymbol{c}}_{B}\neq{\textbf{0}}$ .
Assumption 5.10 can be interpreted as a condition on the distribution of level increments in steady state. We define $\Delta_+ = \max(X_1-X_0,0)$ and call it the nonnegative level increment. We then define $\varGamma(k)$ , $k \in \mathbb{Z}_+$ , as
We call $\varGamma$ the stationary nonnegative level-increment (SNL) distribution. Assumption 2.1 ensures that the SNL distribution $\varGamma$ has a finite positive mean $\mu_{\pi}\,:\!=\,\boldsymbol{\pi}(0) \overline{{\boldsymbol{m}}}_{B} + \overline{\boldsymbol{\pi}}(0) \overline{{\boldsymbol{m}}}_{A}^+ > 0$ , where
We also define $\overline{\varGamma}_I(k) = 1 - \varGamma_I(k)$ for $k \in \mathbb{Z}_+$ , where $\varGamma_I$ denotes the integrated tail distribution (the equilibrium distribution) of the SNL distribution $\varGamma$ , that is,
It follows from (5.20), (5.19), and Assumption 5.10 that
Since $F \in \mathcal{S}$ , we have $\varGamma_I \in \mathcal{S} \subset \mathcal{L}$ (see [Reference Foss, Korshunov and Zachary9, Corollary 3.13]) and thus $\overline{\varGamma}_I \in \varTheta$ by Proposition 5.8.
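The integrated tail distribution can likewise be formed numerically from a given SNL distribution. The sketch below (Python) assumes the usual definition $\varGamma_I(k) = \mu_{\pi}^{-1}\sum_{\ell=0}^{k-1}\overline{\varGamma}(\ell)$ with $\overline{\varGamma} = 1 - \varGamma$ and $\mu_{\pi} = \sum_{\ell=0}^{\infty}\overline{\varGamma}(\ell)$, since the displayed equation (5.20) is not reproduced here; the geometric tail used as input is hypothetical and serves only as a sanity check.

```python
import numpy as np

def integrated_tail(Gamma_bar):
    """Complementary integrated tail (equilibrium) distribution.

    Gamma_bar[l] = 1 - Gamma(l) for l = 0, 1, ..., K.  Assumed definition:
        Gamma_I(k) = (1/mu) * sum_{l=0}^{k-1} Gamma_bar(l),  mu = sum_l Gamma_bar(l).
    """
    mu = Gamma_bar.sum()                                   # (truncated) mean
    Gamma_I = np.concatenate(([0.0], np.cumsum(Gamma_bar)[:-1])) / mu
    return 1.0 - Gamma_I

# Hypothetical geometric SNL tail: Gamma_bar(l) = 0.5^l, whose integrated tail
# is again geometric, so the output should read 1, 0.5, 0.25, ...
Gamma_bar = 0.5 ** np.arange(201)
print(integrated_tail(Gamma_bar)[:5])
```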
Assumption 5.10 yields a subexponential asymptotic formula for the stationary distribution $\boldsymbol{\pi}$ of the infinite-level chain, as shown in the following proposition. This proposition contributes to the proof of Theorem 5.12 below (see (F.9) in Appendix F.1).
Proposition 5.11. ([Reference Masuyama30, Theorem 3.1].) If Assumptions 2.1 and 5.10 hold, then
We have arrived at the main theorem of this paper.
Theorem 5.12. If Assumptions 2.1, 3.1, and 5.10 hold, then
and thus, as $N\to\infty$ , $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ converges to zero according to a subgeometric function $\overline{F}$ (see Proposition 5.8).
Proof. See Appendix F.
There are two other versions of the subgeometric convergence formula (5.23).
Corollary 5.13. If all the conditions of Theorem 5.12 are satisfied, then
Proof. The formulas (5.24) and (5.25) follow from the combination of Theorem 5.12 with (5.21) and (5.22), respectively.
Theorem 5.12 and Corollary 5.13 give three types of subgeometric convergence formulas for the relative difference $[\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)] / \boldsymbol{\pi}(k){\boldsymbol{e}}$ .
Corollary 5.14. If all the conditions of Theorem 5.12 are satisfied, then
Finally, we consider the case where ${\boldsymbol{A}}$ is reducible. We conjecture that the convergence formula (5.23) still holds if ${\boldsymbol{A}}$ is not irreducible but has a single communicating class. However, the irreducibility of ${\boldsymbol{A}}$ is assumed in most of the existing results used to prove Theorem 5.12, so confirming this conjecture would require removing the irreducibility assumption on ${\boldsymbol{A}}$ from those results. We can also apply Theorem 5.12 to the case where ${\boldsymbol{A}}$ has more than one communicating class but no transient states; i.e., ${\boldsymbol{A}}(k)$, $k \in \mathbb{Z}_{\ge -1}$, can be rearranged into the block-diagonal form
where s is some positive integer. In this case, by observing the finite-level chain $\Big\{\Big(X_n^{(N)},J_n^{(N)}\Big)\Big\}$ only when it is in $\mathbb{L}_0 \cup \big(\mathbb{Z}_{[1,N]} \times \mathbb{M}_1^{(i)}\big)$ ( $i=1,2,\dots,s$ ), we obtain s (censored) finite-level chains covered by Theorem 5.12, and thus a result such as (5.23) for each of them (under appropriate technical assumptions). Integrating the results obtained in this way may yield a convergence formula for the original finite-level chain $\Big\{\Big(X_n^{(N)},J_n^{(N)}\Big)\Big\}$, but this work would be somewhat tedious.
5.4. Application to the MAP/GI/1/N queue
This subsection applies the main theorem (Theorem 5.12) to the loss probability in the MAP/GI/1/N queue (see e.g. [Reference Baiocchi3]). First, we introduce the Markovian arrival process (MAP) [Reference Lucantoni, Meier-Hellstern and Neuts24]; the symbols '${\boldsymbol{D}}_0$' and '${\boldsymbol{D}}_1$' denote the two matrices that characterize the MAP, following the convention of the representative paper [Reference Lucantoni23] on MAPs and batch MAPs (these two matrices, of course, have no relation to the deviation matrix ${\boldsymbol{D}}$). Next, we describe the MAP/GI/1/N queue and then construct a finite-level M/G/1-type Markov chain from the queue length process in this queue. Finally, we derive a subexponential asymptotic formula for the loss probability of the MAP/GI/1/N queue.
We introduce the MAP [Reference Lucantoni, Meier-Hellstern and Neuts24]. The MAP has a background (Markov) chain, denoted by $\{\varphi(t);\, t \ge 0\}$, which is defined on a finite state space $\mathbb{M} \,:\!=\, \{1,2,\dots,M\} \subset \mathbb{N}$ and is characterized by an irreducible infinitesimal generator ${\boldsymbol{Q}}$. The generator ${\boldsymbol{Q}}$ is expressed as the sum of two matrices ${\boldsymbol{D}}_0$ and ${\boldsymbol{D}}_1$, i.e., ${\boldsymbol{Q}} = {\boldsymbol{D}}_0 + {\boldsymbol{D}}_1$, where ${\boldsymbol{D}}_1$ is a nonnegative matrix and ${\boldsymbol{D}}_0$ is a diagonally dominant matrix with negative diagonal elements and nonnegative off-diagonal elements. Note that ${\boldsymbol{D}}_1$ governs the transitions of the background chain $\{\varphi(t)\}$ accompanied by an arrival, while ${\boldsymbol{D}}_0$ governs its transitions without an arrival. Thus, the MAP is characterized by the pair of matrices $({\boldsymbol{D}}_0, {\boldsymbol{D}}_1)$. For later use, let $\boldsymbol{\varphi}$ denote the stationary distribution vector of the irreducible background chain $\{\varphi(t)\}$ with infinitesimal generator ${\boldsymbol{Q}}$, i.e., $\boldsymbol{\varphi}{\boldsymbol{Q}} = {\textbf{0}}$, $\boldsymbol{\varphi}{\boldsymbol{e}}=1$, and $\boldsymbol{\varphi} > {\textbf{0}}$. Furthermore, let $\lambda = \boldsymbol{\varphi}{\boldsymbol{D}}_1{\boldsymbol{e}}$, which is the arrival rate of the MAP $({\boldsymbol{D}}_0,{\boldsymbol{D}}_1)$. To avoid trivial cases, assume that ${\boldsymbol{D}}_1 \ge {\boldsymbol{O}}$ with ${\boldsymbol{D}}_1 \neq {\boldsymbol{O}}$, and thus $\lambda > 0$.
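To fix ideas, the following sketch (Python) builds a small hypothetical MAP $({\boldsymbol{D}}_0, {\boldsymbol{D}}_1)$, computes the stationary vector $\boldsymbol{\varphi}$ of ${\boldsymbol{Q}} = {\boldsymbol{D}}_0 + {\boldsymbol{D}}_1$, and evaluates the arrival rate $\lambda = \boldsymbol{\varphi}{\boldsymbol{D}}_1{\boldsymbol{e}}$.

```python
import numpy as np

# Hypothetical two-state MAP: D0 has negative diagonal and nonnegative
# off-diagonal entries, D1 is nonnegative, and (D0 + D1) e = 0.
D0 = np.array([[-3.0,  1.0],
               [ 0.5, -2.0]])
D1 = np.array([[ 1.5,  0.5],
               [ 0.5,  1.0]])
Q = D0 + D1
assert np.allclose(Q.sum(axis=1), 0.0)       # Q is an infinitesimal generator

# Stationary vector: phi Q = 0, phi e = 1.
M = Q.shape[0]
lhs = np.vstack([Q.T, np.ones(M)])
rhs = np.zeros(M + 1); rhs[-1] = 1.0
phi = np.linalg.lstsq(lhs, rhs, rcond=None)[0]

lam = phi @ D1 @ np.ones(M)                  # arrival rate of the MAP
print('phi =', phi, ' lambda =', lam)
```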
We describe the MAP/GI/1/N queue and related notions. Consider a queueing system with a single server and a buffer of capacity $N \in \mathbb{N}$ . Customers arrive at the system according to MAP $({\boldsymbol{D}}_0,{\boldsymbol{D}}_1)$ . Arriving customers are served on a first-come, first-served (FCFS) basis. If the waiting room is not full when a customer arrives, the customer is accepted into the system; otherwise, the customer is rejected. In addition, if the server is idle, the accepted customer’s service begins immediately; otherwise, the customer waits for his or her service turn. The service times of accepted customers are assumed to be independent and identically distributed according to a general distribution function G(t) ( $t \ge 0$ ) with finite mean $\mu > 0$ . The traffic intensity of our queue is denoted by $\rho \,:\!=\, \lambda \mu \in (0,\infty)$ .
We establish a finite-level M/G/1-type Markov chain from the queue length process in the MAP/GI/1/N queue. For $n \in \mathbb{N}$ , let $L_n^{(N)} \in \mathbb{Z}_{[0,N]}$ and $\varphi_n \in \mathbb{M}$ denote the queue length and the background state, respectively, immediately after the nth departure of a customer. It then follows (see e.g. [Reference Baiocchi3]) that the discrete-time bivariate process $\{(L_n^{(N)},\varphi_n);\,n \in \mathbb{Z}_+\}$ is a finite-level M/G/1-type Markov chain on state space $\mathbb{Z}_{[0,N]} \times \mathbb{M}$ with transition probability matrix ${\boldsymbol{T}}^{(N)}$ (see [Reference Baiocchi3, Equation (4) and p. 871]) given by
where, for $k \in \mathbb{Z}_+$ ,
and the sequence $\{{\boldsymbol{E}}(k);\,k \in \mathbb{Z}_+\}$ satisfies
Note here (see [Reference Masuyama, Liu and Takine33, Section 3]) that $\sum_{k=0}^{\infty}{\boldsymbol{E}}(k)$ is an irreducible and stochastic matrix satisfying the conditions
Equations (5.27) and (5.28) imply that the finite-level chain $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ can reach any state in its state space from any state in $\mathbb{Z}_{[1,N]} \times \mathbb{M}$ . Note also that $({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1 \ge {\boldsymbol{O}}$ and $ ({\boldsymbol{D}}_0 + {\boldsymbol{D}}_1){\boldsymbol{e}} ={\boldsymbol{Q}}{\boldsymbol{e}} ={\textbf{0}} $ . Therefore, $({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1$ is a stochastic matrix. Combining this fact, (5.26), and (5.27), we obtain
Furthermore, (5.27)–(5.29) imply that $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ is irreducible and thus has a unique stationary distribution vector, denoted by
where $\boldsymbol{\phi}^{(N)}(k)$ , $k \in \mathbb{Z}_{[0,N]}$ , is a $1 \times M$ vector with the following interpretation:
We present a formula for the loss probability, denoted by $P_{\textrm{loss}}^{(N)}$ , in the MAP/GI/1/N queue. The loss probability $P_{\textrm{loss}}^{(N)}$ is equal to the fraction of customers who are rejected because there is no room in the buffer upon their arrival. There is a well-known formula for the loss probability $P_{\textrm{loss}}^{(N)}$ in the MAP/GI/1/N queue (see e.g. [Reference Baiocchi3, Equation (9)]):
The loss probability $P_{\textrm{loss}}^{(N)}$ is expressed by the stationary distribution (if it exists) of the infinite-level limit of the finite-level chain $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ . Let ${\boldsymbol{T}}$ denote
which is the transition probability matrix of the embedded Markov chain of the queue length process at the departure points in the MAP/GI/1 queue, i.e., the infinite-buffer version of the MAP/GI/1/N queue. As with ${\boldsymbol{T}}^{(N)}$ , the transition probability matrix ${\boldsymbol{T}}$ is irreducible by (5.27)–(5.29). To proceed further, assume that
which ensures that ${\boldsymbol{T}}$ has a unique stationary vector, denoted by
where $\boldsymbol{\phi}(k)$ is a $1 \times M$ vector for all $k \in \mathbb{Z}_+$ .
We now introduce the assumption used to derive a subexponential asymptotic formula for the loss probability $P_{\textrm{loss}}^{(N)}$.
Assumption 5.15. Let $G_I$ denote the integrated tail distribution of the service time distribution G, i.e.,
The integrated tail distribution $G_I$ is second-order long-tailed (see [Reference Masuyama27, Definition 1.1]), i.e.,
where $\overline{G}_I(t) = 1 - G_I(t)$ for $t \ge 0$ .
Remark 5.16. According to [Reference Jelenković, Momčilović and Zwart14, Definition 1], the distribution $G_I$ is said to be square-root-insensitive if and only if (5.31) holds.
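To see what Assumption 5.15 entails in a concrete case, the following sketch takes a Pareto-type (Lomax) service distribution $G(t) = 1 - (b/(b+t))^{\alpha}$. Under the standard definition $G_I(t) = \mu^{-1}\int_0^t \overline{G}(u)\,\mathrm{d}u$ with $\mu = b/(\alpha-1)$, the integrated tail has the closed form $\overline{G}_I(t) = (b/(b+t))^{\alpha-1}$, which is regularly varying and hence subexponential. The ratio checked numerically below is the square-root-insensitivity condition of Remark 5.16, written here as $\overline{G}_I(t-\sqrt{t})/\overline{G}_I(t) \to 1$; since the displayed condition (5.31) is not reproduced above, this particular form is an assumption.

import numpy as np

alpha, b = 2.5, 1.0   # illustrative Pareto-type parameters

def Gbar_I(t):
    # Closed-form integrated tail for the Lomax service distribution.
    return (b / (b + t)) ** (alpha - 1.0)

for t in [1e1, 1e2, 1e4, 1e6]:
    ratio = Gbar_I(t - np.sqrt(t)) / Gbar_I(t)
    print(f"t = {t:9.0e}   Gbar_I(t - sqrt(t)) / Gbar_I(t) = {ratio:.6f}")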
Under Assumption 5.15, we establish two lemmas (Lemmas 5.17 and 5.18) and then prove Theorem 5.19, which presents a subexponential asymptotic formula for the loss probability $P_{\textrm{loss}}^{(N)}$.
Lemma 5.17. Suppose that Assumption 5.15 holds and let
If $G_I \in \mathcal{S}$ , then
and
Proof. Equation (5.33) follows from [Reference Masuyama30, Theorem 4.2(iii)]. It also follows from (5.33), (5.26), and $({-}{\boldsymbol{D}}_0)^{-1} {\boldsymbol{D}}_1{\boldsymbol{e}} = {\boldsymbol{e}}$ that
which shows that (5.32) holds. Combining (5.32), (5.33), and [Reference Masuyama30, Theorem 3.1] yields
which shows that (5.34) holds.
Lemma 5.18. Suppose that Assumption 5.15 holds. If $G_I \in \mathcal{S}$ , then
Theorem 5.19. Suppose that Assumption 5.15 holds. If $G_I \in \mathcal{S}$ , then
where $f(x) \stackrel{x}{\sim} g(x)$ means that $\displaystyle\lim_{x \to \infty}f(x)/g(x) = 1$.
Proof. It follows from (5.30) and [Reference Lucantoni23, Equation (54)] that
and thus
Applying (5.35) to (5.38) and using (5.37), we obtain
which implies that $P_{\textrm{loss}}^{(N)} \stackrel{N}{\sim} \rho \overline{G}_I(N/\lambda)$ . Combining this and (5.34) results in (5.36).
To the best of our knowledge, Theorem 5.19 is a new result on the asymptotics of the loss probability in the MAP/GI/1/N queue and its variants. Baiocchi [Reference Baiocchi3] studied the loss probability in the same queue as ours, but he derived a geometric asymptotic formula, not a subexponential one. Theorem 5.19 can also be extended to batch-arrival and service-vacation models. For example, we can derive a subexponential asymptotic formula for the loss probability in a BMAP/GI/1/N queue with vacations, and such a result would generalize the asymptotic formulas presented in Liu and Zhao [Reference Liu and Zhao22].
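As a rough numerical illustration of the decay rate identified in the proof of Theorem 5.19, namely $P_{\textrm{loss}}^{(N)} \stackrel{N}{\sim} \rho\,\overline{G}_I(N/\lambda)$, the sketch below evaluates $\rho\,\overline{G}_I(N/\lambda)$ for the Pareto-type integrated tail used above; the parameter values are illustrative assumptions, and the point is only that this quantity decays polynomially in $N$, in contrast to the geometric decay obtained for light-tailed service times.

lam, rho = 1.7, 0.51    # illustrative arrival rate and traffic intensity
alpha, b = 2.5, 1.0     # Pareto-type integrated tail, as in the earlier sketch

def Gbar_I(t):
    return (b / (b + t)) ** (alpha - 1.0)

for N in [10, 100, 1000, 10000]:
    print(f"N = {N:6d}   rho * Gbar_I(N / lambda) = {rho * Gbar_I(N / lam):.3e}")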
6. Concluding remarks
Theorem 5.12, the main theorem of this paper, presents the subgeometric convergence formula for the difference between $\boldsymbol{\pi}^{(N)}(k)$ and $\boldsymbol{\pi}(k)$, which are the respective stationary probability vectors of level k in the finite-level and infinite-level M/G/1-type Markov chains. Theorem 5.12, together with Corollaries 5.13 and 5.14, provides the following three insights into the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ :
(i) The convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ is subgeometric if the integrated tail distribution $\varGamma_I$ of the SNL distribution $\varGamma$ is subexponential.
(ii) The subgeometric convergence rate of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ is equal to the decay rate of $\overline{\varGamma}_I(N)$ and $\overline{\boldsymbol{\pi}}(N)$ .
(iii) The decay rate of $\|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\| / (\boldsymbol{\pi}(k){\boldsymbol{e}})$ is independent of the level value k.
Two challenging problems remain for future work. The first is to derive geometric convergence formulas for $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$. A typical formula of this kind would take the following form:
for some $\gamma \in (0,1)$ and some nonnegative vector $\boldsymbol{\zeta}(k) \neq {\textbf{0}}$. Once such a formula is obtained, we can derive a geometric asymptotic formula for the loss probability in M/G/1-type queues. The second problem is to study the convergence rate of the total variation distance $\|\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}\|=\sum_{k=0}^{N}|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)|{\boldsymbol{e}} + \overline{\boldsymbol{\pi}}(N){\boldsymbol{e}}$ in the cases of geometric and subgeometric convergence. The second problem would be more challenging than the first, since the order of the operators ‘ $\lim_{N\to\infty}$ ’ and ‘ $\sum_{k=0}^{\infty}$ ’ cannot, in general, be interchanged.
Appendix A. Proof of Proposition 2.4
We first present a key lemma (Lemma A.1) for the proof of Proposition 2.4, and then prove this proposition.
A.1. Key lemma for the proof
Lemma A.1. Let $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big);\,n\in\mathbb{Z}_+\Big\}$ denote a stochastic process such that
and
For later convenience, we refer to the value of $\big\{\breve{X}_n^{(N)}\big\}$ as the level. Furthermore, let $\breve{\tau}_{0}^{(N)} = \inf\big\{n \in \mathbb{N}\,:\, \breve{X}_n^{(N)} = 0\big\}$ ; then the following are true.
(i) $\breve{X}_n^{(N)} \le X_n$ for all $n=0,1,\dots,\breve{\tau}_0^{(N)}$ , and thus $\breve{\tau}_{0}^{(N)} \le \tau_0$ .
(ii) The stopped process $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big);\, 0 \le n \le \breve{\tau}_{0}^{(N)}\Big\}$ is driven by the finite-level M/G/1-type transition probability matrix ${\boldsymbol{P}}^{(N)}$ until it stops at time $\breve{\tau}_{0}^{(N)}$ (i.e., reaches level zero).
(iii) If $\Big(\breve{X}_0^{(N)},\breve{J}_0^{(N)}\Big) = \big(X_0^{(N)},J_0^{(N)}\big)$ , then $\breve{\tau}_{0}^{(N)}=\tau_{0}^{(N)}$ in distribution.
Proof. Since the statement (i) is obvious from (A.1) and (A.2), we prove the statements (ii) and (iii). Equations (A.1a) and (A.1b) (together with (A.2)) ensure that the level increment of $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ is equal to that of $\{(X_n, J_n)\}$ as long as $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ is below level N and has not yet reached level zero. Furthermore, (A.1c) (together with (A.2)) ensures that $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ stays at level N while $\{(X_n, J_n)\}$ evolves with the transition matrices ${\boldsymbol{A}}(0), {\boldsymbol{A}}(1), {\boldsymbol{A}}(2),\dots$ ; and it ensures that $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ moves down to level $N-1$ when $\{(X_n, J_n)\}$ moves down by one level with ${\boldsymbol{A}}({-}1)$ . These facts imply that the behavior of $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ follows the transition law expressed by (2.9). Therefore, the statement (ii) holds (a similar argument is given in [Reference Latouche and Ramaswami18, Theorem 8.3.1]). In addition, it follows from (A.1), (A.2), and ${\boldsymbol{B}}({-}1){\boldsymbol{e}} = {\boldsymbol{A}}({-}1){\boldsymbol{e}}$ that
Combining this result with the statement (ii) implies the statement (iii). The proof is complete.
A.2. Body of the proof
We begin with the proof of the statement (i). Since the infinite-level chain $\{(X_n, J_n)\}$ is irreducible, it reaches any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ . Let $N_{i,j}$ denote the maximum level in an arbitrarily chosen path from state (0, i) to state (0, j). Since $\mathbb{L}_0$ is finite, we have $N_0\,:\!=\,\max_{i,j \in \mathbb{M}_0} N_{i,j} < \infty$ . Furthermore, the similarity between the structures of ${\boldsymbol{P}}^{(N)}$ and ${\boldsymbol{P}}$ implies that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ behaves stochastically the same as the infinite-level chain $\{(X_n, J_n)\}$ unless the former reaches its upper boundary level N. Therefore, for any $N > N_0$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ reaches any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ .
Next, we prove the statement (ii). Suppose that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and recurrent. It then follows that the infinite-level chain $\{(X_n, J_n)\}$ reaches level zero with probability one from any state, and so does the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ owing to Lemma A.1(i).
Finally, we prove the statement (iii). Suppose that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and positive recurrent. There then exists a stochastic matrix $\boldsymbol{\varPi}^{(N)}\,:\!=\,\big(\varPi^{(N)}(k,i;\,\ell,j)\big)_{(k,i;\,\ell,j) \in \mathbb{L}_{\le N}^2}$ (see e.g. [Reference Doob8, Chapter V, Theorem 2.1]) such that
Therefore, the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has at least one stationary distribution vector, and each of them is a linear combination of the row vectors of $\boldsymbol{\varPi}^{(N)}$ (see e.g. [Reference Doob8, Chapter V, Theorem 2.4]). Furthermore, the statements (i) and (ii) imply that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has only one closed communication class and thus its stationary distribution vector is unique.
Appendix B. Proof of Theorem 4.1
First, we prove that ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1) with (4.5). To facilitate our discussion, we define $\widetilde{{\boldsymbol{H}}}\,:\!=\,\big(\widetilde{H}(k,i;\,\ell,j)\big)_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ as a matrix such that
which is equal to the first term of $H_{\mathbb{A}}(k,i;\,\ell,j)$ in (4.4). We then have
and thus
where $\widetilde{{\boldsymbol{H}}}(\mathbb{A})$ is a submatrix of $\widetilde{{\boldsymbol{H}}}$ partitioned as follows:
Solving the matrix equation (B.2) with respect to $\widetilde{{\boldsymbol{H}}}$ and decomposing its rows with the sets $\mathbb{A}$ and $\mathbb{B}$ , we have
where the sizes of the vectors ${\boldsymbol{e}}$ in (B.4a) and (B.4b) are equal to the cardinalities of $\mathbb{A}$ and $\mathbb{B}$ , respectively. Note (see [Reference Dendievel, Latouche and Liu7, Lemma 2.1]) that $\widetilde{{\boldsymbol{H}}}$ is a solution to the Poisson equation (4.1) (that is, $({\boldsymbol{I}} - {\boldsymbol{P}})\widetilde{{\boldsymbol{H}}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$ ) such that $\widetilde{H}(\boldsymbol{\alpha};\,\ell,j)=0$ for all $(\ell,j) \in \mathbb{S}$ . Using this fact and (B.2), we have
which shows that ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1). From (B.2), (B.3), and $\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}{\boldsymbol{e}}=1$ , we also have
and thus ${\boldsymbol{H}}_{\mathbb{A}}$ satisfies the constraint (4.5).
Next, we prove (4.7), which implies that ${\boldsymbol{H}}_{\mathbb{A}}$ is independent of $\boldsymbol{\alpha}$. Since the matrix $\widetilde{{\boldsymbol{H}}}$, defined in (B.1), is a solution to the Poisson equation (4.1), it follows from [Reference Dendievel, Latouche and Liu7, Theorem 2.5] that
Substituting (B.4a) into (B.6a) yields
Furthermore, (B.5) and (4.6) yield
Note here that ${\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}$ is nonsingular and
Therefore, pre-multiplying both sides of (B.7) by $\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\big)^{-1}$ and using (B.8) and (B.9), we obtain
which shows that (4.7a) holds. In addition, applying (B.4a) and (B.4b) to $\widetilde{{\boldsymbol{H}}}(\mathbb{A})$ and $\widetilde{{\boldsymbol{H}}}(\mathbb{B})$ , respectively, in (B.6b) and using $({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{e}}={\boldsymbol{e}}$ , we obtain
and thus
which shows that (4.7b) holds.
To complete the proof, it suffices to show that ${\boldsymbol{H}}_{\mathbb{A}}$ is the unique solution to the Poisson equation (4.1) with (4.5). Based on (B.7) and (B.8), we can consider ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ to be a solution to the system of equations
where ${\boldsymbol{X}}_{\mathbb{A}}(\mathbb{A}) $ is an unknown matrix of the same size as ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ . Let ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ denote an arbitrary solution to (B.10). Since ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ and ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ satisfy (B.10a), we have $\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}\big){\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) = $ $({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}) {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ , which leads to
This equation (together with [Reference Doob8, Chapter V, Theorem 2.4]) yields
where the last equality holds because ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ and ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ satisfy (B.10b). Therefore, ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ is the unique solution to the system of equations (B.10), and thus (4.7b) uniquely determines ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})$ . Consequently, ${\boldsymbol{H}}_{\mathbb{A}}$ is the unique solution to the Poisson equation (4.1) with (4.5). The proof is complete.
Appendix C. Proof of Lemma 4.4
We prove the statement (i). It follows from (2.1), (2.9), and (4.8) that ${\boldsymbol{P}}^{(N)}{\boldsymbol{v}}^{\prime} \le {\boldsymbol{P}}{\boldsymbol{v}}^{\prime}$ for $N \in \mathbb{N}$ and that, for $k \in \mathbb{Z}_{\ge 2}$ ,
where the second-to-last equality is due to (3.4). In a similar way, we can show that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(0;\,\ell){\boldsymbol{v}}^{\prime}(\ell) < \infty$ and $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(1;\,\ell){\boldsymbol{v}}^{\prime}(\ell) < \infty$ . Therefore, (4.9) holds for some $b^{\prime} \in (0,\infty)$ and $K^{\prime} \in \mathbb{N}$ .
Next, we prove the statement (ii). Let ${\boldsymbol{x}}_{(\ell,j)}$ , $(\ell,j) \in \mathbb{S}$ , denote the $(\ell,j)$ th column of an arbitrary solution ${\boldsymbol{X}}$ to the Poisson equation (4.1). By definition, ${\boldsymbol{x}}_{(\ell,j)}$ is a solution to a Poisson equation
Therefore, it follows from the statement (i) (which has been proved) and [Reference Glynn and Meyn10, Theorem 2.3] that there exists some $C_0 > 0$ such that $|{\boldsymbol{x}}_{(\ell,j)}| \le C_0 {\boldsymbol{v}}^{\prime}$ for any $(\ell,j) \in \mathbb{S}$ , and thus $|{\boldsymbol{X}}| \le C_0 {\boldsymbol{v}}^{\prime}{\boldsymbol{e}}^{\top}$ . It also follows from (4.8) and Theorem 3.4 that if Assumption 3.1 holds then $\boldsymbol{\pi}{\boldsymbol{v}}^{\prime} < \infty$ and thus $\boldsymbol{\pi}|{\boldsymbol{X}}| \le C_0 (\boldsymbol{\pi}{\boldsymbol{v}}^{\prime}){\boldsymbol{e}}^{\top}< \infty$ . Consequently, the statement (ii) is true.
In the following, we prove the statement (iii). Let ${\boldsymbol{w}}\,:\!=\,(w(k,i))_{(k,i) \in \mathbb{S}}$ denote an arbitrary column vector such that $|{\boldsymbol{w}}| \le {\boldsymbol{e}}$ . We then have
According to Theorem 4.1, ${\boldsymbol{H}}_{\mathbb{A}}$ satisfies the Poisson equation (4.1) and thus ${\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}$ satisfies
Therefore, we obtain an upper bound for $|{\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}|$ by using [Reference Glynn and Meyn10, Theorem 2.1], the statement (i) of Lemma 4.4, and ${\boldsymbol{e}} \le ({-}\sigma){\boldsymbol{v}}^{\prime}$ (by (4.8)):
Combining this bound and (C.1) results in
which shows that the statement (iii) holds. The proof is complete.
Appendix D. Proof of Proposition 4.5
We begin the proof of this proposition with the following lemma (its proof is given at the end of this section).
Lemma D.1. Under Assumption 2.1, Assumption 3.1 holds if and only if
where $\tau_{(0,j)} = \inf\{n\in\mathbb{N}\,:\, (X_n,J_n) = (0,j)\}$ for $j \in \mathbb{M}_0$ .
First, we prove the statement (i). Assumption 2.1 and the aperiodicity of ${\boldsymbol{P}}$ ensure that the Markov chain $\{(X_n, J_n)\}$ is ergodic. Thus, the deviation matrix ${\boldsymbol{D}}$ exists if and only if (D.1) holds (see [Reference Coolen-Schrijner and van Doorn5, Theorem 4.1] and [Reference Dendievel, Latouche and Liu7, Lemma 2.6]). The combination of this fact and Lemma D.1 results in the statement (i).
Next, we prove the statement (ii), assuming that ${\boldsymbol{D}}$ exists, or equivalently that Assumption 3.1 holds (this equivalence is due to the statement (i) of the present proposition). The first half of the statement (ii) follows from Lemma D.1 and [Reference Dendievel, Latouche and Liu7, Lemma 2.6]. It thus remains to prove (4.10). Recall that ${\boldsymbol{D}}$ and ${\boldsymbol{H}}_{\mathbb{A}}$ are solutions to the Poisson equation (4.1). Therefore, the combination of Lemma 4.4(ii) and Assumption 3.1 implies that $\boldsymbol{\pi} |{\boldsymbol{D}}| < \infty$ and $\boldsymbol{\pi} |{\boldsymbol{H}}_{\mathbb{A}}| < \infty$ . It follows from this result and [Reference Glynn and Meyn10, Proposition 1.1] that there exists some row vector $\boldsymbol{\beta}$ such that ${\boldsymbol{D}} = {\boldsymbol{H}}_{\mathbb{A}} + {\boldsymbol{e}}\boldsymbol{\beta}$ . Since $\boldsymbol{\pi}{\boldsymbol{D}}={\textbf{0}}$ , we have $\boldsymbol{\beta} = - \boldsymbol{\pi}{\boldsymbol{H}}_{\mathbb{A}}$ , which leads to (4.10).
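The relations used in this proof can be checked numerically on a small example. The following sketch (an illustrative three-state chain, unrelated to the chains studied in this paper) computes the deviation matrix of a finite ergodic chain as $({\boldsymbol{I}}-{\boldsymbol{P}}+{\boldsymbol{e}}\boldsymbol{\pi})^{-1}-{\boldsymbol{e}}\boldsymbol{\pi}$, verifies that it solves the Poisson equation $({\boldsymbol{I}}-{\boldsymbol{P}}){\boldsymbol{D}} = {\boldsymbol{I}}-{\boldsymbol{e}}\boldsymbol{\pi}$ with $\boldsymbol{\pi}{\boldsymbol{D}}={\textbf{0}}$, and confirms that any other solution ${\boldsymbol{H}}$ of the Poisson equation yields ${\boldsymbol{D}} = {\boldsymbol{H}} - {\boldsymbol{e}}\boldsymbol{\pi}{\boldsymbol{H}}$, mirroring the relation (4.10).

import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])   # illustrative ergodic transition matrix
n = P.shape[0]
e = np.ones((n, 1))

# Stationary vector: pi P = pi, pi e = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
pi = pi.reshape(1, n)

D = np.linalg.inv(np.eye(n) - P + e @ pi) - e @ pi   # deviation matrix

assert np.allclose((np.eye(n) - P) @ D, np.eye(n) - e @ pi)   # Poisson equation
assert np.allclose(pi @ D, 0.0)                               # pi D = 0

# Another solution of the Poisson equation, shifted by a rank-one term e beta:
H = D + e @ np.array([[1.0, -2.0, 0.5]])
assert np.allclose((np.eye(n) - P) @ H, np.eye(n) - e @ pi)
assert np.allclose(D, H - e @ (pi @ H))   # recover D as in (4.10)
print("deviation-matrix identities verified")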
We conclude this section by proving Lemma D.1.
Proof of Lemma D.1. Theorem 3.6 shows that Assumption 3.1 is equivalent to (3.29). Therefore, it suffices to prove the equivalence of (3.29) and (D.1). For all $j \in \mathbb{M}_0$ and $(k,i) \in \mathbb{S}$ , we have
where the last equality is due to the strong Markov property. Since $\{(X_n, J_n)\}$ is ergodic, there exists some constant $C>0$ such that
Combining this bound and (D.2) yields
Appendix E. Proof of Theorem 5.4
To prove this theorem, it suffices to show that
where the term $1 + o\big(N^{-1}\big)$ is independent of k. Let $\tau_0^{(N)} = \inf\big\{n \in \mathbb{N}\,:\,X_n^{(N)} = 0\big\}$ . It then follows from [Reference Meyn and Tweedie34, Theorem 10.0.1] that, for $\ell \in \mathbb{Z}_{[0,N-1]}$ and $i \in \mathbb{M}_1$ ,
where we use the simplified notation $\mathbb{E}_{(k,i)}^{(N)}[{\cdot}] \,:\!=\, \mathbb{E}\big[{\cdot}\mid \big(X_0^{(N)}, J_0^{(N)}\big)=(k,i)\big]$ , in addition to the notation $\mathbb{E}_{(k,i)}[{\cdot}]= \mathbb{E}[{\cdot}\mid (X_0, J_0)=(k,i)]$ introduced in Section 3.2. Note here that, for $k \in \mathbb{Z}_{[0,N-1]}$ and $j\in\mathbb{M}_0$ ,
which is an immediate consequence of Lemma A.1. Equations (E.4) and (E.3) yield, for $k \in \mathbb{Z}_{[0,N]}$ and $i \in \mathbb{M}_1$ ,
Theorem 5.3 also implies that $\boldsymbol{\pi}^{(N)}(0) = \big(1 + o\big(N^{-1}\big)\big)\boldsymbol{\pi}(0)$ . Applying this to (E.5) and using (E.2), we obtain
where the term $\big(1 + o\big(N^{-1}\big)\big)$ is independent of (k, i). Consequently, (E.1) holds.
Appendix F. Proof of Theorem 5.12
Theorem 5.12 is an immediate consequence of applying Lemmas F.1 and F.2 to (5.3). The two lemmas are proved in Sections F.1 and F.2, respectively.
Lemma F.1. If Assumptions 2.1, 3.1, and 5.10 hold, then
where ${\boldsymbol{S}}^{(N)}(n;\,k)$ is given in (5.4).
Lemma F.2. If Assumptions 2.1, 3.1, and 5.10 hold, then
F.1. Proof of Lemma F.1
We confirm that (F.1a) and (F.1b) hold if
Since ${\boldsymbol{G}}$ is stochastic, ${\boldsymbol{G}}^N = \overline{\boldsymbol{\mathcal{O}}}(1)$ . It thus follows from (5.4) that, for each $k \in \mathbb{Z}_+$ , there exists some finite $C_k > 0$ such that $|{\boldsymbol{S}}^{(N)}(n;\,k) | \le C_k {\boldsymbol{e}}{\boldsymbol{e}}^{\top}$ for all $n \ge N+1$ and $N \ge \max(k,1)$ . The bound for $|{\boldsymbol{S}}^{(N)}(n;\,k)|$ yields
Combining (F.3a) and (F.4a) results in (F.1a); and combining (F.3b) and (F.4b) results in (F.1b).
First, we prove (F.3a). Using (5.1), (5.18b), $F \in \mathcal{S} \subset \mathcal{L}$ , and Theorem 5.3, we obtain
which shows that (F.3a) holds.
Next, we prove (F.3b). Let $\Pi \in \mathbb{Z}_{[1,N]}$ and $\overline{A}$ denote independent integer-valued random variables. We then have
for $N \in \mathbb{N}$ . By analogy with this equation, the following hold:
Applying Theorem 5.4 to (F.7) and using (F.8), we obtain
Furthermore, it follows from (5.18a), Proposition 5.11, and [Reference Masuyama26, Proposition A.3] that
Combining (F.10) and (F.9) yields
and thus
which shows that (F.3b) holds. The proof is complete.
F.2. Proof of Lemma F.2
We prove only the second limit (F.2b), because the first one (F.2a) is implied by (F.5) in the proof of (F.3a). The following equation holds for the integer-valued random variables $\Pi \in \mathbb{Z}_{[1,N]}$ and $\overline{A}$ in (F.6):
Thus, we have
Combining (F.12) and (F.11) yields
Therefore, using (5.18a), $F \in \mathcal{S} \subset \mathcal{L}$ , Theorem 5.3, and the dominated convergence theorem, we obtain
which shows that (F.2b) holds. The proof is complete.
Acknowledgements
The research of H. Masuyama was supported in part by JSPS KAKENHI Grant No. JP21K11770.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.