1. Introduction
In the paper [1], the technical Lemmas 4.5 and 4.6 are incorrect. This cascaded into the proofs of Proposition 5.2 and Theorems 2.2, 2.3 and 2.4. Although some of the main theorems of the original paper were impacted, the ideas in [1] are robust enough to correct the original proof. In this corrigendum, we provide the necessary modifications to the statements and proofs.
Furthermore, Lemma 3.1(i) has a typo, which propagated to Proposition 4.2(i) and Theorem 2.2(ii). We give the corrected statements below and note that the proofs of these results remain correct.
Finally, Theorem 2.4 lacks a condition, which we provide below. Its proof remains essentially the same.
2. Lemma 3.1(i) and Proposition 4.2(i)
Lemma 3.1(i) was incorrectly quoted from [2, Theorem 3.3.5] and [3, Corollary V.8.1]. The correct statement is as follows.
Lemma 3.1.
(i) For every $f\in L^1(M,\rho )$ ,
$$ \begin{align*}\lim_{n\to \infty} \frac{1}{n} \sum_{i=0}^{n-1}T^i f = \eta \frac{\mathbb E_\rho[f \mid \mathcal I(T,\rho)]}{\mathbb E_\rho[ \eta \mid \mathcal I(T,\rho)]}\quad \rho\mbox{-almost surely (a.s.)}.\end{align*} $$
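In particular (an illustrative special case, not part of the corrected statement), if $\rho$ is a probability measure and the invariant $\sigma$-algebra $\mathcal I(T,\rho)$ is trivial mod $\rho$, then both conditional expectations reduce to constants and the limit reads
$$ \begin{align*}\lim_{n\to \infty} \frac{1}{n} \sum_{i=0}^{n-1}T^i f = \eta\, \frac{\mathbb E_\rho[f]}{\mathbb E_\rho[\eta]}\quad \rho\text{-a.s.}\end{align*} $$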
This typo also affected the statement of Proposition 4.2(i), which is corrected as follows.
(i) For every $f\in L^1(M,\mu ),$
$$ \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\lambda^i}{\mathcal P}^i f \xrightarrow{n\to\infty} \eta \int_M f(y)\,\mu(dy)\quad \text{in } L^1(M,\mu) \text{ and } \mu\text{-a.s.} \end{align*} $$
3. Lemmas 4.5 and 4.6
By relabelling $\{g_i\}_{i=0}^{m-1}$ and $\{C_i\}_{i=0}^{m-1}$ in Lemma 4.4, we may assume that the permutation $\sigma$ satisfies $\sigma(i) = i-1\ (\mathrm{mod}\ m)$. We write $g_j = g_{j\ (\mathrm{mod}\ m)}$ and $C_j = C_{j\ (\mathrm{mod}\ m)}$ for every $j\in \mathbb N$.
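For instance (purely as an illustration of this indexing convention), for $m=3$ we have
$$ \begin{align*}\sigma(0)=2,\quad \sigma(1)=0,\quad \sigma(2)=1,\qquad g_3=g_0,\quad g_4=g_1,\quad g_5=g_2,\qquad C_3=C_0,\quad C_4=C_1,\quad C_5=C_2.\end{align*} $$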
With this convention, Lemmas 4.5 and 4.6 should be combined into a single lemma and corrected as follows.
Lemma 4.5. Suppose the absorbing Markov chain $X_n$ satisfies Hypothesis H1. Then for every bounded and measurable function $h:M\to \mathbb {R}$ and $\ell \in \{0,1,\ldots ,m-1\}$ ,
and
Proof. Due to Proposition 4.2, there exist $\alpha_0,\ldots,\alpha_{m-1}\in \mathbb C$ and $v\in E_{\mathrm{aws}}$ such that $h = \sum_{s=0}^{m-1} \alpha_s g_s + v.$
Step 1. We show that $v \in E_{\mathrm {aws}}$ if and only if $\int _{C_i} v\,d \mu = 0$ for every $i\in \{0,1,\ldots , m-1\}.$ Suppose first that $v \in E_{\mathrm {aws}}$ . We claim that for all $i \in \{0,1,\ldots , m-1\}$ . Indeed, if , then with $\alpha _i \neq 0$ and $w \in E_{\mathrm {aws}}$ . Since $\mu (C_i \cap C_j) = 0$ for all $j \neq i$ , we obtain that $v \not \in E_{\mathrm {aws}}$ . It follows that
Conversely, assume that $\int_{C_i} v \,d \mu = 0$ for every $i\in \{0,1,\ldots,m-1\}.$ Write $v = \sum_{i=0}^{m-1}\alpha_i g_i + w$, with $w\in E_{\mathrm{aws}}$. Since $\int g_i\,d \mu = 1$, we have that $\alpha_i = \int_{C_i} \alpha_i g_i\,d\mu = \int_{C_i} \big(\sum_{j=0}^{m-1}\alpha_j g_j + w\big) \,d\mu = \int_{C_i} v\,d\mu = 0.$ We obtain that $\alpha_i = 0$ for every $i\in \{0,1,\ldots, m-1\},$ which implies $v\in E_{\mathrm{aws}}$.
Step 2. We show that equation (3.1) holds. Integrating $h = \sum_{s=0}^{m-1} \alpha_s g_s + v$ with respect to $\mu$ on $C_i$ and using Step 1, we obtain that $h = \sum_{s=0}^{m-1} g_s \int_{C_s} h\,d \mu + v.$
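Explicitly (spelling out the integration, using that $\int_{C_i} g_s\,d\mu = \delta_{is}$, as in the computation in Step 1, and that $\int_{C_i} v\,d\mu = 0$ by Step 1),
$$ \begin{align*}\int_{C_i} h\,d\mu = \sum_{s=0}^{m-1}\alpha_s \int_{C_i} g_s\,d\mu + \int_{C_i} v\,d\mu = \alpha_i\quad\text{for every } i\in\{0,1,\ldots,m-1\}.\end{align*} $$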
Therefore,
Step 3. We show that equation (3.2) holds and conclude the proof of the lemma. From Step $1$ , we have that for some $w\in E_{\mathrm {aws}}$ . Given $\ell \in \{0,1,\ldots ,m-1\}$ , define $n_\ell := m n +\ell $ . A direct computation implies that
On the one hand, we have that
From Step $2$ , we obtain that $J^{n_\ell } \xrightarrow []{n\to \infty } 0$ in $L^1(M,\mu )$ .
On the other hand, Step 2 yields that
Hence, $I_h^{n_\ell } + J_{h}^{n_\ell } \xrightarrow []{n\to \infty }\sum _{k=0}^{m-1} \mu (C_{\ell +k} )g_k \int _{M} h \eta \,d\mu $ in $L^1(M,\mu ),$ which concludes the proof of Step 3.
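We also record the cyclic shift behind the dependence on $\ell$ (a direct consequence of the relation $\mathcal P g_j = \lambda g_{j-1}$ in Theorem 2.2(iii) and of the mod-$m$ convention fixed above, stated here only for the reader's convenience): for $n_\ell = mn+\ell$,
$$ \begin{align*}\frac{1}{\lambda^{n_\ell}}\,{\mathcal P}^{n_\ell} g_j = g_{j-\ell\ (\mathrm{mod}\ m)}\quad\text{for every } j\in\mathbb N.\end{align*} $$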
4. Proposition 5.2
As a consequence of the corrected Lemma 4.5, Proposition 5.2 reads as follows.
Proposition 5.2. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H1. Suppose that one of the following items holds:
(a) there exists $K>0$ such that $\mu(\{K<\eta\}) = 1$;
(b) there exists $g\in L^1(M,\mu)$ such that $({1}/{\lambda^n})\,{\mathcal P}^n(x,M) \leq g$ for every $n\in \mathbb N$;
(c) the absorbing Markov chain $X_n$ fulfils Hypothesis H1.
Then for every $h\in L^\infty (M,\mu )$ and $\ell \in \{0,1,\ldots ,m-1\}$ ,
In addition,
Proof. The proof of the proposition assuming that either item (a) or item (b) holds remains mostly the same. The only correction to be made is on page 16, line 5, where the term $(1/\lambda)^n{\mathcal P}(x,M)$ should be replaced by $(1/\lambda)^n{\mathcal P}^n(x,M)$.
Now, we prove item (c). For every $j\in \mathbb N$, define the set $K_j:= \{x\in M; k(x,\cdot)\in L^{\infty}(M,\mu)\}$ and the bounded operator $\mathcal G_{j}:L^{1}(M,\mu)\to L^\infty(K_j,\mu)$. Applying ${\mathcal G}_j$ to equations (3.1) and (3.2) with $\ell-1$ in place of $\ell$, and using Lemma 4.5 together with the fact that $\mathcal G_j$ is a bounded operator, we obtain that the convergences in equations (4.1) and (4.2) hold for $\mu$-almost every $x\in K_j$. Finally, since Hypothesis H1 implies that $\mu(\bigcup_{j\geq 1} K_j) = 1$, we obtain the result.
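The mechanism is the standard upgrade of $L^1$-convergence to almost-everywhere convergence through a bounded operator into $L^\infty$ (a sketch, with $f_n \to f$ denoting either of the $L^1(M,\mu)$-convergences provided by Lemma 4.5):
$$ \begin{align*}\|\mathcal G_j f_n - \mathcal G_j f\|_{L^\infty(K_j,\mu)} \leq \|\mathcal G_j\|_{L^1\to L^\infty}\, \|f_n - f\|_{L^1(M,\mu)} \xrightarrow{n\to\infty} 0,\end{align*} $$
so $\mathcal G_j f_n \to \mathcal G_j f$ uniformly, and in particular $\mu$-almost everywhere, on $K_j$.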
5. Theorem 2.2
The corrections to Lemma 4.5 also affect Theorem 2.2.
Theorem 2.2. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H1. Then the following assertions hold:
(i) there exist a natural number $m\in \mathbb N$ and sets $C_0, C_1, \ldots , C_{m-1}=:C_{-1} \in \mathscr B(M)$ such that for every $i\in \{0,1,\ldots ,m-1\}$ ;
(ii) for every $f\in L^1(M,\mu)$, $({1}/{n})\sum_{i=0}^{n-1}({1}/{\lambda^i}){\mathcal P}^i f \xrightarrow{n\to\infty} \eta \int_M f(y)\,\mu(dy)$ in $L^1(M,\mu)$ and $\mu$-a.s.;
(iii) there exist non-negative functions $g_0,g_1,\ldots,g_{m-1}=:g_{-1}\in L^1(M,\mu)$, satisfying
$$ \begin{align*}{\mathcal P} g_{j} = \lambda g_{j-1}\quad \text{and}\quad \|g_j\|_{L^1(M,\mu)}=1\end{align*} $$
for every $j\in \{0,1,\ldots,m-1\}$, such that given $\ell \in \{0,1,\ldots,m-1\}$ and $h\in L^\infty(M,\mu)$, the following limit holds:
$$ \begin{align*} \frac{1}{\lambda^{nm+\ell}} {\mathcal P}^{nm+\ell} h \xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1}g_s \int_M h(x)\, \mu(dx);\end{align*} $$
(iv) if, in addition, we assume that $M$ is a Polish space, then for every $h\in L^\infty(M,\mu)$,
(5.1) $$ \begin{align} \bigg(x\mapsto \mathbb E_x \bigg[\frac{1}{n} \sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg]\bigg) \xrightarrow{n\to\infty} \int_M h(y) \eta(y)\, \mu(dy) \end{align} $$
in the $L^\infty(M,\mu)$-weak${}^*$ topology. In particular, we obtain that equation (5.1) also converges weakly in $L^1(M,\mu)$.
Proof. The proof of assertions (i), (ii) and (iii) remains unchanged. To prove assertion (iv), fix $\ell \in \{0,1,\ldots,m-1\}$. Repeating the proof of [1, Lemma 2.2], but with $g_{mn+\ell}$ in place of $g_{n}$, we obtain that $g_{mn+\ell}$ converges to the right-hand side of equation (5.1) in the $L^\infty(M,\mu)$-weak$^*$ topology. Since $\ell \in \{0,1,\ldots,m-1\}$ is arbitrary and the subsequences $\{mn+\ell\}_{n\in\mathbb N}$, $\ell \in \{0,1,\ldots,m-1\}$, cover $\mathbb N$, assertion (iv) follows.
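Regarding the final claim in assertion (iv), the passage from $L^\infty(M,\mu)$-weak$^*$ convergence to weak convergence in $L^1(M,\mu)$ rests on the following standard observation (a sketch, assuming that $\mu$ is a finite measure, so that $L^\infty(M,\mu)\subset L^1(M,\mu)$): writing $f_n$ for the functions on the left-hand side of equation (5.1), testing the weak$^*$ convergence against $g\in L^\infty(M,\mu)\subset L^1(M,\mu)$ gives
$$ \begin{align*}\int_M f_n\, g\,d\mu \xrightarrow{n\to\infty} \bigg(\int_M h(y)\eta(y)\,\mu(dy)\bigg)\int_M g\,d\mu\quad \text{for every } g\in L^\infty(M,\mu),\end{align*} $$
which is precisely weak convergence of the left-hand side of equation (5.1) in $L^1(M,\mu)$.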
7. Theorem 2.4
Theorem 2.4 requires an extra assumption.
Theorem 2.4. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H2, and suppose that ${\mathcal P} f|_{K_i}\in \mathcal C^0(K_i)$ for every $f\in L^1(M,\mu )$ and $i\in \mathbb N$ , where $\{K_i\}_{i\in \mathbb N}$ is the nested sequence of compact sets given by the second part of Hypothesis H2. Then, given $h\in L^\infty (M,\mu )$ , equation (2.3) holds for every $x\in (\bigcup _{i\in \mathbb N} K_i)\cap \{\eta>0\}.$
In the case where $m=1$ in Theorem 2.2(i), equation (2.4) holds for every $x\in ( \bigcup _{i\in \mathbb N} K_i)\cap \{\eta>0\}$ .
Proof. Observe that $\mathcal G_j:L^1(M,\mu) \to \mathcal C^0(K_j)$ is a bounded linear operator, since it is a positive operator between two Banach lattices [3, Theorem 5.3]. Then the proof follows from the same arguments as given in the new proof of Proposition 5.2(c) and equation (5.3).
Acknowledgment
The authors thank Bernat Bassols Cornudella for the valuable discussions and for pointing out some of the inaccuracies in the original publication.