1. Notation
Throughout, we shall use the following notation:
- $\mathbb{N} = \{1,2,\ldots\}$ and $\mathbb{Z}_+ = \{0\} \cup \mathbb{N}$.
- $\mathcal{M}_1(E)$ denotes the space of probability measures whose support is included in E.
- $\mathcal{B}(E)$ denotes the set of bounded measurable functions defined on E.
- $\mathcal{B}_1(E)$ denotes the set of measurable functions f defined on E such that $\|f\|_\infty \leq 1$.
- For all $\mu \in \mathcal{M}_1(E)$ and $p \in \mathbb{N}$, $\mathbb{L}^p(\mu)$ denotes the set of measurable functions $f \;:\; E \to \mathbb{R}$ such that $\int_E |f(x)|^p \mu(dx) < + \infty$.
- For any $\mu \in \mathcal{M}_1(E)$ and $f \in \mathbb{L}^1(\mu)$, we define \begin{equation*}\mu(f) \;:\!=\; \int_E f(x) \mu(dx).\end{equation*}
- For any positive function $\psi$, \begin{equation*}\mathcal{M}_1(\psi) \;:\!=\; \{\mu \in \mathcal{M}_1(E) \;:\; \mu(\psi) < + \infty\}.\end{equation*}
- Id denotes the identity operator.
2. Introduction
In general, an ergodic theorem for a Markov process $(X_t)_{t \geq 0}$ and a probability measure $\pi$ refers to the almost sure convergence
\begin{equation*}\frac{1}{t} \int_0^t f(X_s)\, ds \underset{t \to \infty}{\longrightarrow} \pi(f)\;\;\;\;\text{for all } f \in \mathcal{B}(E).\end{equation*}
In the time-homogeneous setting, such an ergodic theorem holds for positive Harris-recurrent Markov processes with the limiting distribution $\pi$ corresponding to an invariant measure for the underlying Markov process. For time-inhomogeneous Markov processes, such a result does not hold in general (in particular the notion of invariant measure is in general not well-defined), except for specific types of time-inhomogeneity such as periodic time-inhomogeneous Markov processes, defined as time-inhomogeneous Markov processes for which there exists $\gamma > 0$ such that, for any $s \leq t$, $k \in \mathbb{Z}_+$, and x,
\begin{equation*}\mathbb{P}[X_{t+k\gamma} \in \cdot \mid X_{s+k\gamma} = x] = \mathbb{P}[X_t \in \cdot \mid X_s = x].\end{equation*}
In other words, a time-inhomogeneous Markov process is periodic when the transition law between any times s and t remains unchanged when the time interval [s, t] is shifted by a multiple of the period $\gamma$. In particular, this implies that, for any $s \in [0,\gamma)$, the Markov chain $(X_{s+n \gamma})_{n \in \mathbb{Z}_+}$ is time-homogeneous. This fact allowed Höpfner et al. (in [20, 21, 22]) to show that, if the skeleton Markov chain $(X_{n \gamma})_{n \in \mathbb{Z}_+}$ is Harris-recurrent, then the chains $(X_{s + n \gamma})_{n \in \mathbb{Z}_+}$, for all $s \in [0,\gamma)$, are also Harris-recurrent and
\begin{equation*}\frac{1}{t} \int_0^t f(X_u)\, du \underset{t \to \infty}{\longrightarrow} \frac{1}{\gamma} \int_0^\gamma \pi_s(f)\, ds\;\;\;\;\text{almost surely,}\end{equation*}
where $\pi_s$ is the invariant measure for $(X_{s+n \gamma})_{n \in \mathbb{Z}_+}$ .
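The role of the skeleton chain can be illustrated numerically on a toy example (the kernels below are hypothetical and not taken from the references): for a 2-periodic discrete-time chain alternating between kernels $P_0$ and $P_1$, the skeleton chain observed once per period is time-homogeneous with kernel $P_0 P_1$, and its invariant measure plays the role of $\pi_0$.

```python
import numpy as np

# Toy 2-periodic discrete-time chain on {0,1,2} (hypothetical kernels):
# even steps use P0, odd steps use P1, so the skeleton chain sampled
# every gamma = 2 steps is time-homogeneous with kernel M = P0 @ P1.
P0 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.3, 0.3, 0.4]])
P1 = np.array([[0.2, 0.5, 0.3],
               [0.4, 0.4, 0.2],
               [0.25, 0.25, 0.5]])
M = P0 @ P1  # one-period transition kernel of the skeleton chain

# Invariant measure pi_0 of the skeleton chain: left Perron eigenvector of M.
w, v = np.linalg.eig(M.T)
pi0 = np.real(v[:, np.argmax(np.real(w))])
pi0 = pi0 / pi0.sum()

assert np.allclose(pi0 @ M, pi0)  # pi_0 M = pi_0: invariance
```

The same computation with $M = P_1 P_0$ would give the invariant measure $\pi_1$ of the skeleton chain started at the odd phase.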
This paper aims to prove a similar result for time-inhomogeneous Markov processes said to be asymptotically periodic. Roughly speaking (a precise definition will be given later), an asymptotically periodic Markov process is such that, given a time interval $T \geq 0$, its transition law on the interval $[s,s+T]$ is asymptotically ‘close to’ the transition law, on the same interval, of a periodic time-inhomogeneous Markov process called an auxiliary Markov process, when $s \to \infty$. This definition is very similar to the notion of asymptotic homogenization, defined as follows in [1, Subsection 3.3]. A time-inhomogeneous Markov process $(X_t)_{t \geq 0}$ is said to be asymptotically homogeneous if there exists a time-homogeneous Markovian semigroup $(Q_t)_{t \geq 0}$ such that, for all $s \geq 0$,
\begin{equation*}\|\delta_x P_{u,u+s} - \delta_x Q_s\|_{TV} \underset{u \to \infty}{\longrightarrow} 0,\end{equation*}
where, for two positive measures with finite mass $\mu_1$ and $\mu_2$, $\| \mu_1 - \mu_2\|_{TV}$ is the total variation distance between $\mu_1$ and $\mu_2$:
\begin{equation*}\| \mu_1 - \mu_2 \|_{TV} \;:\!=\; \sup_{f \in \mathcal{B}_1(E)} |\mu_1(f) - \mu_2(f)|.\end{equation*}
In particular, it is well known (see [1, Theorem 3.11]) that, under this and suitable additional conditions, an asymptotically homogeneous Markov process converges towards a probability measure which is invariant for $(Q_t)_{t \geq 0}$. It is similarly expected that an asymptotically periodic process has the same asymptotic properties as a periodic Markov process; in particular, an ergodic theorem holds for the asymptotically periodic process.
The main result of this paper states that an asymptotically periodic Markov process satisfies
\begin{equation*}\mathbb{P}_{0,\mu}[X_{s+n\gamma} \in \cdot\,] \underset{n \to \infty}{\longrightarrow} \beta_s\;\;\;\;\text{for all } s \in [0,\gamma),\end{equation*}
where $\mathbb{P}_{0,\mu}$ is a probability measure under which $X_0 \sim \mu$, and $\beta_s$ is the limiting distribution of the skeleton Markov chain $(X_{s+n \gamma})_{n \in \mathbb{Z}_+}$, provided the process satisfies a Lyapunov-type condition and a local Doeblin condition (defined in Section 3) and its auxiliary process satisfies a Lyapunov/minorization condition.
Furthermore, this convergence result holds almost surely if a Lyapunov function of the process $(X_t)_{t \geq 0}$, denoted by $\psi$, is integrable with respect to the initial measure $\mu$, that is, if $\mu(\psi) < + \infty$.
This will be more precisely stated and proved in Section 3.
The main motivation of this paper is to deal with quasi-stationarity with moving boundaries, that is, the study of asymptotic properties for the process X, conditioned not to reach some moving subset of the state space. In particular, such a study is motivated by models such as those presented in [3], which studies Brownian particles absorbed by cells whose volume may vary over time.
Quasi-stationarity with moving boundaries has been studied in particular in [24, 25], where a ‘conditional ergodic theorem’ (see the definition of a quasi-ergodic distribution below) has been shown when the absorbing boundaries move periodically. In this paper, we show that a similar result holds when the boundary is asymptotically periodic, assuming that the process satisfies a conditional Doeblin condition (see Assumption (A′)). This will be dealt with in Section 4.
The paper will be concluded by using these results in two examples: an ergodic theorem for an asymptotically periodic Ornstein–Uhlenbeck process, and the existence of a unique quasi-ergodic distribution for a Brownian motion confined between two symmetric asymptotically periodic functions.
3. Ergodic theorem for asymptotically periodic time-inhomogeneous semigroups
Asymptotic periodicity: the definition. Let $(E,{\mathcal E})$ be a measurable space. Consider a Markovian time-inhomogeneous semigroup $\{(E_t, {\mathcal E}_t)_{t \geq 0}, (P_{s,t})_{s \leq t}\}$, given by a family of measurable subspaces $(E_t, {\mathcal E}_t)_{t \geq 0}$ of $(E, {\mathcal E})$ and a family of linear operators $(P_{s,t})_{s \leq t}$, with $P_{s,t} \;:\; \mathcal{B}(E_t) \to \mathcal{B}(E_s)$, satisfying, for any $r \leq s \leq t$,
\begin{equation*}P_{r,s} P_{s,t} = P_{r,t},\;\;\;\; P_{t,t} = \text{Id}.\end{equation*}
In particular, associated to $\{(E_t, {\mathcal E}_t)_{t \geq 0}, (P_{s,t})_{s \leq t}\}$ is a Markov process $(X_t)_{t \geq 0}$ and a family of probability measures $(\mathbb{P}_{s,x})_{s \geq 0, x \in E_s}$ such that, for any $s \leq t$, $x \in E_s$, and $A \in {\mathcal E}_t$,
\begin{equation*}P_{s,t} \mathbb{1}_A(x) = \mathbb{P}_{s,x}[X_t \in A].\end{equation*}
For any probability measure $\mu$ supported on $E_s$, we write $\mathbb{P}_{s,\mu} \;:\!=\; \int_{E_s} \mathbb{P}_{s,x}\, \mu(dx)$. We also denote by $\mathbb{E}_{s,x}$ and $\mathbb{E}_{s,\mu}$ the expectations associated to $\mathbb{P}_{s,x}$ and $\mathbb{P}_{s,\mu}$ respectively. Finally, the following notation will be used for $\mu \in \mathcal{M}_1(E_s)$, $s \leq t$, and $f \in \mathcal{B}(E_t)$:
\begin{equation*}\mu P_{s,t} f \;:\!=\; \int_{E_s} P_{s,t} f(x)\, \mu(dx) = \mathbb{E}_{s,\mu}[f(X_t)].\end{equation*}
The periodicity of a time-inhomogeneous semigroup is defined as follows. We say that a semigroup $\{(F_t, {\mathcal F}_t)_{t \geq 0},(Q_{s,t})_{s \leq t}\}$ is $\gamma$-periodic (for $\gamma > 0$) if, for any $s \leq t$,
\begin{equation*}(F_{t+\gamma}, {\mathcal F}_{t+\gamma}) = (F_t, {\mathcal F}_t)\;\;\text{ and }\;\;Q_{s+\gamma,t+\gamma} = Q_{s,t}.\end{equation*}
It is now possible to define an asymptotically periodic semigroup.
Definition 1. (Asymptotically periodic semigroups.) A time-inhomogeneous semigroup $\{(E_t, {\mathcal E}_t)_{t \geq 0},(P_{s,t})_{s \leq t}\}$ is said to be asymptotically periodic if (for some $\gamma > 0$ ) there exist a $\gamma$ -periodic semigroup $\{(F_t, {\mathcal F}_t)_{t \geq 0},(Q_{s,t})_{s \leq t}\}$ and two families of functions $(\psi_s)_{s \geq 0}$ and $(\tilde{\psi}_s)_{s \geq 0}$ such that $\tilde{\psi}_{s+\gamma} = \tilde{\psi}_s$ for all $s \geq 0$ , and for any $s \in [0, \gamma)$ , the following hold:
1. $\bigcup_{k=0}^\infty \bigcap_{l \geq k} E_{s+l\gamma} \cap F_s \ne \emptyset$.
2. There exists $x_s \in \bigcup_{k=0}^\infty \bigcap_{l \geq k} E_{s+l\gamma} \cap F_s$ such that, for any $n \in \mathbb{Z}_+$,
(6) \begin{equation} \|\delta_{x_s} P_{s+k \gamma, s+(k+n)\gamma}[\psi_{s+(k+n)\gamma} \times \cdot] - \delta_{x_s} Q_{s, s + n \gamma}[\tilde{\psi}_{s} \times \cdot]\|_{TV} \underset{k \to \infty}{\longrightarrow} 0.\end{equation}
The semigroup $\{(F_t, {\mathcal F}_t)_{t \geq 0},(Q_{s,t})_{s \leq t}\}$ is then called the auxiliary semigroup of $(P_{s,t})_{s \leq t}$ .
When $\psi_s = \tilde{\psi}_s = \mathbb{1}$ for all $s \geq 0$ , we say that the semigroup $(P_{s,t})_{s \leq t}$ is asymptotically periodic in total variation. By extension, we will say that the process $(X_t)_{t \geq 0}$ is asymptotically periodic (in total variation) if the associated semigroup $\{(E_t, {\mathcal E}_t)_{t \geq 0},(P_{s,t})_{s \leq t}\}$ is asymptotically periodic (in total variation).
In what follows, the functions $(\psi_s)_{s \geq 0}$ and $(\tilde{\psi}_s)_{s \in [0,\gamma)}$ will play the role of Lyapunov functions (that is to say, satisfying Assumption 1(ii) below) for the semigroups $(P_{s,t})_{s \leq t}$ and $(Q_{s,t})_{s \leq t}$ , respectively. The introduction of these functions in the definition of asymptotically periodic semigroups will allow us to establish an ergodic theorem for processes satisfying the Lyapunov/minorization conditions stated below.
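For intuition, here is a minimal numerical sketch of Definition 1 in the total-variation case $\psi_s = \tilde{\psi}_s = \mathbb{1}$ (the kernels $P_k$, $Q$ and the perturbation below are hypothetical, not taken from the paper): the kernels of the time-inhomogeneous chain differ from a fixed 1-periodic kernel by an $O(1/k)$ perturbation, so the total variation gap appearing in (6) vanishes.

```python
import numpy as np

# Hypothetical sketch of asymptotic periodicity in total variation:
# time-inhomogeneous kernels P(k) approaching a fixed kernel Q at rate 1/(k+1).
Q = np.array([[0.3, 0.7],
              [0.6, 0.4]])
R = np.array([[ 0.1, -0.1],
              [-0.2,  0.2]])  # perturbation with zero row sums

def P(k):
    # kernel used around time k; rows still sum to 1 and stay nonnegative
    return Q + R / (k + 1)

def tv(mu, nu):
    # total variation distance between two probability vectors
    return 0.5 * np.abs(mu - nu).sum()

dists = [tv(P(k)[0], Q[0]) for k in (1, 10, 100)]
assert dists[0] > dists[1] > dists[2]  # the gap vanishes as k grows
```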
Lyapunov/minorization conditions. The main assumption of Theorem 1, stated further below, is that the asymptotically periodic Markov process satisfies the following.
Assumption 1. There exist $t_1 \geq 0$, $n_0 \in \mathbb{N}$, $c > 0$, $C > 0$, $\theta \in (0,1)$, a family of measurable sets $(K_t)_{t \geq 0}$ such that $K_t \subset E_t$ for all $t \geq 0$, a family of probability measures $\left(\nu_{s}\right)_{s \geq 0}$ on $(K_{s})_{s \geq 0}$, and a family of functions $(\psi_s)_{s \geq 0}$, all lower-bounded by 1, such that the following hold:
(i) For any $s \geq 0$, $x \in K_s$, and $n \geq n_0$,
\begin{equation*}\delta_x P_{s,s+n t_1} \geq c \nu_{s+nt_1}.\end{equation*}
(ii) For any $s \geq 0$,
\begin{equation*}P_{s,s+t_1} \psi_{s+t_1} \leq \theta \psi_s + C \mathbb{1}_{K_s}.\end{equation*}
(iii) For any $s \geq 0$ and $t \in [0,t_1)$,
\begin{equation*}P_{s,s+t} \psi_{s+t} \leq C \psi_s.\end{equation*}
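The three conditions of Assumption 1 can be checked mechanically on a finite-state example (a hypothetical time-homogeneous kernel, so that $P_{s,s+t_1}$ is a single matrix P; the matrix, $\psi$, K, $\theta$, and C below are illustrative choices):

```python
import numpy as np

# Hypothetical finite-state check of Assumption 1: a drift condition
# towards the small set K plus a Doeblin minorization on K.
P = np.array([[0.8, 0.2, 0.0],
              [0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])
psi = np.array([1.0, 5.0, 25.0])   # Lyapunov function, lower-bounded by 1
K = [0, 1]                          # small set
theta, C = 0.9, 1.0

# (ii): P psi <= theta * psi + C * 1_K (here (iii) is trivial, t1 = 1)
drift = P @ psi
assert np.all(drift <= theta * psi + C * np.isin(np.arange(3), K))

# (i): common minorization of the rows indexed by K (n0 = 1)
m = P[K].min(axis=0)      # componentwise lower bound over x in K
c = m.sum()               # minorization constant
nu = m / c                # minorizing probability measure
assert c > 0 and np.all(P[K] >= c * nu)
```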
When a semigroup $(P_{s,t})_{s \leq t}$ satisfies Assumption 1 as stated above, we will say that the functions $(\psi_s)_{s \geq 0}$ are Lyapunov functions for the semigroup $(P_{s,t})_{s \leq t}$. In particular, under (ii) and (iii), it is easy to prove that, for any $s \leq t$,
(7) \begin{equation}P_{s,t} \psi_t \leq C \bigg(1 + \frac{C}{1-\theta}\bigg) \psi_s.\end{equation}
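As a sketch of the argument behind the bound $P_{s,t} \psi_t \leq C (1 + \frac{C}{1-\theta}) \psi_s$ implied by (ii) and (iii): write $t - s = n t_1 + r$ with $r \in [0,t_1)$, iterate (ii) from time $s+r$ (note that (iii) with $t = 0$ forces $C \geq 1$), and then apply (iii) once:

```latex
\begin{align*}
P_{s+r,t}\,\psi_t
  &= P_{s+r,\,s+r+nt_1}\,\psi_{s+r+nt_1}
   \leq \theta^n \psi_{s+r} + C \sum_{k=0}^{n-1} \theta^k
   \leq \psi_{s+r} + \frac{C}{1-\theta}, \\
P_{s,t}\,\psi_t
  &= P_{s,s+r}\,\bigl(P_{s+r,t}\,\psi_t\bigr)
   \leq C\,\psi_s + \frac{C}{1-\theta}
   \leq C\Big(1 + \frac{C}{1-\theta}\Big)\psi_s ,
\end{align*}
```

where the last inequality uses $\psi_s \geq 1$ and $C \geq 1$.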
We remark in particular that Assumption 1 implies an exponential weak ergodicity in $\psi_t$-distance; that is, we have the existence of two constants $C' > 0$ and $\kappa > 0$ such that, for all $s \leq t$ and for all probability measures $\mu_1, \mu_2 \in \mathcal{M}_1(E_s)$,
(8) \begin{equation}\| \mu_1 P_{s,t} - \mu_2 P_{s,t} \|_{\psi_t} \leq C' e^{-\kappa (t-s)} \| \mu_1 - \mu_2 \|_{\psi_s},\end{equation}
where, for a given function $\psi$, $\| \mu - \nu \|_\psi$ is the $\psi$-distance, defined to be
\begin{equation*}\| \mu - \nu \|_\psi \;:\!=\; \sup_{|f| \leq \psi} |\mu(f) - \nu(f)|.\end{equation*}
In particular, when $\psi_t = \mathbb{1}$ for all $t \geq 0$, the $\psi$-distance is the total variation distance. While the weak ergodicity (8) was established in the time-homogeneous setting (see in particular [15]), the proof of [15, Theorem 1.3] can be adapted to a general time-inhomogeneous framework (see for example [6, Subsection 9.5]).
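On a finite space, the supremum defining the $\psi$-distance, $\sup_{|f| \leq \psi} |\mu(f) - \nu(f)|$, is attained at $f = \psi \cdot \mathrm{sign}(\mu - \nu)$, so it reduces to a $\psi$-weighted $\ell^1$ norm; a small check on illustrative measures (not from the paper):

```python
import numpy as np

# psi-distance on a finite space: sup over |f| <= psi of |mu(f) - nu(f)|,
# attained at f = psi * sign(mu - nu), hence a psi-weighted l1 norm.
def psi_dist(mu, nu, psi):
    return float(np.sum(psi * np.abs(mu - nu)))

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.5, 0.3])

# With psi = 1 this is the total variation distance in the
# sup-over-B_1 normalization used in this paper.
d_tv = psi_dist(mu, nu, np.ones(3))              # = 0.3 + 0.2 + 0.1
d_w  = psi_dist(mu, nu, np.array([1., 2., 4.]))  # heavier weight in the tail
```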
The main theorem and proof. The main result of this paper is the following.
Theorem 1. Let $\{ (E_t, {\mathcal E}_t)_{t \geq 0},(P_{s,t})_{s \leq t}, (X_t)_{t \geq 0}, (\mathbb{P}_{s,x})_{s \geq 0, x \in E_s}\}$ be an asymptotically $\gamma$ -periodic time-inhomogeneous Markov process, with $\gamma > 0$ , and denote by $\{(F_t, {\mathcal F}_t)_{t \geq 0},(Q_{s,t})_{s \leq t}\}$ its periodic auxiliary semigroup. Also, denote by $(\psi_s)_{s \geq 0}$ and $(\tilde{\psi}_s)_{s \geq 0}$ the two families of functions as defined in Definition 1. Assume moreover the following:
1. The semigroups $(P_{s,t})_{s \leq t}$ and $(Q_{s,t})_{s \leq t}$ satisfy Assumption 1, with $(\psi_s)_{s \geq 0}$ and $(\tilde{\psi}_s)_{s \geq 0}$ respectively as Lyapunov functions.
2. For any $s \in [0,\gamma)$, $(\psi_{s+n \gamma})_{n \in \mathbb{Z}_+}$ converges pointwise to $\tilde{\psi}_s$.
Then, for any $\mu \in \mathcal{M}_1(E_0)$ such that $\mu(\psi_0) < + \infty$ ,
where $\beta_\gamma \in \mathcal{M}_1(F_0)$ is the unique invariant probability measure of the skeleton semigroup $(Q_{0,n \gamma})_{n \in \mathbb{Z}_+}$ satisfying $\beta_\gamma(\tilde{\psi}_0) < + \infty$ . Moreover, for any $f \in \mathcal{B}(E)$ we have the following:
1. For any $\mu \in \mathcal{M}_1(E_0)$,
(10) \begin{equation} \mathbb{E}_{0,\mu}\!\left[\Bigg|\frac{1}{t} \int_0^t f(X_s)ds - \frac{1}{\gamma} \int_0^\gamma \beta_\gamma Q_{0,s}f ds\Bigg|^2\right] \underset{t \to \infty}{\longrightarrow} 0.\end{equation}
2. If moreover $\mu(\psi_0) < + \infty$, then
(11) \begin{equation}\frac{1}{t} \int_0^t f(X_s) ds \underset{t \to \infty}{\longrightarrow} \frac{1}{\gamma} \int_0^\gamma \beta_\gamma Q_{0,s}f ds,\;\;\;\;\mathbb{P}_{0,\mu}\textit{-almost surely.}\end{equation}
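A Monte Carlo sketch of the convergence (11) on a toy chain (all kernels hypothetical, discrete time, $\gamma = 2$): a 2-periodic pair $(Q_0, Q_1)$ is perturbed by an $O(1/t)$ term vanishing in time, and the time average of $f = \mathbb{1}_{\{0\}}$ is compared with $\frac{1}{\gamma} \int_0^\gamma \beta_\gamma Q_{0,s} f\, ds$, which here reads $\frac{1}{2}(\beta(f) + \beta Q_0(f))$ with $\beta$ the invariant measure of the skeleton kernel $Q_0 Q_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
Q0 = np.array([[0.9, 0.1], [0.2, 0.8]])
Q1 = np.array([[0.5, 0.5], [0.4, 0.6]])
R = np.array([[0.1, -0.1], [-0.1, 0.1]])  # zero-row-sum perturbation

# beta: invariant measure of the skeleton kernel Q0 Q1 (power iteration).
beta = np.array([0.5, 0.5])
for _ in range(200):
    beta = beta @ (Q0 @ Q1)
limit = 0.5 * (beta[0] + (beta @ Q0)[0])  # (1/2)(beta(f) + beta Q0(f))

# Simulate the asymptotically 2-periodic chain and average f = 1_{state 0}.
T, x, hits = 100_000, 0, 0
for t in range(T):
    K = (Q0 if t % 2 == 0 else Q1) + R / (t + 2)  # kernel at step t
    x = rng.choice(2, p=K[x])
    hits += (x == 0)
avg = hits / T  # empirical time average, close to `limit` for large T
```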
Remark 1. When Assumption 1 holds for $K_s = E_s$ for any s, the condition (i) in Assumption 1 implies the Doeblin condition.
Doeblin condition. There exist $t_0 \geq 0$, $c > 0$, and a family of probability measures $(\nu_t)_{t \geq 0}$ on $(E_t)_{t \geq 0}$ such that, for any $s \geq 0$ and $x \in E_s$,
(12) \begin{equation}\delta_x P_{s,s+t_0} \geq c \nu_{s+t_0}.\end{equation}
In fact, if we assume that Assumption 1(i) holds for $K_s = E_s$ , the Doeblin condition holds if we set $t_0 \;:\!=\; n_0 t_1$ . Conversely, the Doeblin condition implies the conditions (i), (ii), and (iii) with $K_s = E_s$ and $\psi_s = \mathbb{1}_{E_s}$ for all $s \geq 0$ , so that these conditions are equivalent. In fact, (ii) and (iii) straightforwardly hold true for $(K_s)_{s \geq 0} = (E_s)_{s \geq 0}$ , $(\psi_s)_{s \geq 0} = (\mathbb{1}_{E_s})_{s \geq 0}$ , $C = 1$ , any $\theta \in (0,1)$ , and any $t_1 \geq 0$ . If we set $t_1 = t_0$ and $n_0 = 1$ , the Doeblin condition implies that, for any $s \in [0,t_1)$ ,
Integrating this inequality over $\mu \in \mathcal{M}_1(E_s)$ , one obtains
Then, by the Markov property, for all $s \in [0,t_1)$ , $x \in E_s$ , and $n \in \mathbb{N}$ , we have
which is (i).
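The propagation step of this argument can be checked numerically on a toy kernel (hypothetical matrix): a one-step Doeblin lower bound passes to every power of P by the Markov property, since $\delta_x P^n$ is a convex combination of the rows of P.

```python
import numpy as np

# Numeric sketch of Remark 1 on a hypothetical kernel: the one-step bound
# delta_x P >= c * nu propagates to delta_x P^n >= c * nu for all n >= 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])
m = P.min(axis=0)            # columnwise minimum over starting points x
c, nu = m.sum(), m / m.sum() # minorization constant and measure

Pn = np.eye(3)
for n in range(1, 6):
    Pn = Pn @ P
    # delta_x P^n is a mixture of rows of P, each bounded below by c * nu
    assert np.all(Pn >= c * nu)
```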
Theorem 1 then implies the following corollary.
Corollary 1. Let $(X_t)_{t \geq 0}$ be asymptotically $\gamma$ -periodic in total variation distance. If $(X_t)_{t \geq 0}$ and its auxiliary semigroup satisfy a Doeblin condition, then the convergence (10) is improved to
Moreover, the almost sure convergence (11) holds for any initial measure $\mu$ .
Remark 2. We also note that, if the convergence (6) holds for all
then in particular (6) holds, and the pointwise convergence of $(\psi_{s+n \gamma})_{n \in \mathbb{Z}_+}$ to $\tilde{\psi}_s$ follows (by taking $n = 0$ in (6)).
Proof of Theorem 1. The proof is divided into five steps.
First step. Since the auxiliary semigroup $(Q_{s,t})_{s \leq t}$ satisfies Assumption 1 with $(\tilde{\psi}_s)_{s \geq 0}$ as Lyapunov functions, the time-homogeneous semigroup $(Q_{0,n\gamma})_{n \in \mathbb{Z}_+}$ satisfies Assumptions 1 and 2 of [15], which we now recall (using our notation).
Assumption 2. ([15, Assumption 1].) There exist $V \;:\; F_0 \to [0,+\infty)$, $n_1 \in \mathbb{N}$, and constants $K \geq 0$ and $\kappa \in (0,1)$ such that
\begin{equation*}Q_{0,n_1 \gamma} V \leq \kappa V + K.\end{equation*}
Assumption 3. ([15, Assumption 2].) There exist a constant $\alpha \in (0,1)$ and a probability measure $\nu$ such that
\begin{equation*}\delta_x Q_{0,n_1 \gamma} \geq \alpha \nu\;\;\;\;\text{for all } x \in \mathcal{C}_R,\end{equation*}
with $\mathcal{C}_R \;:\!=\; \{x \in F_0 \;:\; V(x) \leq R \}$ for some $R > 2 K/(1-\kappa)$ , where $n_1$ , K, and $\kappa$ are the constants from Assumption 2.
In fact, since $(Q_{s,t})_{s \leq t}$ satisfies (ii) and (iii) of Assumption 1, there exist $C > 0$, $\theta \in (0,1)$, $t_1 \geq 0$, and $(K_s)_{s \geq 0}$ such that
(13) \begin{equation}Q_{s,s+t_1} \tilde{\psi}_{s+t_1} \leq \theta \tilde{\psi}_s + C \mathbb{1}_{K_s}\end{equation}
and
(14) \begin{equation}Q_{s,s+t} \tilde{\psi}_{s+t} \leq C \tilde{\psi}_s\;\;\;\;\text{for all } t \in [0,t_1).\end{equation}
We let $n_2 \in \mathbb{N}$ be such that $\theta^{n_2} C \bigl(1 + \frac{C}{1-\theta}\bigl) < 1$. By (13) and recalling that $\tilde{\psi}_t = \tilde{\psi}_{t+\gamma}$ for all $t \geq 0$, one has for any $s \geq 0$ and $n \in \mathbb{N}$,
\begin{equation*}Q_{s,s+n t_1} \tilde{\psi}_{s+n t_1} \leq \theta^n \tilde{\psi}_s + \frac{C}{1-\theta}.\end{equation*}
Thus, for all $n_1 \geq \lceil \frac{n_2t_1}{\gamma} \rceil$ ,
where we successively used the semigroup property of $(Q_{s,t})_{s \leq t}$ , (14), and (7) applied to $(Q_{s,t})_{s \leq t}$ . Hence one has Assumption 2 by setting $V = \tilde{\psi}_0$ , $\kappa \;:\!=\; \theta^{n_2}C\bigl(1 + \frac{C}{1-\theta}\bigl)$ , and $K \;:\!=\; \frac{C}{1 - \theta}$ .
We now prove Assumption 3. To this end, we introduce a Markov process $(Y_t)_{t \geq 0}$ and a family of probability measures $(\hat{\mathbb{P}}_{s,x})_{s \geq 0, x \in F_s}$ such that
In what follows, for all $s \geq 0$ and $x \in F_s$ , we will use the notation $\hat{\mathbb{E}}_{s,x}$ for the expectation associated to $\hat{\mathbb{P}}_{s,x}$ . Moreover, we define
Then, using (13) recursively, for all $k \in \mathbb{N}$ , $R > 0$ , and $x \in \mathcal{C}_R$ (recalling that $\mathcal{C}_R$ is defined in the statement of Assumption 3), we have
Since $\tilde{\psi}_{kt_1} \geq 1$ for all $k \in \mathbb{Z}_+$ , we have that for all $x \in \mathcal{C}_R$ , for all $k \in \mathbb{Z}_+$ ,
In particular, there exists $k_0 \geq n_0$ such that, for all $k \geq k_0 - n_0$ ,
Hence, for all $x \in \mathcal{C}_R$ ,
Hence, for all $n_1 \geq \bigl\lceil \frac{k_0t_1}{\gamma} \bigl\rceil$ , for all $x \in \mathcal{C}_R$ ,
Thus, Assumption 3 is satisfied if we take $n_1 \;:\!=\; \bigl\lceil \frac{n_2 t_1}{\gamma}\bigl\rceil \lor \bigl\lceil \frac{k_0t_1}{\gamma} \bigl\rceil$ , $\alpha \;:\!=\; \frac{c}{2}$ , and $\nu(\cdot) \;:\!=\; \nu_{k_0t_1} Q_{k_0t_1,n_1\gamma}$ .
Then, by [15, Theorem 1.2], Assumptions 2 and 3 imply that $Q_{0,n_1 \gamma}$ admits a unique invariant probability measure $\beta_\gamma$. Furthermore, there exist constants $C > 0$ and $\delta \in (0,1)$ such that, for all $\mu \in \mathcal{M}_1(F_0)$,
Since $\beta_\gamma$ is the unique invariant probability measure of $Q_{0,n_1 \gamma}$ , and noting that $\beta_\gamma Q_{0,\gamma}$ is invariant for $Q_{0,n_1 \gamma}$ , we deduce that $\beta_\gamma$ is the unique invariant probability measure for $Q_{0,\gamma}$ , and by (15), for all $\mu$ such that $\mu(\tilde{\psi}_0) < + \infty$ ,
Now, for any $s \geq 0$ , note that $\delta_x Q_{s,\lceil \frac{s}{\gamma} \rceil \gamma} \tilde{\psi}_0 < + \infty$ for all $x \in F_s$ (this is a consequence of (7) applied to the semigroup $(Q_{s,t})_{s \leq t}$ ), and therefore, taking $\mu = \delta_x Q_{s,\lceil \frac{s}{\gamma} \rceil \gamma}$ in the above convergence,
for all $x \in F_s$ . Hence, since $Q_{n\gamma, n \gamma +s}\tilde{\psi}_s \leq C \bigl(1 + \frac{C}{1-\theta}\bigl) \tilde{\psi}_{n \gamma}$ by (7), we conclude from the above convergence that
Moreover, $\beta_\gamma(\tilde{\psi}_0) < + \infty$ .
Second step. The first part of this step (up to the equality (20)) is inspired by the proof of [1, Theorem 3.11].
We fix $s \in [0,\gamma]$ . Without loss of generality, we assume that $\bigcap_{l \geq 0} E_{s+l \gamma} \cap F_s \ne \emptyset$ . Then, by Definition 1, there exists $x_s \in \bigcap_{l \geq 0} E_{s+l \gamma} \cap F_s$ such that for any $n \geq 0$ ,
which implies by (16) that
Then, by the Markov property, (8), and (7), one obtains that, for any $k,n \in \mathbb{N}$ and $x \in \bigcap_{l \geq 0} E_{s+l \gamma}$ ,
where $C'' \;:\!=\; C' \bigl(C \bigl(1 + \frac{C}{1-\theta}\bigl) \lor 1\bigl)$ . Then, for any $k,n \in \mathbb{N}$ ,
which by (17) and the pointwise convergence of $(\psi_{s+k \gamma})_{k \in \mathbb{Z}_+}$ implies that
The weak ergodicity (8) implies therefore that the previous convergence actually holds for any initial distribution $\mu \in \mathcal{M}_1(E_0)$ satisfying $\mu(\psi_0) < + \infty$ , so that
Since
for all $\mu \in \mathcal{M}_1(E_0)$ , $s \geq 0$ , and $n \in \mathbb{Z}_+$ , (21) and Lebesgue’s dominated convergence theorem imply that
which implies that
By Cesaro’s lemma, this allows us to conclude that, for any $\mu \in \mathcal{M}_1(E_0)$ such that $\mu(\psi_0) < + \infty$ ,
which concludes the proof of (9) .
Third step. In the same manner, we now prove that, for any $\mu \in \mathcal{M}_1(E_0)$ such that $\mu(\psi_0) < + \infty$ ,
In fact, for any function f bounded by 1 and $\mu \in \mathcal{M}_1(E_0)$ such that $\mu(\psi_0) < + \infty$ ,
We now remark that, since $\psi_{s+n \gamma} \geq 1$ for any s and $n \in \mathbb{Z}_+$ , one has that
Since $(\psi_{s + n \gamma})_{n \in \mathbb{Z}_+}$ converges pointwise towards $\tilde{\psi}_s$ and $\beta_\gamma Q_{0,s} \tilde{\psi}_s < + \infty$ , Lebesgue’s dominated convergence theorem implies
Then, using (21), one has
which allows us to conclude (22), using the same argument as in the first step.
Fourth step. In order to show the $\mathbb{L}^2$ -ergodic theorem, we let $f \in \mathcal{B}(E)$ . For any $x \in E_0$ and $t \geq 0$ ,
where the Markov property was used in the last line. By (8) (weak ergodicity) and (7), one obtains for any $s \leq t$
where C ′ was defined in the first part. As a result, for any $x \in E_0$ and $t \geq 0$ ,
Then, by (9), there exists a constant $\tilde{C} > 0$ such that, for any $x \in E_0$ , when $t \to \infty$ ,
Since $f \in \mathcal{B}(E)$ and by definition of the total variation distance, (22) implies that, for all $x \in E_0$ ,
Then, using (22), one deduces that for any $x \in E_0$ and bounded function f,
The convergence for any probability measure $\mu \in \mathcal{M}_1(E_0)$ comes from Lebesgue’s dominated convergence theorem.
Fifth step. We now fix nonnegative $f \in \mathcal{B}(E)$, and $\mu \in \mathcal{M}_1(E_0)$ satisfying $\mu(\psi_0) < + \infty$. The following proof is inspired by the proof of [26, Theorem 12].
Since $\mu(\psi_0) < + \infty$ , the inequality (24) implies that there exists a finite constant $C_{f,\mu} \in (0,\infty)$ such that, for t large enough,
Then, for n large enough,
Then, by Chebyshev’s inequality and the Borel–Cantelli lemma, this last inequality implies that
One thereby obtains by the convergence (22) that
Since the nonnegativity of f is assumed, this implies that for any $t > 0$ we have
These inequalities and (25) then give that
In order to conclude that the result holds for any bounded measurable function f, it is enough to decompose $f = f_+ - f_-$ with $f_+ \;:\!=\; f \lor 0$ and $f_- = (\!-\!f) \lor 0$ and apply the above convergence to $f_+$ and $f_-$ . This concludes the proof of Theorem 1.
Proof of Corollary 1. We remark as in the previous proof that, if $\|f\|_\infty \leq 1$ and $\psi_s = \mathbb{1}$, an upper bound for the inequality (24) can be obtained, which does not depend on f and x. Likewise, the convergence (21) holds uniformly in the initial measure thanks to (23).
Remark 3. The proof of Theorem 1, as written above, does not allow us to deal with semigroups satisfying a Doeblin condition with time-dependent constant $c_s$, that is, such that there exist $t_0 \geq 0$ and a family of probability measures $(\nu_t)_{t \geq 0}$ on $(E_t)_{t \geq 0}$ such that, for all $s \geq 0$ and $x \in E_s$,
\begin{equation*}\delta_x P_{s,s+t_0} \geq c_s \nu_{s+t_0}.\end{equation*}
In fact, under the condition written above, we can show (see for example the proof of the formula (2.7) of [9, Theorem 2.1]) that, for all $s \leq t$ and $\mu_1, \mu_2 \in \mathcal{M}_1(E_s)$,
Hence, by this last inequality with $\mu_1 = \delta_x P_{s,s+k\gamma}$ , $\mu_2 = \delta_x$ , replacing s by $s+k\gamma$ and t by $s+(k+n)\gamma$ , one obtains
which replaces the inequality (18) in the proof of Theorem 1. Plugging this last inequality into the formula (19), one obtains
Hence, we see that we cannot conclude a similar result when $c_s \longrightarrow 0$ as $s \to + \infty$ , since, for n fixed,
4. Application to quasi-stationarity with moving boundaries
In this section, $(X_t)_{t \geq 0}$ is assumed to be a time-homogeneous Markov process. We consider a family of measurable subsets $(A_t)_{t \geq 0}$ of E, and define the hitting time
\begin{equation*}\tau_A \;:\!=\; \inf\{t \geq 0 \;:\; X_t \in A_t\}.\end{equation*}
For all $s \leq t$, denote by ${\mathcal F}_{s,t}$ the $\sigma$-field generated by the family $(X_u)_{s \leq u \leq t}$, with ${\mathcal F}_t \;:\!=\; {\mathcal F}_{0,t}$. Assume that $\tau_A$ is a stopping time with respect to the filtration $({\mathcal F}_{t})_{t \geq 0}$. Assume also that for any $x \not \in A_0$,
\begin{equation*}\mathbb{P}_{0,x}[\tau_A > t] > 0\;\;\;\;\text{for all } t \geq 0.\end{equation*}
We will be interested in a notion of quasi-stationarity with moving boundaries, which studies the asymptotic behavior of the Markov process $(X_t)_{t \geq 0}$ conditioned not to hit $(A_t)_{t \geq 0}$ up to the time t. For non-moving boundaries ($A_t = A_0$ for any $t \geq 0$), the quasi-limiting distribution is defined as a probability measure $\alpha$ such that, for at least one initial measure $\mu$ and for all measurable subsets $\mathcal{A} \subset E$,
\begin{equation*}\mathbb{P}_{0,\mu}[X_t \in \mathcal{A} \mid \tau_A > t] \underset{t \to \infty}{\longrightarrow} \alpha(\mathcal{A}).\end{equation*}
Such a definition is equivalent (still in the non-moving framework) to the notion of quasi-stationary distribution, defined as a probability measure $\alpha$ such that, for any $t \geq 0$,
(26) \begin{equation}\mathbb{P}_{0,\alpha}[X_t \in \cdot \mid \tau_A > t] = \alpha.\end{equation}
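On a finite state space this fixed-point property can be computed directly (the sub-Markovian kernel P below is hypothetical; its rows sum to less than 1 because of the mass lost to the absorbing set): the quasi-stationary distribution is the normalized left Perron eigenvector of P, and conditioning one step on survival leaves it unchanged.

```python
import numpy as np

# Hypothetical sub-Markovian kernel on {0,1} (row sums < 1: absorption).
# The QSD alpha satisfies alpha P = lambda * alpha for the Perron
# eigenvalue lambda, i.e. conditioning on survival leaves alpha fixed.
P = np.array([[0.6, 0.2],
              [0.3, 0.5]])
w, v = np.linalg.eig(P.T)
i = np.argmax(np.real(w))
alpha = np.real(v[:, i])
alpha = alpha / alpha.sum()

cond = (alpha @ P) / (alpha @ P).sum()  # law after one step, given survival
assert np.allclose(cond, alpha)
```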
While quasi-limiting and quasi-stationary distributions are in general well-defined for time-homogeneous Markov processes and non-moving boundaries (see [11, 23] for a general overview of the theory of quasi-stationarity), these notions are not well-defined for time-inhomogeneous Markov processes or moving boundaries, for which they are moreover no longer equivalent. In particular, under reasonable assumptions on irreducibility, it was shown in [24] that the notion of quasi-stationary distribution as defined by (26) is not well-defined for time-homogeneous Markov processes absorbed by moving boundaries.
Another asymptotic notion to study is the quasi-ergodic distribution, related to a conditional version of the ergodic theorem and usually defined as follows.
Definition 2. A probability measure $\beta$ is a quasi-ergodic distribution if, for some initial measure $\mu \in \mathcal{M}_1(E \setminus A_0)$ and for any bounded continuous function f,
\begin{equation*}\frac{1}{t} \int_0^t \mathbb{E}_{0,\mu}[f(X_s) \mid \tau_A > t]\, ds \underset{t \to \infty}{\longrightarrow} \int_E f\, d\beta.\end{equation*}
In the time-homogeneous setting (in particular for non-moving boundaries), this notion has been extensively studied (see for example [2, 8, 10, 12, 13, 16–18, 24]). In the ‘moving boundaries’ framework, the existence of quasi-ergodic distributions has been dealt with in [24] for Markov chains on finite state spaces absorbed by periodic boundaries, and in [25] for processes satisfying a Champagnat–Villemonais condition (see Assumption (A′) below) absorbed by converging or periodic boundaries. In this last paper, the existence of the quasi-ergodic distribution is dealt with through the following inequality (see [25, Theorem 1]), which holds for any initial state x, $s \leq t$, and for some constants $C, \gamma > 0$ independent of x, s, and t:
where the family of probability measures $(\mathbb{Q}_{s,x})_{s \geq 0, x \in E_s}$ is defined by
\begin{equation*}\mathbb{Q}_{s,x}[\Gamma] \;:\!=\; \lim_{T \to \infty} \mathbb{P}_{s,x}[\Gamma \mid \tau_A > T],\;\;\;\;\Gamma \in {\mathcal F}_{s,t}.\end{equation*}
Moreover, by [9, Proposition 3.1], there exists a family of positive bounded functions $(\eta_t)_{t \geq 0}$ defined in such a way that, for all $s \leq t$ and $x \in E_s$,
Then we can show (this is actually shown in [9]) that
and that, for all $\mu \in \mathcal{M}_1(E_0)$ ,
where
By the triangle inequality, one has
In particular, the inequality (27) implies that there exists a quasi-ergodic distribution $\beta$ for the process $(X_t)_{t \geq 0}$ absorbed by $(A_t)_{t \geq 0}$ if and only if there exists a probability measure $\mu \in \mathcal{M}_1(E_0)$ such that $\frac{1}{t} \int_0^t \mathbb{Q}_{0,\eta_0 * \mu}[X_s \in \cdot]ds$ converges weakly to $\beta$ when t goes to infinity. In other words, under Assumption (A′), the existence of a quasi-ergodic distribution for the absorbed process is equivalent to the law of large numbers for its Q-process.
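In finite state space the Q-process can be sketched explicitly as a Doob h-transform (hypothetical sub-Markovian kernel P; here $\eta$ is its right Perron eigenvector, playing the role of the functions $\eta_t$ above, and $\lambda$ the survival eigenvalue):

```python
import numpy as np

# Sketch of the Q-process as a Doob h-transform of a hypothetical
# sub-Markovian kernel P: with eta the right Perron eigenvector and
# lambda the Perron eigenvalue, Q(x,y) = P(x,y) eta(y) / (lambda eta(x))
# is a genuine (conservative) Markov kernel.
P = np.array([[0.6, 0.2],
              [0.3, 0.5]])
w, v = np.linalg.eig(P)
i = np.argmax(np.real(w))
lam = np.real(w[i])
eta = np.real(v[:, i])
eta = eta / eta.sum()

Qk = P * eta[None, :] / (lam * eta[:, None])
assert np.allclose(Qk.sum(axis=1), 1.0)  # rows of the h-transform sum to 1
```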
We now state Assumption (A′).
Assumption 4. There exists a family of probability measures $(\nu_t)_{t \geq 0}$ , defined on $E \setminus A_t$ for each t, such that the following hold:
(A′1) There exist $t_0 \geq 0$ and $c_1 > 0$ such that, for all $s \geq 0$ and $x \in E \setminus A_s$,
\begin{equation*}\mathbb{P}_{s,x}[X_{s+t_0} \in \cdot \mid \tau_A > s + t_0] \geq c_1 \nu_{s+t_0}.\end{equation*}
(A′2) There exists $c_2 > 0$ such that, for all $s \leq t$ and $x \in E \setminus A_s$,
\begin{equation*}\mathbb{P}_{s,\nu_s}[\tau_A > t] \geq c_2\, \mathbb{P}_{s,x}[\tau_A > t].\end{equation*}
In what follows, we say that the pair $\{(X_t)_{t \geq 0}, (A_t)_{t \geq 0}\}$ satisfies Assumption (A′) when the assumption holds for the Markov process $(X_t)_{t \geq 0}$ considered as absorbed by the moving boundary $(A_t)_{t \geq 0}$ .
The condition (A′1) is a conditional version of the Doeblin condition (12), and (A′2) is a Harnack-like inequality on the survival probabilities, necessary to deal with the conditioning. They are equivalent to the set of conditions presented in [1, Definition 2.2], when the non-conservative semigroup is sub-Markovian. In the time-homogeneous framework, we recover the Champagnat–Villemonais condition defined in [5] (see Assumption (A)), shown to be equivalent to the exponential uniform convergence to quasi-stationarity in total variation.
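A numeric sketch of (A′1) on the same kind of toy sub-Markovian kernel (hypothetical matrix, time-homogeneous with $t_0 = 1$): the conditioned one-step laws admit a common component $c_1 \nu$.

```python
import numpy as np

# Numeric sketch of (A'1) for a hypothetical sub-Markovian kernel:
# the *conditioned* one-step laws admit a common lower bound c1 * nu.
P = np.array([[0.6, 0.2],
              [0.3, 0.5]])
cond = P / P.sum(axis=1, keepdims=True)  # law of X_1 given survival
m = cond.min(axis=0)                     # common componentwise lower bound
c1, nu = m.sum(), m / m.sum()
assert c1 > 0 and np.all(cond >= c1 * nu)
```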
In [25], the existence of a unique quasi-ergodic distribution is proved only for converging or periodic boundaries. However, one can expect existence (and uniqueness) to hold for other kinds of boundary motion. Hence, the aim of this section is to extend the results on the existence of quasi-ergodic distributions obtained in [25] to Markov processes absorbed by asymptotically periodic moving boundaries.
Now let us state the following theorem.
Theorem 2. Assume that there exists a $\gamma$ -periodic sequence of subsets $(B_t)_{t \geq 0}$ such that, for any $s \in [0,\gamma)$ ,
and there exists $x_s \in E_s$ such that, for any $n \leq N$ ,
Assume also that Assumption (A′) is satisfied by the pairs $\{(X_t)_{t \geq 0}, (A_t)_{t \geq 0}\}$ and $\{(X_t)_{t \geq 0}, (B_t)_{t \geq 0}\}$ .
Then there exists a probability measure $\beta \in \mathcal{M}_1(E)$ such that
Remark 4. Observe that the condition (28) implies that, for any $n \in \mathbb{Z}_+$ ,
Under the additional condition $B_t \subset A_t$ for all $t \geq 0$ , these two conditions are equivalent, since for all $n \leq N$ ,
where we used the periodicity of $(B_t)_{t \geq 0}$ , writing
for all $k \in \mathbb{Z}_+$ . This implies the following corollary.
Corollary 2. Assume that there exists a $\gamma$ -periodic sequence of subsets $(B_t)_{t \geq 0}$ , with $B_t \subset A_t$ for all $t \geq 0$ , such that, for any $s \in [0,\gamma)$ , there exists $x_s \in E'_{\!\!s}$ such that, for any $n \leq N$ ,
Assume also that Assumption (A′) is satisfied by $\{(X_t)_{t \geq 0}, (A_t)_{t \geq 0}\}$ and $\{(X_t)_{t \geq 0}, (B_t)_{t \geq 0}\}$ .
Then there exists $\beta \in \mathcal{M}_1(E)$ such that (29) holds.
Proof of Theorem 2. Since $\{(X_t)_{t \geq 0}, (B_t)_{t \geq 0}\}$ satisfies Assumption (A′) and $(B_t)_{t \geq 0}$ is a periodic boundary, we already know by [25, Theorem 2] that, for any initial distribution $\mu$, $t \mapsto \frac{1}{t} \int_0^t \mathbb{P}_{0,\mu}[X_s \in \cdot | \tau_B > t]ds$ converges weakly to a quasi-ergodic distribution $\beta$.
The main idea of this proof is to apply Corollary 1. Since $\{(X_t)_{t \geq 0}, (A_t)_{t \geq 0}\}$ and $\{(X_t)_{t \geq 0}, (B_t)_{t \geq 0}\}$ satisfy Assumption (A′), [25, Theorem 1] implies that there exist two families of probability measures $\big(\mathbb{Q}^A_{s,x}\big)_{s \geq 0, x \in E \setminus A_s}$ and $\big(\mathbb{Q}^B_{s,x}\big)_{s \geq 0, x \in E \setminus B_s}$ such that, for any $s \leq t$, $x \in E \setminus A_s$, $y \in E \setminus B_s$, and $\Gamma \in {\mathcal F}_{s,t}$,
In particular, the quasi-ergodic distribution $\beta$ is the limit of $t \mapsto \frac{1}{t} \int_0^t \mathbb{Q}^B_{0, \mu}[X_s \in \cdot]ds$ as t goes to infinity (see [Reference Oçafrain25, Theorem 5]). Also, by [Reference Oçafrain25, Theorem 1], there exist constants $C > 0$ and $\kappa > 0$ such that, for any $s \leq t \leq T$ , for any $x \in E \setminus A_{s}$ ,
and for any $x \in E \setminus B_s$ ,
Moreover, for any $s \leq t \leq T$ and $x \in E'_{\!\!s}$ ,
since
Then we obtain, for any $s \leq t \leq T$ and $x \in E'_{\!\!s}$ ,
The condition (28) implies the existence of $x_s \in E_s$ such that, for any $n \leq N$ , for all $k \in \mathbb{Z}_+$ ,
which implies by (31) that, for any $n \leq N$ ,
Now, letting $N \to \infty$ , for any $n \in \mathbb{Z}_+$ we have
In other words, the semigroup $\big(Q^A_{s,t}\big)_{s \leq t}$ defined by
is asymptotically periodic (according to Definition 1, with $\psi_s = \tilde{\psi}_s = 1$ for all $s \geq 0$ ), associated to the auxiliary semigroup $\big(Q^B_{s,t}\big)_{s \leq t}$ defined by
Moreover, since Assumption (A′) is satisfied for $\{(X_t)_{t \geq 0}, (A_t)_{t \geq 0}\}$ and $\{(X_t)_{t \geq 0}, (B_t)_{t \geq 0}\}$ , the Doeblin condition holds for these two Q-processes. As a matter of fact, by the Markov property, for all $s \leq t \leq T$ and $x \in E \setminus A_s$ ,
where, for all $s \leq t$ and $\mu \in \mathcal{M}_1(E_s)$ , $\phi_{t,s}(\mu) \;:\!=\; \mathbb{P}_{s,\mu}(X_t \in \cdot | \tau_A > t)$ . By (A′1), for any $s \geq 0$ , $T \geq s+t_0$ , $x \in E \setminus A_s$ , and measurable set $\mathcal{A}$ ,
that is, by (32),
Letting $T \to \infty$ in this last inequality and using [Reference Champagnat and Villemonais9, Proposition 3.1], for any $s \geq 0$ , $x \in E \setminus A_s$ , and measurable set $\mathcal{A}$ ,
The measure
is then a positive measure whose mass is bounded below by $c_2$ , by (A′2), since for all $s \geq 0$ and $T \geq s + t_0$ ,
This proves a Doeblin condition for the semigroup $\big(Q_{s,t}^A\big)_{s \leq t}$ . The same reasoning also applies to prove a Doeblin condition for the semigroup $\big(Q_{s,t}^B\big)_{s \leq t}$ . Then, using (27) followed by Corollary 1, we have
where the limits refer to convergence in total variation and hold uniformly in the initial measure.
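Schematically, the Doeblin condition established above for the conditioned semigroups can be summarized as follows (this is a sketch of the form of the bound, with generic constants, not a restatement of the exact inequality):

```latex
% Doeblin-type minorization for the Q-process semigroup (schematic):
% there exist t_0 > 0, c > 0 and probability measures (\nu_s)_{s \geq 0} such that
Q^A_{s,\,s+t_0}(x,\cdot) \;\geq\; c\,\nu_s(\cdot)
\qquad \text{for all } s \geq 0 \text{ and } x \in E \setminus A_s,
```

and similarly for $\big(Q^B_{s,t}\big)_{s \leq t}$.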
For any $\mu \in \mathcal{M}_1(E \setminus A_0)$ , $f \in \mathcal{B}_1(E)$ , and $t \geq 0$ ,
Then, by [Reference Oçafrain25, Theorem 1], for any $s \leq u \leq t$ , for any $\mu \in \mathcal{M}_1(E \setminus A_0)$ and $f \in \mathcal{B}(E)$ ,
where the expectation $\mathbb{E}^{\mathbb{Q}^A}_{0,\eta_0*\mu}$ is associated to the probability measure $\mathbb{Q}_{0,\eta_0*\mu}^A$ . Hence, for any $\mu \in \mathcal{M}_1(E \setminus A_0)$ , $f \in \mathcal{B}_1(E)$ , and $t > 0$ ,
Moreover, since $\big(Q^A_{s,t}\big)_{s \leq t}$ is asymptotically periodic in total variation and, like $\big(Q^B_{s,t}\big)_{s \leq t}$ , satisfies the Doeblin condition, Corollary 1 implies that
Then
Remark 5. It seems that Assumption (A′) can be weakened to a conditional version of Assumption 1. In particular, such conditions can be derived from Assumption (F) in [Reference Champagnat and Villemonais6], as will be shown in the paper [Reference Champagnat, Oçafrain and Villemonais4], currently in preparation.
5. Examples
5.1. Asymptotically periodic Ornstein–Uhlenbeck processes
Let $(X_t)_{t \geq 0}$ be a time-inhomogeneous diffusion process on $\mathbb{R}$ satisfying the stochastic differential equation
where $(W_t)_{t \geq 0}$ is a one-dimensional Brownian motion and $\lambda \;:\; [0, \infty) \to [0, \infty)$ is a function such that
and such that there exists $\gamma > 0$ such that
By Itô’s lemma, for any $s \leq t$ ,
In particular, denoting by $(P_{s,t})_{s \leq t}$ the semigroup associated to $(X_t)_{t \geq 0}$ , for any $f \in \mathcal{B}(\mathbb{R})$ , $t \geq 0$ , and $x \in \mathbb{R}$ ,
where $\mathcal{N}(0,1)$ denotes a standard Gaussian variable.
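To make the Itô computation concrete, assume (this precise form of the SDE is an assumption of this sketch) that the dynamics read $dX_t = -\lambda(t)X_t\,dt + dW_t$; then the variation-of-constants formula gives, for $s \leq t$,

```latex
X_t = X_s\, e^{-\int_s^t \lambda(u)\,du} + \int_s^t e^{-\int_u^t \lambda(v)\,dv}\, dW_u ,
\qquad
\sigma_{s,t}^2 \;:\!=\; \int_s^t e^{-2\int_u^t \lambda(v)\,dv}\, du ,
```

so that, given $X_s = x$, the law of $X_t$ is Gaussian with mean $x e^{-\int_s^t \lambda(u)du}$ and variance $\sigma_{s,t}^2$; this is the Gaussian representation of $P_{s,t}$ in terms of $\mathcal{N}(0,1)$.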
Theorem 3. Assume that there exists a $\gamma$ -periodic function g, bounded on $\mathbb{R}$ , such that $\lambda \sim_{t \to \infty} g$ . Then the assumptions of Theorem 1 hold.
Proof. In our case, the auxiliary semigroup $(Q_{s,t})_{s \leq t}$ of Definition 1 will be defined as follows: for any $f \in \mathcal{B}(\mathbb{R})$ , $t \geq 0$ , and $x \in \mathbb{R}$ ,
In particular, the semigroup $(Q_{s,t})_{s \leq t}$ is associated to the process $(Y_t)_{t \geq 0}$ following
We first remark that the function $\psi \;:\; x \mapsto 1 + x^2$ is a Lyapunov function for $(P_{s,t})_{s \leq t}$ and $(Q_{s,t})_{s \leq t}$ . In fact, for any $s \geq 0$ and $x \in \mathbb{R}$ ,
where $C \in (0,+\infty)$ and $c_{\inf} \;:\!=\; \inf_{t \geq 0} \frac{1}{\gamma} \int_t^{t+\gamma}\lambda(u)du > 0$ . Taking $\theta \in (e^{-2 \gamma c_{\inf}},1)$ , there exists a compact set K such that, for any $s \geq 0$ ,
Moreover, for any $s \geq 0$ and $t \in [0, \gamma)$ , the function $P_{s,s+t}\psi/\psi$ is upper-bounded uniformly in s and t. It therefore remains to prove Assumption 1(i) for $(P_{s,t})_{s \leq t}$ , which is a consequence of the following lemma.
Lemma 1. For any $a,b_{-},b_{+} > 0$ , define the subset $\mathcal{C}(a,b_{-},b_{+}) \subset \mathcal{M}_1(\mathbb{R})$ as
Then, for any $a,b_{-},b_{+} > 0$ , there exist a probability measure $\nu$ and a constant $c > 0$ such that, for any $\mu \in \mathcal{C}(a,b_{-},b_{+})$ ,
The proof of this lemma is postponed until after the end of this proof.
Since $\lambda \sim_{t \to \infty} g$ and these two functions are bounded on $\mathbb{R}_+$ , Lebesgue’s dominated convergence theorem implies that, for all $s \leq t$ ,
In the same way, for all $s \leq t$ ,
Hence, for any $s \leq t$ ,
and
Using [Reference Devroye, Mehrabian and Reddad14, Theorem 1.3], for any $x \in \mathbb{R}$ ,
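The total-variation estimates between Gaussian laws used here can be illustrated numerically. The following is a minimal sketch (it approximates the total-variation distance between two one-dimensional Gaussians by quadrature; it is not the bound of [Reference Devroye, Mehrabian and Reddad14], and the function names are ours):

```python
import math

def normal_pdf(x, m, s):
    """Density of a Gaussian with mean m and standard deviation s."""
    return math.exp(-((x - m) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

def tv_gaussians(m1, s1, m2, s2, lo=-20.0, hi=20.0, n=100000):
    """Total-variation distance between N(m1, s1^2) and N(m2, s2^2),
    computed as (1/2) * integral of |p1 - p2| via the midpoint rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += abs(normal_pdf(x, m1, s1) - normal_pdf(x, m2, s2))
    return 0.5 * total * h
```

For equal variances the exact value is $\operatorname{erf}(|m_1 - m_2|/(2\sqrt{2}\sigma))$, which the quadrature reproduces to high accuracy.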
To deduce the convergence in $\psi$ -distance, we will draw inspiration from the proof of [Reference Hening and Nguyen19, Lemma 3.1]. Since the variances are uniformly bounded in k (for $s \leq t$ fixed), there exists $H > 0$ such that, for any $k \in \mathbb{N}$ and $s \leq t$ ,
Since $\lim_{|x| \to \infty} \frac{\psi(x)}{\psi^2(x)} = 0$ , for any $\epsilon > 0$ there exists $l_\epsilon > 0$ such that, for any function f such that $|f| \leq \psi$ and for any $|x| \geq l_\epsilon$ ,
Combining this with (34), and letting $K_\epsilon \;:\!=\; [\!-\!l_\epsilon, l_\epsilon]$ , we find that for any $k \in \mathbb{Z}_+$ , f such that $|f| \leq \psi$ , and $x \in \mathbb{R}$ ,
Then, for any $k \in \mathbb{Z}_+$ and f such that $|f| \leq \psi$ ,
Hence, (33) implies that, for k large enough, for any f bounded by $\psi$ ,
implying that
We now prove Lemma 1.
Proof of Lemma 1. Defining
we conclude easily that, for any $m \in [\!-\!a,a]$ and $\sigma \geq b_-$ , for any $x \in \mathbb{R}$ ,
Imposing moreover that $\sigma \leq b_+$ , one has
which concludes the proof.
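As a purely numerical companion to this subsection (a sketch under the assumption that the SDE has drift $-\lambda(t)X_t$, with a hypothetical rate $\lambda$ chosen for illustration), the process can be simulated with an Euler–Maruyama scheme:

```python
import math
import random

def simulate_ou(lam, x0=1.0, T=5.0, dt=1e-2, seed=0):
    """Euler-Maruyama scheme for dX_t = -lam(t) X_t dt + dW_t (assumed form)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    for _ in range(int(T / dt)):
        x += -lam(t) * x * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x

# Hypothetical asymptotically gamma-periodic rate (gamma = 1):
# lambda(t) = 2 + sin(2 pi t) + e^{-t}, so lambda(t) ~ g(t) = 2 + sin(2 pi t).
lam = lambda t: 2.0 + math.sin(2.0 * math.pi * t) + math.exp(-t)
```

Averaging $X_T^2$ over independent paths gives a quick check that the second moment stays bounded, as the Lyapunov estimate above predicts.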
5.2. Quasi-ergodic distribution for Brownian motion absorbed by an asymptotically periodic moving boundary
Let $(W_t)_{t \geq 0}$ be a one-dimensional Brownian motion, and let h be a $\mathcal{C}^1$ -function such that
We assume also that
Define
Since h is continuous, the hitting time $\tau_h$ is a stopping time with respect to the natural filtration of $(W_t)_{t \geq 0}$ . Moreover, since $\sup_{t \geq 0} h(t) < + \infty$ and $\inf_{t \geq 0} h(t) > 0$ ,
The main assumption on the function h is the existence of a $\gamma$ -periodic function g such that $h(t) \leq g(t)$ , for any $t \geq 0$ , and such that
Similarly to $\tau_h$ , define
Finally, let us assume that there exists $n_0 \in \mathbb{N}$ such that, for any $s \geq 0$ ,
In other words, for any time $s \geq 0$ , the infimum of the function h on the domain $[s, + \infty)$ is attained within $[s, s + n_0 \gamma]$ .
We first prove the following proposition.
Proposition 1. The Markov process $(W_{t})_{t \geq 0}$ , considered as absorbed by h or by g, satisfies Assumption (A′).
Proof. In what follows, we will prove Assumption (A′) with respect to the absorbing function h. The proof can easily be adapted for the function g.
-
Proof of (A′1). Define $\mathcal{T} \;:\!=\; \{s \geq 0 \;:\; h(s) = \inf_{t \geq s} h(t)\}$ . The condition (38) implies that this set contains infinitely many times.
In what follows, the following notation is needed: for any $z \in \mathbb{R}$ , define $\tau_z$ as
Also, note that, since the Brownian motion absorbed at $\{-1,1\}$ satisfies Assumption (A) of [Reference Champagnat and Villemonais5] at any time (see [Reference Champagnat and Villemonais7]), for a given $t_0 > 0$ there exist $c > 0$ and $\nu \in \mathcal{M}_1((\!-\!1,1))$ such that, for any $x \in (\!-\!1,1)$ ,
Moreover, following the proof of [Reference Champagnat and Villemonais7, Section 5.1], the probability measure $\nu$ can be expressed as
for some $0 < t_2 < \frac{t_0}{h_{\max}^2} \land t_0$ and $\epsilon \in (0,1)$ .
The following lemma is key to the next part of the argument.
Lemma 2. For all $z \in [h_{\min},h_{\max}]$ ,
where $t_0$ is as previously mentioned, $c > 0$ is the same constant as in (39), and
with $\nu \in \mathcal{M}_1((\!-\!1,1))$ defined in (40).
The proof of this lemma is postponed until after the current proof.
Let $s \in \mathcal{T}$ . Then, for all $x \in (\!-\!h(s),h(s))$ and $t \geq 0$ ,
By Lemma 2, for all $x \in (\!-\!h(s),h(s))$ and $t \geq t_0$ ,
which implies that, for any $t \in [t_0, t_0 + n_0 \gamma]$ ,
Let us introduce the process $X^h$ defined by, for all $t \geq 0$ ,
By Itô’s formula, for any $t \geq 0$ ,
Define
By the Dubins–Schwarz theorem, the process $M^h$ has the same law as
Then, defining
and, for any $s \leq t$ and for any trajectory w,
Girsanov’s theorem implies that, for all $x \in (\!-\!h(s),h(s))$ ,
On the event
and since h and h′ are bounded on $\mathbb{R}_+$ , the random variable ${\mathcal E}_{s,s+t_0}^h(W)$ is almost surely bounded by a constant $C > 0$ , uniformly in s, so that for all $x \in (\!-\!h(s),h(s))$ ,
Since $h(t) \geq h(s)$ for all $t \geq s$ (since $s \in \mathcal{T}$ ),
By the scaling property of the Brownian motion and by the Markov property, one has for all $x \in (\!-\!h(s),h(s))$
where, for any initial distribution $\mu$ and any $t \geq 0$ ,
The family $(\phi_t)_{t \geq 0}$ satisfies the equality $\phi_t \circ \phi_s = \phi_{t+s}$ for all $s,t \geq 0$ . By this property, and using that
for any $s \geq 0$ , the minorization (39) implies that, for all $s \geq 0$ and $x \in (\!-\!1,1)$ ,
Hence, by this minorization, and using that h is upper-bounded and lower-bounded positively on $\mathbb{R}_+$ , one has for all $x \in (\!-\!1,1)$
that is to say,
In other words, we have just shown that, for all $x \in (\!-\!h(s),h(s))$ ,
Moreover, by Lemma 2 and the scaling property of the Brownian motion, for all $x \in (\!-\!h(s),h(s))$ ,
Thus, combining (41), (46), and (47), for any $x \in (\!-\!h(s),h(s))$ and any $t \in [t_0, t_0 + n_0 \gamma]$ ,
where
We recall that the Doeblin condition (48) has, for now, been obtained only for $s \in \mathcal{T}$ . Consider now $s \not \in \mathcal{T}$ . Then, by the condition (38), there exists $s_1 \in \mathcal{T}$ such that $s < s_1 \leq s + n_0 \gamma$ . The Markov property and (48) therefore imply that, for any $x \in (\!-\!h(s),h(s))$ ,
where, for all $s \leq t$ and $\mu \in \mathcal{M}_1((\!-\!h(s),h(s)))$ ,
This concludes the proof of (A′1).
-
Proof of (A′2). Since $(W_t)_{t \geq 0}$ is a Brownian motion, note that for any $s \leq t$ ,
\begin{equation*}\sup_{x \in (\!-\!h(s),h(s))} \mathbb{P}_{s,x}[\tau_h > t] = \mathbb{P}_{s,0}[\tau_h > t].\end{equation*}Also, for any $a \in (0,h(s))$ ,\begin{equation*}\inf_{x \in [\!-\!a,a]}\mathbb{P}_{s,x}[\tau_h > t] = \mathbb{P}_{s,a}[\tau_h > t].\end{equation*}Thus, by the Markov property, and using that the function $s \mapsto \mathbb{P}_{s,0}[\tau_h > t]$ is non-decreasing on [0, t] (for all $t \geq 0$ ), one has, for any $s \leq t$ ,(49) \begin{align} \mathbb{P}_{s,a}[\tau_h > t] \geq \mathbb{E}_{s,a}[\mathbb{1}_{\tau_0 < s+\gamma < \tau_h} \mathbb{P}_{\tau_0,0}[\tau_h > t]] \geq \mathbb{P}_{s,a}[\tau_0 < s+\gamma < \tau_h] \mathbb{P}_{s,0}[\tau_h > t].\end{align}Defining $a \;:\!=\; \frac{h_{\min}}{h_{\max}}$ , by Lemma 2 and taking $s_1 \;:\!=\; \inf\{u \geq s \;:\; u \in \mathcal{T}\}$ , one obtains that, for all $s \leq t$ ,\begin{align*}\mathbb{P}_{s,\nu_{h(s_1)}}[\tau_h > t] &= \int_{(\!-\!1,1)} \nu(dx) \mathbb{P}_{s,h(s_1)x}[\tau_h > t] \\[5pt] &\geq \nu([\!-\!a,a]) \mathbb{P}_{s,h(s_1)a}[\tau_h > t] \\[5pt] & \geq \nu([\!-\!a,a]) \mathbb{P}_{0,h_{\min}}[\tau_0 < \gamma < \tau_h] \sup_{x \in (\!-\!h(s),h(s))} \mathbb{P}_{s,x}[\tau_h > t].\end{align*}This concludes the proof, since, using (40), one has $\nu([\!-\!a,a]) > 0$ .
We now prove Lemma 2.
Proof of Lemma 2. This result comes from the scaling property of a Brownian motion. In fact, for any $z \in [h_{\min},h_{\max}]$ , $x \in (\!-\!z,z)$ , and $t \geq 0$ , and for any measurable bounded function f,
Then the minorization (39) implies that for any $x \in (\!-\!1,1)$ ,
This inequality holds for any time greater than $\frac{t_0}{h_{\max}^2}$ . In particular, for any $z \in [h_{\min},h_{\max}]$ and $x \in (\!-\!1,1)$ ,
Then, for any $z \in [h_{\min},h_{\max}]$ , f positive and measurable, and $x \in (\!-\!z,z)$ ,
where $\nu_z(f) \;:\!=\; \int_{(\!-\!1,1)} f(zx)\nu(dx)$ . This completes the proof of Lemma 2.
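For the reader's convenience, the scaling identity underlying this proof (standard for Brownian motion) reads:

```latex
% Scaling: if (W_t)_{t \geq 0} is a Brownian motion, so is (z^{-1} W_{z^2 t})_{t \geq 0}
% for any z > 0. Consequently, for the hitting times \tau_z of \{-z, z\}:
\mathbb{P}_x[\tau_z > t] \;=\; \mathbb{P}_{x/z}\!\big[\tau_1 > t/z^2\big],
\qquad x \in (\!-z, z),\ t \geq 0.
```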
We now conclude the section by stating and proving the following result.
Theorem 4. For any $s \leq t$ , $n \in \mathbb{N}$ , and any $x \in \mathbb{R}$ ,
In particular, Corollary 2 holds for $(W_t)_{t \geq 0}$ absorbed by h.
Proof. Recalling (43), by the Markov property for the Brownian motion, one has, for any $k,n \in \mathbb{N}$ and any $x \in \mathbb{R}$ ,
where, for any trajectory $w = (w_u)_{u \geq 0}$ ,
Since $h \sim_{t \to \infty} g$ , one has for any $s,t \in [0, \gamma]$
For the same reasons, and using that the function h is bounded on $[s+k \gamma, t+k\gamma]$ for all $s \leq t$ , Lebesgue’s dominated convergence theorem implies that
for all $s \leq t \in [0,\gamma]$ . Moreover, since $h \sim_{t \to \infty} g$ and $h' \sim_{t \to \infty} g'$ , one has for all trajectories $w = (w_u)_{u \geq 0}$ and $s \leq t \in [0,\gamma]$
Since the random variable
is bounded almost surely, Lebesgue’s dominated convergence theorem implies that
which concludes the proof.
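To close the section with a numerical illustration (using a hypothetical boundary chosen for the sketch, not the h of the theorem), the survival probability $\mathbb{P}_{0,x}[\tau_h > t]$ can be estimated by Monte Carlo:

```python
import math
import random

def survival_prob(h, x0=0.0, t_max=1.0, dt=2e-3, n_paths=500, seed=0):
    """Monte Carlo estimate of P_{0,x0}[tau_h > t_max] for a Brownian motion
    absorbed when |W_t| reaches h(t)."""
    rng = random.Random(seed)
    steps = int(t_max / dt)
    alive = 0
    for _ in range(n_paths):
        x, t, survived = x0, 0.0, True
        for _ in range(steps):
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if abs(x) >= h(t):
                survived = False
                break
        alive += survived
    return alive / n_paths

# Hypothetical 1-periodic boundary, bounded away from 0 and infinity:
h = lambda t: 2.0 + 0.5 * math.sin(2.0 * math.pi * t)
```

Since this boundary stays above 1.5, most Brownian paths survive up to time 1, and the estimate is correspondingly close to 1.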
Acknowledgements
I would like to thank the anonymous reviewers for their valuable and relevant comments and suggestions, as well as Oliver Kelsey Tough for reviewing a part of this paper.
Funding information
A part of this research was supported by the Swiss National Foundation grant 200020 196999.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.