1. Introduction
In the classical framework, a random walk on a group $\mathbf{G}$ is a discrete-time stochastic process $(Z_n)_{n \geq 0}$ defined as the product of independent and identically distributed (i.i.d.) random variables $(\xi_n)_{n \geq 1}$. Random walks on groups are Markov chains that are adapted to the group structure in the sense that the underlying Markov operator is invariant under the group action of $\mathbf{G}$ on itself. This homogeneity naturally gives rise to deep connections between stochastic properties of the random walk and algebraic properties of the group. Starting with the seminal paper of Pólya [Reference Pólya57], many of these connections have been studied (see [Reference Woess65] and references therein), and this research area has remained in constant progress over the last two decades (without claiming to be exhaustive, see [Reference Bartholdi and Erschler3, Reference Bartholdi and Virág4, Reference Blachère, Hassinsky and Mathieu6, Reference Brofferio and Woess11, Reference Gouëzel, Mathéus and Maucourant30, Reference Guivarc’h and Raja36, Reference Kaimanovich and Woess42, Reference Karlsson and Woess43, Reference Mathieu50, Reference Pittet and Saloff-Coste56, Reference Salaün60]).
In this paper, we aim to investigate inhomogeneous random walks. It turns out that there are at least two ways to introduce inhomogeneity. First, we can consider spatial inhomogeneity by weakening the group structure, replacing it, for instance, by a directed graph as in [Reference Bosi, Hu and Peres7, Reference Brémont9, Reference Brémont10, Reference Campanino and Petritis12, Reference de Loynes24, Reference Guillotin-Plantard and Le Ny33, Reference Guillotin-Plantard and Le Ny34, Reference Pène54]. Second, we can study temporally inhomogeneous random walks by introducing a notion of memory as in the models of reinforced [Reference Pemantle53, Reference Volkov64], excited [Reference Benjamini, Kozma and Schapira5, Reference Raimond and Schapira58], self-interacting [Reference Comets, Menshikov, Volkov and Wade23, Reference Peres, Popov and Sousi55], or persistent random walks [Reference Cénac, Chauvin, Herrmann and Vallois17, Reference Cénac, Chauvin, Paccaut, Pouyanne, Donati-Martin, Lejay and Rouault18, Reference Cénac, de Loynes, Offret and Rousselle19, Reference Cénac, Le Ny, de Loynes and Offret20, Reference Cénac, Le Ny, de Loynes and Offret21], or as in the Markov additive processes that are at the core of this paper. All these models belong to the larger class of stochastic processes with long-range dependence.
Markov additive processes are also known as random walks with internal degrees of freedom [Reference Krámli and Szász47], semi-Markov processes, or Markov random walks—see, for instance, [Reference Babillot2, Reference Çinlar15, Reference Çinlar16, Reference Guivarc’h, Mokobodzki and Pinchon35, Reference Kazami and Uchiyama45, Reference Kesten46, Reference Ney and Nummelin51, Reference Ney and Nummelin52, Reference Uchiyama62]. Roughly speaking, Markov additive processes are discrete-time $\mathbb{Z}^d$-valued (or $\mathbb{R}^d$-valued) processes whose increments are still independent but no longer stationary. The distribution of an increment is then driven by a Markov chain termed the internal Markov chain. Most results available in the context of standard random walks have been generalized to Markov additive processes when the Markov operator of the internal chain is assumed to be quasi-compact on a suitable Banach space: among them, a renewal theorem [Reference Babillot2, Reference Guibourg and Hervé31, Reference Guibourg and Hervé32, Reference Kesten46], local limit theorems [Reference Ferré, Hervé and Ledoux27, Reference Guivarc’h, Mokobodzki and Pinchon35, Reference Hervé37, Reference Hervé and Ledoux38, Reference Hervé and Pène40, Reference Krámli and Szász47], central limit theorems [Reference Ferré, Hervé and Ledoux27, Reference Hervé and Pène40], results on the recurrence set [Reference Alsmeyer1, Reference Hervé and Pène41, Reference Rogers59], large deviations [Reference Ney and Nummelin51, Reference Ney and Nummelin52], asymptotic expansions of the Green function [Reference Kazami and Uchiyama45, Reference Uchiyama62], a one-dimensional Berry–Esseen theorem [Reference Ferré, Hervé and Ledoux27, Reference Hervé and Pène40, Reference Hervé, Ledoux and Patilea39] with applications to M-estimation, and first passage times [Reference Fuh and Lai28].
However, assuming the internal operator to be quasi-compact is a rather strong restriction (see [Reference Lin48]). Indeed, beyond the technical difficulties inherent to the infinite-dimensional setting, under this assumption there is no real difference in nature from the finite-dimensional case. On the other hand, relaxing this assumption makes the study of Markov additive processes very challenging. It is also worth noting that many interesting Markov additive processes do not admit a quasi-compact internal operator, as illustrated by the examples considered in this paper.
In the context of Markov additive processes, classical Fourier analysis can be extended by introducing the Fourier transform operator, which is a perturbation of the internal Markov operator in an appropriate Banach space. As in the classical context, the Fourier transform operator characterizes the distribution of the additive part of the Markov additive process. By a continuous perturbation argument (see, for instance, [Reference Kato44]), when the internal Markov operator is quasi-compact, the Fourier transform operator remains quasi-compact for all sufficiently small perturbations. Under suitable moment conditions on the distribution of the increments, this allows us to derive a second-order Taylor expansion of the perturbed dominating eigenvalue, say $\lambda(t)$, $t \in [{-}\pi,\pi)^d$, whose coefficients are given, roughly speaking, by the mean and variance operators. Finally, under an assumption on the spectrum of the Fourier transform operator for large perturbations, it can be concluded that all the required stochastic information is actually contained in the nature of the singularity at zero of $(1-\lambda(t))^{-1}$ (note that $\lambda(0)=1$). For instance, an integral test criterion involving a singularity of this kind, similar to the Chung–Fuchs criterion [Reference Chung and Fuchs22, Reference Spitzer61], is given in [Reference Cénac, de Loynes, Offret and Rousselle19].
In this paper, the quasi-compactness condition is dropped and the internal Markov chain is only assumed to be irreducible and recurrent. The condition on the spectrum of the Fourier transform operator for large perturbations remains similar, but the nature of the singularity at the origin is analyzed by considering the Taylor expansion of the quenched characteristic exponent (in a wide sense) defined in Section 2.2. The terms of order one and two are termed the quenched drift and the quenched dispersion, respectively. These quantities are characteristics of the increments of the process and naturally appear in the local limit theorem (Theorem 3.1) and the transience criterion (Corollary 3.1).
The paper is organized as follows. Section 2 gathers the main notions involved in the statement of Theorem 3.1 and Corollary 3.1. Section 3 is devoted to the statement of these two results, while Section 4 is dedicated to the proof of Theorem 3.1. In Section 5, Theorem 3.1 and Corollary 3.1 are applied to various families of Markov additive processes, extending the models considered in [Reference Brémont8, Reference Campanino and Petritis12, Reference Matheron and De Marsily49]. Those models are simple random walks on directed graphs built upon $\mathbb{Z}^2$ . Various phenomena are observed, whether the directions are fixed periodically or not. As is clear for the periodically directed model, it is possible to factorize a Markov additive process without changing the distribution of the additive part (see Section 5). It is concluded that a simple random walk on a periodically directed graph is a Markov additive process with a finite internal Markov chain. As such, the internal Markov operator is a matrix and is quasi-compact. For more general directions (random directions, for instance), such a reduction is no longer possible.
2. Markov additive processes
2.1. Definitions
Let $(\Omega,\mathcal F,\mathbb{P})$ be a probability space and $\mathbb{X}$ a countable set. The set $\mathbb{X}$ is naturally endowed with the $\sigma$ -algebra consisting of all subsets of $\mathbb{X}$ .
Definition 2.1 (Markov additive process). Let $d \geq 1$ be an integer. A Markov additive process (MAP) is a Markov chain $((X_n,Z_n))_{n \geq 0}$ taking values in $\mathbb{X} \times \mathbb{Z}^d$ defined on $(\Omega,\mathcal F, \mathbb{P})$ satisfying, for all $n \geq 0$ and all bounded functions $f \,:\, \mathbb{X} \times \mathbb{Z}^d \rightarrow \mathbb{R}$ ,
From this equality, it follows immediately that $(X_n)_{n \geq 0}$ is a Markov chain on $\mathbb{X}$ , termed the internal Markov chain. The corresponding Markov kernel is denoted by P, namely $Pf(x)\,:\!=\,\mathbb{E}[f(X_1) \mid X_0=x]$ for any bounded function $f\,:\, \mathbb{X} \rightarrow \mathbb{R}$ (in symbols, $f \in \ell^\infty(\mathbb{X})$ ). Generally speaking, there exists a $\sigma$ -finite measure m dominating the family of probabilities $(P(x,\cdot))_{x \in \mathbb{X}}$ , i.e. $m(y)=0$ implies $P(x,y)=0$ for all $x \in \mathbb{X}$ . If the internal Markov chain is irreducible and recurrent, the invariant measure (unique up to a positive constant) is a natural choice for m.
The conditional distribution of $Z_{n+1}-Z_n$ given $\big(X_n,X_{n+1}\big)=(x,y)$ will be denoted by $\mu^{x,y}$ , and the Fourier transform of $\mu^{x,y}$ by $\widehat{\mu^{x,y}}$ . Then, for any $t \in \mathbb{R}^d$ , the Fourier transform operator $P_t$ acting on the bounded function $f\,:\, \mathbb{X} \rightarrow \mathbb{C}$ is defined as
From the Markov property, it follows that, for all $n \geq 1$ ,
Moreover, $P^n_t \mathbf{1}(x) = \sum_{z \in \mathbb{Z}^d} {{e}}^{{{i}} \langle t,z \rangle } \mathbb{P}(Z_n-Z_0=z \mid X_0=x)$ . Consequently, the function $\mathbb{R}^d \ni t \rightarrow P_t^n\mathbf{1}(x) \in \mathbb{C}$ is the Fourier transform of the conditional distribution of $Z_n-Z_0$ given $X_0=x$ .
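When the internal state space is finite, $P_t$ is just a matrix and this characterization can be checked numerically. The following minimal sketch uses an arbitrary two-state toy MAP (the kernel `P` and the increment laws `mu` are illustrative choices, not taken from the paper): it verifies that $P_t^n\mathbf{1}(x)$ coincides with the conditional characteristic function of $Z_n-Z_0$ given $X_0=x$.

```python
import numpy as np

# Toy MAP: internal chain on {0, 1}, additive part on Z (d = 1).
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
# mu[(x, y)] : dict z -> probability of the increment z given (X_k, X_{k+1}) = (x, y)
mu = {(0, 0): {1: 1.0}, (0, 1): {-1: 0.5, 0: 0.5},
      (1, 0): {0: 1.0}, (1, 1): {1: 0.5, -1: 0.5}}

def P_t(t):
    """Fourier transform operator at frequency t, as a 2x2 complex matrix."""
    M = np.zeros((2, 2), dtype=complex)
    for (x, y), law in mu.items():
        M[x, y] = P[x, y] * sum(p * np.exp(1j * t * z) for z, p in law.items())
    return M

def law_of_Zn(x0, n):
    """Exact conditional law of Z_n - Z_0 given X_0 = x0, by dynamic programming."""
    cur = {(x0, 0): 1.0}
    for _ in range(n):
        nxt = {}
        for (x, z), p in cur.items():
            for y in (0, 1):
                for dz, q in mu[(x, y)].items():
                    nxt[(y, z + dz)] = nxt.get((y, z + dz), 0.0) + p * P[x, y] * q
        cur = nxt
    out = {}
    for (x, z), p in cur.items():
        out[z] = out.get(z, 0.0) + p
    return out

n, t, x0 = 5, 0.7, 0
lhs = np.linalg.matrix_power(P_t(t), n) @ np.ones(2)   # P_t^n 1, evaluated at x0
rhs = sum(p * np.exp(1j * t * z) for z, p in law_of_Zn(x0, n).items())
assert abs(lhs[x0] - rhs) < 1e-12
```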
By periodicity, it is sufficient to consider the operator $P_t$ for $t \in \mathbb{T}^d=[{-}\pi,\pi)^d$ . In addition, $\mathbb{T}^d_\ast$ will stand for $\mathbb{T}^d \setminus \{ 0 \}$ .
2.2. Quenched characteristic exponent, quenched drift, and quenched dispersion
Proposition 2.1. For any MAP, we have the identity, for all $n \geq 0$ ,
and $P^n_t \mathbf{1}(x) = \mathbb{E} [\Pi_n(t) \mid X_0=x ]$ , where $\Pi_n(t) \,:\!=\, \prod_{k=0}^{n-1} \mathbb{E}\big[{{e}}^{{{i}} \langle t,Z_{k+1}-Z_k \rangle} \mid X_k, X_{k+1}\big]$ .
The proof of Proposition 2.1 relies on a result from [Reference Ēžov and Skorohod26, (8)], which is restated here.
Lemma 2.1. For all $m \geq n \geq p \geq 0$ ,
Proof of Proposition 2.1. By inverse Fourier transform,
Then, by Lemma 2.1, setting $\mathcal G_n = \sigma(X_\ell, 0 \leq \ell \leq n)$ , $n \geq 1$ ,
Proposition 2.1 follows immediately.
As a matter of fact, $\Pi_n$ is an almost surely continuous $\mathbb{C}$-valued function on $\mathbb{T}^d$ satisfying $\Pi_n(0)=1$. Hence, in a neighborhood of the origin, the logarithm of $\Pi_n$ is well defined. We may call $\log \Pi_n$ the quenched characteristic exponent (with a slight abuse of terminology, since $\log \Pi_n$ is not defined on all of $\mathbb{R}^d$ in general). Its second-order Taylor expansion at $t=0$ then exhibits two important quantities:
The latter are respectively called the quenched drift and quenched dispersion of the additive component.
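To fix ideas, these quantities can be computed along a realized trajectory of the internal chain. The sketch below is a one-dimensional illustration assuming the conventions suggested by Proposition 4.1 and Theorem 3.1, namely $U_n=\sum_{\ell<n}\mathbb{E}[Z_{\ell+1}-Z_\ell \mid X_\ell,X_{\ell+1}]$ and $V_n=n^{-1}\sum_{\ell<n}\mathbb{V}[Z_{\ell+1}-Z_\ell \mid X_\ell,X_{\ell+1}]$; the conditional means and variances below are arbitrary toy values.

```python
# d = 1 sketch: toy conditional means and variances of the increment law
# mu^{x,y} for each internal transition (x, y) of a two-state chain.
mean = {(0, 0): 1.0, (0, 1): -0.5, (1, 0): 0.0, (1, 1): 0.0}
var = {(0, 0): 0.0, (0, 1): 0.25, (1, 0): 0.0, (1, 1): 1.0}

def quenched_characteristics(path):
    """Quenched drift U_n and normalized quenched dispersion V_n along a
    realized internal trajectory path = (X_0, ..., X_n)."""
    steps = list(zip(path[:-1], path[1:]))
    U = sum(mean[s] for s in steps)
    V = sum(var[s] for s in steps) / len(steps)
    return U, V

U, V = quenched_characteristics([0, 0, 1, 1, 0])
```

On this trajectory the four transitions are $(0,0)$, $(0,1)$, $(1,1)$, $(1,0)$, giving $U_4 = 0.5$ and $V_4 = 0.3125$.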
Remark 2.1. The normalization factor for the quenched dispersion is chosen as the approximate growth rate of the sum. In particular, its eigenvalues remain bounded and away from zero almost surely (a.s.), as illustrated in (5.3).
2.3. Spectral condition
Definition 2.2 (Spectral condition). A MAP is said to satisfy the spectral condition if, for any compact $K \subset \mathbb{T}^d$ with $0 \notin K$, there exist constants $C > 0$ and $\gamma \in (0,1)$ such that, for all $t \in K$ and all $n \geq 1$, $\|P_t^n \mathbf{1} \|_{\ell^\infty(\mathbb{X})} \leq C \gamma^n$.
Proposition 2.2. Assume that the family $(\mu^{x,y})_{x,y \in \mathbb{X}}$ of probability measures is uniformly aperiodic, i.e., for all $t \in \mathbb{T}^d_\ast$, $\sup_{x,y \in \mathbb{X}} |\widehat{\mu^{x,y}}(t)| < 1$. Then the MAP fulfills the spectral condition.
Proof. For all $x \in \mathbb{X}$ and all $f \in \ell^{\infty}(\mathbb{X})$ ,
Therefore, $\|P_t^n \mathbf{1} \|_{\ell^\infty(\mathbb{X})} \leq \big[ \sup_{x,y \in \mathbb{X}} |\widehat{\mu^{x,y}}(t)| \big]^n$ .
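This bound is easy to observe numerically on a toy example with a two-state internal chain and uniformly aperiodic increment laws (all the data below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Toy example: two internal states, uniformly aperiodic increment laws.
P = np.array([[0.3, 0.7], [0.6, 0.4]])
mu = {(0, 0): {0: 0.5, 1: 0.5}, (0, 1): {-1: 0.5, 0: 0.5},
      (1, 0): {0: 0.5, -1: 0.5}, (1, 1): {1: 0.5, -1: 0.5}}

def char(x, y, t):
    """Fourier transform of mu^{x,y} at frequency t."""
    return sum(p * np.exp(1j * t * z) for z, p in mu[(x, y)].items())

def P_t(t):
    return np.array([[P[x, y] * char(x, y, t) for y in (0, 1)] for x in (0, 1)])

t, n = 1.3, 8
bound = max(abs(char(x, y, t)) for (x, y) in mu) ** n   # [sup_{x,y} |mu^{x,y}^(t)|]^n
lhs = np.abs(np.linalg.matrix_power(P_t(t), n) @ np.ones(2)).max()
assert bound < 1 and lhs <= bound + 1e-12
```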
In the literature, a stronger condition than the spectral condition of Definition 2.2 is usually assumed. It involves the spectral radius of each $P_t$ acting on some Banach subspace of $\ell^\infty(\mathbb{X})$ . Such assumptions imply the spectral condition of Definition 2.2, as shown in Proposition 2.3.
Proposition 2.3. Let $(\mathcal B, \| \cdot \|_{\mathcal B})$ be a Banach subspace of $\ell^\infty(\mathbb{X})$ such that
-
(i) $\mathbf{1} \in \mathcal B$ , and the canonical injection $\mathcal B \hookrightarrow \ell^\infty(\mathbb{X})$ is continuous;
-
(ii) for each $t \in \mathbb{T}^d$ , the operator $P_t$ acts continuously on $\mathcal B$ ;
-
(iii) the map $\mathbb{T}^d \ni t \rightarrow P_t \in \mathcal L(\mathcal B)$ is continuous for the subordinated operator norm induced by $\| \cdot \|_{\mathcal B}$ .
Suppose that the spectral radius of $P_t$ , defined, for all $t \in \mathbb{T}^d$ , by
satisfies $r_{\mathcal B}(t)<1$ as soon as $t \in \mathbb{T}^d \setminus \{ 0 \}$ . Then, the resulting MAP satisfies the spectral condition of Definition 2.2.
Proof. Since $\mathbb{T}^d \ni t \rightarrow P_t \in \mathcal L(\mathcal B)$ is continuous, the spectral radius $t \rightarrow r_{\mathcal B}(t)$ is upper semi-continuous as the infimum of continuous functions. Consequently, if $K \subset \mathbb{T}^d$ is compact with $0 \notin K$, then there exists $t_K \in K$ such that $r_{\mathcal B}(t_K) = \sup_{t \in K} r_{\mathcal B}(t) < 1$.
Choose $\gamma \in (r_{\mathcal B}(t_K),1)$ and denote by $\Gamma \subset \mathbb{C}$ the circle of the complex plane centered at 0 of radius $\gamma$ . As a matter of fact, for any $(\lambda,t) \in \Gamma \times K$ , the operator $\lambda-P_t$ is invertible. By [Reference Dunford and Schwartz25, Theorem 10, p. 560], it follows that
Now, the map $\Gamma \times K \ni (\lambda,t) \rightarrow \lambda-P_t \in \mathcal L(\mathcal B)$ is continuous. Also, by [Reference Dunford and Schwartz25, Lemma 1, p. 584], the map $A \rightarrow A^{-1}$ is a homeomorphism of the open subset of invertible operators in $\mathcal L(\mathcal B)$. Consequently, the map $\Gamma \times K \ni (\lambda,t) \rightarrow (\lambda-P_t)^{-1} \in \mathcal L(\mathcal B)$ is continuous on the compact $\Gamma \times K$, and hence $\sup_{(\lambda,t) \in \Gamma \times K} \| (\lambda-P_t)^{-1} \|_{\mathcal B} < \infty$. The spectral condition of Definition 2.2 follows immediately from the continuity of the canonical injection of $\mathcal B$ in $\ell^\infty(\mathbb{X})$.
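On a finite state space the mechanism of Proposition 2.3 can be observed directly: the spectral radius of the matrix $P_t$ is below one for $t \neq 0$, and the powers $P_t^n\mathbf{1}$ decay geometrically. A sketch with arbitrary toy data:

```python
import numpy as np

# Toy data (not from the paper): internal kernel P and increment laws mu.
P = np.array([[0.3, 0.7], [0.6, 0.4]])
mu = {(0, 0): {0: 0.5, 1: 0.5}, (0, 1): {-1: 0.5, 0: 0.5},
      (1, 0): {0: 0.5, -1: 0.5}, (1, 1): {1: 0.5, -1: 0.5}}

def P_t(t):
    """Fourier transform operator at frequency t, as a 2x2 complex matrix."""
    return np.array([[P[x, y] * sum(p * np.exp(1j * t * z)
                                    for z, p in mu[(x, y)].items())
                      for y in (0, 1)] for x in (0, 1)])

t = 2.0
M = P_t(t)
r = max(abs(np.linalg.eigvals(M)))     # spectral radius r_B(t) at t != 0
norms = [np.abs(np.linalg.matrix_power(M, n) @ np.ones(2)).max()
         for n in (1, 5, 20)]           # ||P_t^n 1||_infty shrinks geometrically
assert r < 1 and norms[1] < norms[0] and norms[2] < 1e-4
```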
The uniform aperiodicity condition introduced in Proposition 2.2 is far from being necessary. We now introduce the usual notion of an aperiodic MAP (we refer to [Reference Uchiyama62], for instance).
Definition 2.3 (Aperiodic Markov additive process). A MAP is said to be aperiodic if there exists no proper subgroup H of the additive group $\mathbb{Z}^d$ such that, for every positive integer n and every $x,y \in \mathbb{X}$ with $m(x)P^n(x,y)>0$ , there exists $a=a_n(x,y) \in \mathbb{Z}^d$ satisfying $\mathbb{P}[Z_n-Z_0 \in a+H \mid X_0=x,X_n=y]=1$ .
Proposition 2.4. Let (X, Z) be an aperiodic MAP for which the internal Markov chain X is irreducible and recurrent. Let $(\mathcal B, \| \cdot \|_{\mathcal B})$ be a Banach subspace of $\ell^\infty(\mathbb{X})$ satisfying the assumptions in Proposition 2.3(i), (ii), and (iii). Then, for any $t \in \mathbb{T}^d \setminus \{ 0 \}$ , the operator $P_t \in \mathcal L(\mathcal B)$ has no eigenvalues of modulus one.
Proof. Suppose, on the contrary, that there exist $t_0 \in \mathbb{T}^d \setminus \{ 0 \}$ , $f \in \mathcal B \setminus \{ 0 \}$ , and $\theta \in \mathbb{R}$ such that
By Jensen’s inequality and the fact that $|\widehat{\mu^{x,y}}(t_0)| \leq 1$ , it follows that $|f|\leq P|f|$ . Consequently, $\|f\|_{\ell^\infty(\mathbb{X})}-|f|$ is a non-negative superharmonic function. Since P is irreducible and recurrent, the function $|f|$ is constant (see [Reference Woess65, Theorem 1.16, p. 5], for instance). Hence, for all $x \in \mathbb{X}$ such that $m(x)>0$ ,
This means that, for all $x,y \in \mathbb{X}$ with $m(x)P(x,y)>0$ , $|\widehat{\mu^{x,y}}(t_0)|=1$ . Since $|f|$ is constant and $f \neq 0$ , (2.1) can be rewritten as
This convex combination is again extremal so that, for all $x,y \in \mathbb{X}$ ,
More generally, for all $n \geq 1$ and for all $(x_0, \ldots, x_n)$ ,
Now, fix $n \geq 1$ and $x,y \in \mathbb{X}$, and choose, once and for all, $a_n(x,y) \in \mathbb{R}^d$ such that ${{e}}^{{{i}} n\theta} {f(x)}/{f(y)} = {{e}}^{{{i}} \langle t_0,a_n(x,y) \rangle}$. In fact, $a_n(x,y) \in \mathbb{Z}^d$ since the left-hand side of (2.2) is nothing but the Fourier transform of a measure supported by $\mathbb{Z}^d$. We denote by $S_n(x,y)$ the support of the distribution of $Z_n-Z_0-a_n(x,y)$ conditionally on the event $\{ X_0=x \} \cap \{ X_n=y \}$, and set H as the group generated by the family of sets $S_n(x,y)$, $n \geq 1$ and $x,y \in \mathbb{X}$. By the construction of H, we have that, for all $n \geq 1$ and for all $x,y \in \mathbb{X}$, there exists $a_n(x,y) \in \mathbb{Z}^d$ such that $\mathbb{P}[Z_n-Z_0 \in a_n(x,y) + H \mid X_0=x,X_n=y]=1$. By aperiodicity, $H=\mathbb{Z}^d$.
Finally, take the expectation on both sides of (2.2), so that
Then, the extremality in the convex combination yields that, for all $w \in S_n(x,y)$ , $\langle t_0,w\rangle=0$ modulo $2\pi$ . By linearity, this identity extends to the whole group $H=\mathbb{Z}^d$ . Taking $w=e_i$ , $i=1, \ldots, d$ , the vectors of the standard basis of $\mathbb{Z}^d$ , it follows that $t_0=0$ , leading to a contradiction.
Let us point out that an arithmetic condition such as aperiodicity fails to give information on spectral values that are not eigenvalues. Nonetheless, if, in addition, it is known that the peripheral spectrum consists of eigenvalues, then the spectral condition of Definition 2.2 is clearly fulfilled. Such an assumption on the peripheral spectrum, which is rather technical, can be difficult to verify in practice. That is why it is often preferable to check the spectral condition of Definition 2.2 directly.
3. Main results
Let us first summarize the assumptions involved in the statement of the local limit theorem and the sufficiency criterion for transience.
Assumption 3.1. The internal Markov chain P is irreducible and recurrent.
Assumption 3.2. The family of probability measures $(\mu^{x,y})_{x,y \in \mathbb{X}}$ admits a uniform third-order moment: $\sup_{x,y \in \mathbb{X}} \sum_{z \in \mathbb{Z}^d} \|z\|^3 \mu^{x,y}(z) < \infty$ .
Assumption 3.3. The quenched dispersion is uniformly elliptic: there exists a (deterministic) constant $\alpha > 0$ such that, for all $t \in \mathbb{R}^d$ and all $n \geq 1$ , $\langle t,V_n t \rangle \geq \alpha \|t\|^2$ almost surely.
Assumption 3.4. The MAP satisfies the spectral condition.
3.1. A local limit theorem
Theorem 3.1 (Local limit theorem). Under Assumptions 3.1–3.4, for all $x \in \mathbb{X}$ , for all $\varepsilon \in \big(0, \frac12\big)$ ,
3.2. A sufficient criterion for transience: Some potential theory
This section is devoted to a sufficiency criterion for the transience of the additive part of Markov additive processes. The recurrence or transience in this context is defined in [Reference Cénac, de Loynes, Offret and Rousselle19] as follows.
Definition 3.1 (Recurrence versus transience). A MAP $((X_n,Z_n))_{n \geq 0}$ is said to be recurrent if, for any $(x,z) \in \mathbb{X} \times \mathbb{Z}^d$, there exists $r>0$ such that
It is said to be transient if, for any $(x,z) \in \mathbb{X} \times \mathbb{Z}^d$ ,
In [Reference Cénac, de Loynes, Offret and Rousselle19], it is proved that a MAP is either recurrent or transient under Assumption 3.1. In particular, the recurrence or transience of such a MAP does not depend on the initial state $(x,z) \in \mathbb{X} \times \mathbb{Z}^d$ .
For any positive function f on $\mathbb{X} \times \mathbb{Z}^d$ , recall that the potential of the charge f is given by $Gf(x,z)\,:\!=\,\mathbb{E} \big[ \sum_{n \geq 0} f(X_n,Z_n) \mid X_0=x,Z_0=z \big]$ . By analogy with the classical context of random walks, it is natural to look for a criterion for the recurrence or transience of a MAP that involves the mean sojourn time of the set $\mathbb{X} \times \{ z \}$ :
Naturally, if the quantity in (3.1) is finite then, by the Markov inequality, the additive component of the MAP hits $z \in \mathbb{Z}^d$ only finitely many times almost surely: the number of visits has finite expectation and is therefore almost surely finite.
Definition 3.2 (Irreducibility). A MAP is said to be irreducible if, for any $x \in \mathbb{X}$ and any $z,z^\prime \in \mathbb{Z}^d$ , there exists $n \geq 0$ such that $\mathbb{P}[Z_n=z^\prime \mid X_0=x,Z_0=z] > 0$ .
Therefore, if a MAP is irreducible and if the quantity in (3.1) is finite for all $x \in \mathbb{X}$ and for some (equivalently any) $z \in \mathbb{Z}^d$ , then the MAP is transient by the Markov property. Consequently, we obtain the following criterion as a corollary of Theorem 3.1.
Corollary 3.1 (Sufficient criterion for transience). Let $d \geq 2$ . Suppose that the MAP is irreducible. Then, under Assumptions 3.1–3.4, if, for any $x \in \mathbb{X}$ ,
then the MAP is transient.
Remark 3.1. Under the assumptions of Corollary 3.1, the quantities in (3.1) and (3.2) are simultaneously finite or infinite. When the internal Markov chain is positive recurrent, the MAP is recurrent as soon as the quantity in (3.1) is infinite, as shown in [Reference Cénac, de Loynes, Offret and Rousselle19, Proposition 2.2], giving rise to a complete characterization of the transient or recurrent type. Such a characterization remains an open question in the case of a null recurrent internal Markov chain since the possibility of a transient MAP for which the quantity in (3.1) remains infinite cannot be ruled out.
Remark 3.2. Intuitively, Assumption 3.3 means that the MAP remains genuinely d-dimensional and is not attracted by a sub-manifold. In addition, it is worth noting that, under Assumption 3.3, $\det(V_n) \geq \alpha^d$ for all $n \geq 1$ and, under Assumption 3.2, $\sup_{n \geq 1} \| V_n \|_2 < \infty$, so that the factor $\det(V_n)$ does not play any role in the nature of the series (3.2).
4. Proofs
Proposition 4.1. Under Assumption 3.2, for any sufficiently small $\delta \in (0,1)$ and any $t \in \delta \mathbb{T}^d$ , where $\delta \mathbb{T}^d$ is the magnification of the hypercube $\mathbb{T}^d$ by $\delta$ , there exist a (deterministic) constant K and $\Theta_n(t)$ with $|\Theta_n(t)| \leq nK\|t\|_\infty^3$ such that
Proof. Fix $\ell \geq 0$ . Under Assumption 3.2, there exists a (deterministic) constant $\kappa > 0$ such that
Then, for all $\delta \in (0,1/2\kappa) \cap (0,1)$ and all $t \in \delta \mathbb{T}^d$ , the $\mathbb{C}$ -valued function $\pi_\ell \,:\, t \rightarrow \log \mathbb{E} \big[{{e}}^{{{i}} \langle t,Z_{\ell+1}-Z_\ell \rangle } \big | X_\ell,X_{\ell+1} \big]$ is three times continuously differentiable. Observe that $\pi_\ell(0)=0$ . Also, the partial derivative with respect to the pth coordinate, $p \in \{1, \ldots, d \}$ , is given by
Finally, the partial derivative with respect to the pth and qth coordinates, $p,q \in \{ 1, \ldots, d \}$ , is written as
Hence, the Taylor expansion at $t=0$ yields, for all $t \in \delta \mathbb{T}^d$ ,
where $R_\ell$ stands for the Lagrange’s remainder in the Taylor expansion. Then, for any $t \in \delta \mathbb{T}^d$ , almost surely,
In fact, (4.1) implies that, for all $\delta \in (0,1/2\kappa) \cap (0,1)$ and all $t \in \delta \mathbb{T}^d$, $\big| \mathbb{E} \big[ {{e}}^{{{i}} \langle t,Z_{\ell+1}-Z_\ell \rangle} \mid X_\ell, X_{\ell+1} \big] \big| \geq \frac12$. Thus, computing explicitly the third-order derivatives of $\pi_\ell$, it follows that, for any $p,q,r \in \{ 1, \ldots, d \}$ and any $t \in \delta \mathbb{T}^d$,
The existence of the deterministic constant K follows immediately from Assumption 3.2. Now,
where $\Theta_n(t)=\sum_{\ell=0}^{n-1} R_\ell(t)$ and $|\Theta_n(t)| \leq n K \|t\|_\infty^3$ .
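The expansion of Proposition 4.1 can be checked numerically for $d=1$. The sketch below uses a fixed sequence of conditional increment laws (arbitrary toy choices standing in for $\mu^{X_\ell,X_{\ell+1}}$ along one trajectory) and verifies that $\log \Pi_n(t)$ agrees with ${\rm i}\, t\, U_n - \tfrac{n}{2} V_n t^2$ up to an error of order $n|t|^3$.

```python
import numpy as np

# d = 1 sketch: a fixed sequence of conditional increment laws (dict z -> prob),
# toy stand-ins for mu^{X_l, X_{l+1}} along one internal trajectory.
laws = [{-1: 0.25, 0: 0.5, 1: 0.25}, {0: 0.25, 1: 0.75}, {-1: 0.5, 1: 0.5}] * 10
n = len(laws)

def phi(law, t):
    """Characteristic function of one conditional increment law."""
    return sum(p * np.exp(1j * t * z) for z, p in law.items())

mean = lambda law: sum(p * z for z, p in law.items())
var = lambda law: sum(p * z ** 2 for z, p in law.items()) - mean(law) ** 2
U_n = sum(mean(law) for law in laws)        # quenched drift
V_n = sum(var(law) for law in laws) / n     # normalized quenched dispersion

t = 0.05                                    # small frequency, t in delta*T^1
log_Pi = sum(np.log(phi(law, t)) for law in laws)   # principal branch near 1
expansion = 1j * t * U_n - 0.5 * n * V_n * t ** 2
assert abs(log_Pi - expansion) <= 10 * n * abs(t) ** 3   # cubic-order remainder
```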
Proof of Theorem 3.1. Let $\delta \in (0,1)$ be such that Proposition 4.1 holds. Then, by Proposition 2.1,
The change of variables $u=t/\sqrt{n}$ in the second term yields
Let $a \in (0,\delta \sqrt{n})$. The quantity in (4.3) can be decomposed as follows:
Finally, noting that $\| t / \sqrt{n}\|_\infty \leq \delta$ as soon as $t \in a \mathbb{T}^d$ , so that Proposition 4.1 holds, (4.2) and (4.3) imply
where
Lemma 4.1. Under Assumptions 3.2 and 3.3, for all $n \geq 1$ and $a \in ( 0, \delta \sqrt{n})$ ,
where K is defined in Proposition 4.1.
Proof. Let $t \in a \mathbb{T}^d$ . By Proposition 4.1, under Assumptions 3.2 and 3.3,
The result follows by integrating with respect to $t \in a\mathbb{T}^d$ .
Lemma 4.2. Under Assumption 3.3, for all $n \geq 1$ and $a > 0$ ,
Proof. Assumption 3.3 implies
Lemma 4.3. Under Assumptions 3.2 and 3.3, for any $\delta > 0$ sufficiently small and any $a \in ( 0, \delta \sqrt{n} )$ ,
Proof. As a matter of fact,
Under Assumptions 3.2 and 3.3, by Proposition 4.1 the integrand in (4.4) satisfies $|\Pi_n(t/\sqrt{n})| \leq \exp\! \big\{ {-}\frac12 \langle t, V_nt \rangle + |\Theta_n(t/\sqrt{n})| \big\}$ . Moreover, since $t \in \delta \sqrt{n} \mathbb{T}^d \setminus a \mathbb{T}^d$ ,
and $-\frac12 \langle t,V_n t \rangle \leq -\frac12 \alpha \|t\|_2^2$ . Thus, choosing any $\delta > 0$ small enough that $K \delta \pi d \leq \alpha/4$ (and that Proposition 4.1 holds), the integrand in (4.4) is bounded above by $\exp\!\big({-}\beta \|t\|_2^2\big)$ for some $\beta > 0$ . Consequently, there exist $K_3,K_4 > 0$ such that $|A_3(z,n,a,\delta)| \leq K_3 {{e}}^{-K_4 a^2}$ .
Lemma 4.4. Under Assumption 3.4, for any $\delta \in (0,1)$ there exist $\gamma \in (0,1)$ and a constant $C > 0$ such that $\sup_{z \in \mathbb{Z}^d} |A_4(z,n,\delta)| \leq (2\pi)^d C \gamma^n$ .
Proof. Fix $\delta \in (0,1)$ . Then, under Assumption 3.4, there exist $\gamma \in (0,1)$ and a constant $C > 0$ such that, for all $t \in \mathbb{T}^d \setminus \delta \mathbb{T}^d$ , $\|P_t^n\mathbf{1}\|_\infty \leq C \gamma^n$ . The result follows immediately.
Combining Lemmas 4.1–4.4, the error terms vanish as $n \to \infty$ by setting $a=\delta n^{\varepsilon/3}$ for any $\varepsilon \in \big(0,\frac12\big)$.
Finally, noting that $V_n$ is invertible by Assumption 3.3, the proof of Theorem 3.1 ends with the help of the Fourier transform
5. Application: Random walks with local drift
This section is dedicated to the illustration of Theorem 3.1 and Corollary 3.1. The examples presented below are largely inspired by those studied in [Reference Campanino and Petritis12], which in turn were previously introduced in [Reference Matheron and De Marsily49]. The discrete-time processes considered in [Reference Campanino and Petritis12] are basically simple random walks on directed graphs built upon $\mathbb{Z}^2$ . More precisely, whereas the random walker can move freely toward North or South at each step, he can only move toward East or West depending on a prescribed environment. As shown in [Reference Campanino and Petritis12], different environments lead to different behaviors for the simple random walk, explaining why a significant part of the literature considers these models (see, for instance, [Reference Brémont8, Reference Brémont9, Reference Brémont10, Reference Campanino and Petritis13, Reference Castell, Guillotin-Plantard, Pène and Schapira14, Reference de Loynes24, Reference Guillotin-Plantard and Le Ny33, Reference Hervé and Pène41, Reference Pène54] or, even more recently, [Reference Bosi, Hu and Peres7]).
In the model considered below, the random walker can move simultaneously vertically and horizontally in one of the North West, North East, South East, or South West quarter-planes. While the choice between North and South remains unrestricted, the choice between East and West is dictated by a prescribed environment. Thus, the case of a periodic environment considered in Section 5.2 has to be compared to the model $\mathbb{L}$ of [Reference Campanino and Petritis12]. The model with two directed half-planes of Section 5.3 is analogous to the model $\mathbb{H}$ of [Reference Campanino and Petritis12]. Finally, the randomly directed case of Section 5.4 is reminiscent of the model $\mathbb{O}$.
5.1. The model of random walks with local drift
In this section the internal Markov chain X of the MAP (X, Z) will be the simple random walk on $\mathbb{X}=\mathbb{Z}$ . More precisely, let $(\xi_k)_{k \geq 1}$ be a sequence of i.i.d. random variables, where $\xi_1$ is uniformly distributed on $\{ -1,1 \}$ . Let $X_0$ be a $\mathbb{Z}$ -valued random variable and, for $n \geq 1$ , $X_n\,:\!=\,X_0+\xi_1+\cdots+\xi_n$ .
Now, let us define the additive part Z of the MAP (X, Z). To this end, introduce a sequence $((T_{{{h}},k},T_{{{v}},k}))_{k \geq 0}$ of independent copies of some $\mathbb{N}^2$ -valued random vector $(T_{{h}},T_{{v}})$ , and a sequence $\varepsilon=(\varepsilon_x)_{x \in \mathbb{Z}}$ of $\mathbb{Z}$ -valued random variables. Then, the additive part Z taking values in $\mathbb{Z}^2$ is defined, for all $n \geq 0$ , as
Assumption 5.1.
-
(i) The internal Markov chain X, the sequence $((T_{{{h}},k},T_{{{v}},k}))_{k \geq 0}$, and the environment $\varepsilon$ are mutually independent.
-
(ii) $\mathbb{E}[T_{{h}}]=m_{{h}} \in (0,\infty)$ and $\mathbb{E}[T_{{v}}] = m_{{v}} \in (0,\infty)$ .
-
(iii) $\mathbb{V}[T_{{h}}] = \sigma^2_{{h}} > 0$ and $\mathbb{V}[T_{{v}}] = \sigma^2_{{v}} > 0$.
-
(iv) $T_{{h}}$ and $T_{{v}}$ are independent.
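The displayed definition of Z is not reproduced in this extract; reading the increment off Proposition 5.2 as $(\varepsilon_{X_\ell}T_{{{h}},\ell},(X_{\ell+1}-X_\ell)T_{{{v}},\ell})$ — an assumption here — the model can be simulated as follows. The waiting-time law with values in $\{1,2\}$ and the half-plane environment (East on $x \geq 0$, West on $x < 0$, in the spirit of Section 5.3) are toy choices satisfying Assumptions 5.1 and 5.2.

```python
import random

def simulate_map(n, eps, seed=0):
    """One trajectory of (X, Z): X is the simple random walk on Z; at each
    step, the walker moves eps(X_l) * T_h horizontally (direction dictated by
    the environment) and (X_{l+1} - X_l) * T_v vertically (free direction).
    This increment convention is read off Proposition 5.2 (an assumption)."""
    rng = random.Random(seed)
    X = 0
    Z = (0, 0)
    traj = [(X, Z)]
    for _ in range(n):
        T_h = rng.choice((1, 2))    # horizontal waiting time, N-valued toy law
        T_v = rng.choice((1, 2))    # vertical waiting time
        xi = rng.choice((-1, 1))    # free vertical direction
        Z = (Z[0] + eps(X) * T_h, Z[1] + xi * T_v)
        X += xi
        traj.append((X, Z))
    return traj

# two directed half-planes: East on x >= 0, West on x < 0
traj = simulate_map(1000, lambda x: 1 if x >= 0 else -1)
```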
Proposition 5.1. Under Assumption 5.1, conditionally on the environment $\varepsilon$, the quenched drift and quenched dispersion are respectively given by
Proof. Conditionally on the environment $\varepsilon$ , the quenched drift and quenched dispersion are respectively given by
By independence, for all $\ell \geq 0$ ,
The result follows immediately.
Proposition 5.2. Under Assumption 5.1, conditionally on the environment $\varepsilon$, the Fourier transform operator is given, for all $x \in \mathbb{Z}$ and $t \in \mathbb{T}^2$, by
where P is the Markov operator associated with the simple random walk X, and $\widehat{\mu_\varepsilon^{x,y}}(t)=\varphi_{T_{{h}}}(\varepsilon_x t_1)\varphi_{T_{{v}}}((y-x)t_2)$ , with $x,y \in \mathbb{Z}$ such that $P(x,y)>0$ , and $t=(t_1,t_2) \in \mathbb{T}^2$ , where $\varphi_{T_{{h}}}$ and $\varphi_{T_{{v}}}$ are the characteristic functions of $T_{{h}}$ and $T_{{v}}$ respectively.
Proof. Let $t=(t_1,t_2) \in \mathbb{T}^2$ . Then
In order to apply Theorem 3.1 and Corollary 3.1, it is necessary to make further assumptions.
Assumption 5.2.
-
(i) There exists $B>0$ such that $\mathbb{P}\big[\sup_{x \in \mathbb{Z}} |\varepsilon_x| \geq B\big]=0$.
-
(ii) For all $x \in \mathbb{Z}$ , $\mathbb{P}[\varepsilon_x=0]=0$ .
-
(iii) $\mathbb{E}\big[T_{{h}}^3\big]$ and $\mathbb{E}\big[T_{{v}}^3\big]$ are finite.
-
(iv) The distributions of $T_{{h}}$ and $T_{{v}}$ are aperiodic.
Proposition 5.3. Under Assumptions 5.1 and 5.2, conditionally on the environment $\varepsilon$ , Assumptions 3.1, 3.2, 3.3, and 3.4 are fulfilled.
Proof. It is well known that the simple random walk X is irreducible recurrent (Assumption 3.1).
The uniform moment condition involved in Assumption 3.2 is equivalent to
The latter is straightforward by Assumptions 5.2(i) and (iii).
The uniform ellipticity condition of Assumption 3.3 follows immediately from Assumption 5.2(ii).
Finally, since $T_{{h}}$ and $T_{{v}}$ are aperiodic, $|\varphi_{T_{{h}}}(t)|<1$ and $|\varphi_{T_{{v}}}(t)|<1$ for all $t \in [{-}\pi,\pi) \setminus \{ 0 \}$. Consequently, conditionally on the environment $\varepsilon$, the family of probability measures $(\mu^{x,y}_\varepsilon)_{x,y \in \mathbb{Z}}$ is uniformly aperiodic, and Assumption 3.4 follows by Proposition 2.2.
As a matter of fact, the quantity $({1}/{2n})\langle (U_n(\varepsilon)-z),V_n^{-1}(\varepsilon)(U_n(\varepsilon)-z) \rangle$ , $n \geq 1$ , $z=(z_1,z_2) \in \mathbb{Z}^2$ , appearing in Theorem 3.1 is given by
Also, it is worth noting that, under Assumptions 5.2(i) and (ii), for all $n \geq 1$ ,
Consequently, this factor will be omitted hereafter when applying Corollary 3.1.
The end of this section is devoted to the application of Theorem 3.1 or Corollary 3.1 to three different environments $\varepsilon$. The first two are deterministic, whereas the last one is random.
5.2. Periodic environment
In this example, the environment $\varepsilon$ is supposed periodic. Namely, it is assumed that, for all $x \in \mathbb{Z}$ , $\varepsilon_x=({-}1)^x$ .
Proposition 5.4. Let $z \in \mathbb{Z}^2$ . Then, under Assumptions 5.1 and 5.2, for all $x \in \mathbb{Z}$ ,
Proof. First observe that, for all $n \geq 1$ ,
Then, setting $z=(z_1,z_2)$ ,
As a matter of fact, the first exponential factor converges to 1 as $n \to \infty$.
For the second factor, it suffices to remark that $n^{-1/2} \big[m_{{v}}(X_n-X_0)-z_2\big]$ converges in distribution to a standard Gaussian random variable. Since the map $t \mapsto {{e}}^{-t^2/(2\sigma_{{v}}^2)}$ is continuous and bounded, it follows that
By Theorem 3.1, the result of the proposition follows immediately.
The estimate of Proposition 5.4 implies that the series in Corollary 3.1 is infinite. However, since the internal Markov chain is null recurrent, the recurrence of the MAP cannot be deduced directly. Actually, it turns out that, taking advantage of the periodicity of the environment, the distribution of the additive component can be described with a simpler internal Markov chain.
Proposition 5.5. Suppose the MAP (X, Z) is irreducible. Then, under Assumptions 5.1 and 5.2, the MAP Z is recurrent.
Proof. The key observation is that $M=((\varepsilon_{X_n},\xi_{n+1}))_{n \geq 0}$ is a Markov chain taking values in the finite space $\{-1,1 \}^{2}$. Its transition probabilities are given, for all $x,x^\prime, y, y^\prime \in \{-1,1 \}$, by $Q((x,y),(x^\prime,y^\prime))\,:\!=\,Q_1(x,x^\prime)Q_2(y,y^\prime),$ with
Now, let us set, for all $n \geq 1$ , $\widetilde{Z}_n-\widetilde{Z}_0 = \Big( \sum_{k=0}^{n-1} M_k^{(1)} T_{{{h}},k}, \sum_{k=0}^{n-1} M_{k}^{(2)} T_{{{v}},k} \Big)$ , where $M^{(1)}$ and $M^{(2)}$ are respectively the first and second component of M. From (5.1), the following equality in distribution holds:
From this equality in distribution, the MAP $(M,\widetilde Z)$ inherits irreducibility from the MAP (X, Z). Moreover, since M is positive recurrent, applying [Reference Cénac, de Loynes, Offret and Rousselle19] shows that the MAP $(M,\widetilde Z)$ is recurrent. Consequently, (X, Z) is recurrent.
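A small numerical check of the key observation above (not part of the original argument): in the periodic environment, $\varepsilon_{x\pm1}=({-}1)^{x\pm1}=-\varepsilon_x$, so the first component of $M_n=(\varepsilon_{X_n},\xi_{n+1})$ alternates deterministically and $M$ indeed evolves on the finite space $\{-1,1\}^2$.

```python
import random

rng = random.Random(2)
eps = lambda x: (-1) ** x         # periodic environment
x, signs = 0, []
for _ in range(50):
    signs.append(eps(x))          # first component of M_n
    x += rng.choice((-1, 1))      # one step of the simple random walk

# Each step changes x by +/- 1, hence flips the sign of (-1)^x.
alternates = all(signs[k + 1] == -signs[k] for k in range(len(signs) - 1))
```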
5.3. Two directed half-planes
The reduction made in the proof of Proposition 5.5 is naturally not always possible, as illustrated by the example in this section. This obstruction was the main motivation for the extension to the null recurrent case.
The two directed half-planes presented below correspond to the environment $\varepsilon$ defined, for all $x \in \mathbb{Z}$ , by $\varepsilon_x=2\mathbf{1}_{\mathbb{Z}_+}(x)-1$ , where $\mathbb{Z}_+$ is the set of non-negative integers.
Proposition 5.6. Assume that the MAP (X, Z) is irreducible. Then, under Assumptions 5.1 and 5.2, (X, Z) is transient.
Proof. By irreducibility, Equation (5.2) with $z_1=z_2=0$, and the fact that, for all $n \geq 1$, $\det(V_n)=\sigma_{{h}}^2\sigma_{{v}}^2$, the transience of (X, Z) follows from Corollary 3.1 by proving that the series
is finite for all $x \in \mathbb{Z}$ . Thereafter, a simple computation yields, for all $n \geq 1$ ,
Then, verifying that the function
is Lipschitz with constant $k_n = {{e}}^{-\frac18}\sqrt{{n m_{{h}}^2}/{\sigma_{{h}}^2}}$, it follows that
where $\Gamma$ is distributed according to the arc-sine law on $[0,1]$, and $d_{\mathcal W}$ stands for the Kantorovich–Rubinstein metric on probability measures (see [Reference Villani63, Chapter 6, p. 94]).
It is shown in [Reference Goldstein and Reinert29, Theorem 1.2] that $d_{\mathcal W}(\mathcal{L}(N_n(\mathbb{Z}_+)/n),\mathcal L(\Gamma))=O(1/n)$ , so that the series in (5.4) is finite if and only if, applying Fubini’s theorem,
However, the singularity at $y=1$ coming from the density of the arc-sine distribution is integrable, and so is the singularity at $y=\frac12$ since $y \mapsto \log y$ is locally integrable in a right neighborhood of 0.
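The arc-sine limit underlying this proof — the occupation fraction $N_n(\mathbb{Z}_+)/n$ of the simple random walk converges in distribution to the arc-sine law on $[0,1]$ — can be illustrated numerically. The sketch below is not from the paper; it compares the empirical probability that the occupation fraction is at most $\frac12$ with the arc-sine value $F(\frac12)=({2}/{\pi})\arcsin\sqrt{1/2}=\frac12$ (the sample sizes are arbitrary choices).

```python
import math
import random

def occupation_fraction(n_steps, rng):
    """Fraction of time a simple random walk started at 0 spends in
    Z_+ = {0, 1, 2, ...} during its first n_steps steps."""
    x, count = 0, 0
    for _ in range(n_steps):
        if x >= 0:
            count += 1
        x += rng.choice((-1, 1))
    return count / n_steps

rng = random.Random(1)
samples = [occupation_fraction(1_000, rng) for _ in range(2_000)]

# Arc-sine CDF: F(y) = (2/pi) * arcsin(sqrt(y)); by symmetry F(1/2) = 1/2.
arcsine_half = (2 / math.pi) * math.asin(math.sqrt(0.5))
empirical = sum(s <= 0.5 for s in samples) / len(samples)
```

The empirical value should be close to $\frac12$, and a histogram of the samples would concentrate near 0 and 1, reflecting the singularities of the arc-sine density.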
5.4. Randomly directed random walks
The last family of examples involves a random environment. More precisely, it is assumed that $\varepsilon$ is a sequence of i.i.d. random variables whose common marginal distribution is denoted by $\pi$.
Proposition 5.7. In addition to Assumptions 5.1 and 5.2, suppose that $\varepsilon=(\varepsilon_x)_{x \in \mathbb{Z}}$ is a sequence of i.i.d. random variables such that $\mathbb{E}[\varepsilon_0]=0$ . Then, for $\pi^{\otimes \mathbb{Z}}$ -almost-every sequence $\varepsilon$ , if the MAP (X, Z) in the environment $\varepsilon$ is irreducible, then it is transient.
Proof. Applying the Markov inequality to the probability measure $\pi^{\otimes \mathbb{Z}}$ , it is only necessary to consider
Without loss of generality we may suppose that $x=0$ , since $\pi^{\otimes \mathbb{Z}}$ is shift invariant. Noting that $n \leq \sum_{k=0}^{n-1} \varepsilon_{X_k}^2 \leq n B^2$ by Assumptions 5.2(i) and (ii), for any $\alpha \in \big(\frac12,\frac34\big)$ we have
Now, set $D_n = \big\{ x \in \mathbb{R} \,:\, n^{3/4} x \in [{-}n^\alpha, n^{\alpha}] \cap \mathbb{Z} \big\}$ . By [Reference Castell, Guillotin-Plantard, Pène and Schapira14, Theorem 1], with the very same function C, the following estimate holds:
where the $o\big(n^{-3/4}\big)$ is uniform in $x \in \mathbb{R}$ . Noting that C is bounded (see, for instance, [Reference Castell, Guillotin-Plantard, Pène and Schapira14, Lemma 4]), the probability above behaves like $O\big(n^{\alpha-3/4}\big)$ , which ends the proof of the proposition.
Acknowledgements
The author thanks the two anonymous referees for their very detailed and thorough reviews that significantly improved the readability and content of the paper.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare during the preparation or publication of this article.