
Statistical aspects of mean field coupled intermittent maps

Published online by Cambridge University Press:  19 July 2023

WAEL BAHSOUN
Affiliation:
Department of Mathematical Sciences, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK (e-mail: [email protected])
ALEXEY KOREPANOV*
Affiliation:
Department of Mathematical Sciences, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK (e-mail: [email protected])

Abstract

We study infinite systems of mean field weakly coupled intermittent maps in the Pomeau–Manneville scenario. We prove that the coupled system admits a unique ‘physical’ stationary state, to which all absolutely continuous states converge. Moreover, we show that suitably regular states converge polynomially.

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

Mean field coupled dynamics can be thought of as a dynamical system with n ‘particles’ with states $x_1, \ldots , x_n$ evolving according to an equation of the type

$$ \begin{align*} x_k \mapsto T \bigg( x_k, \varepsilon \frac{\delta_{x_1} + \cdots + \delta_{x_n}}{n} \bigg). \end{align*} $$

Here T is some transformation, $\varepsilon \in {\mathbb R}$ is the strength of coupling and $\delta _{x_k}$ are the delta functions, so $(\delta _{x_1} + \cdots + \delta _{x_n}) / n$ is a probability measure describing the ‘mean state’ of the system.

As $n \to \infty $ , it is natural to consider the evolution of the distribution of particles: if $\mu $ is a probability measure describing distribution of particles, then one looks at the operator that maps $\mu $ to the distribution of $T(x, \varepsilon \mu )$ , where $x \sim \mu $ is random.
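For readers who prefer to experiment, the finite-n update above is easy to simulate. The following minimal sketch is ours, not the authors': it evolves n particles under an assumed toy coupling in which the site map is an intermittent map and the mean field enters through the empirical average of $\sin(2\pi x)$; the map, the observable and all parameter values are illustrative choices only.

```python
import numpy as np

# Minimal sketch of the finite-n mean field update x_k -> T(x_k, eps * empirical measure).
# The coupling below is an assumed toy example (an intermittent-type site map coupled
# through the empirical average of sin(2*pi*x)); it is not one of the maps studied in
# this paper, just an illustration of the update rule.

rng = np.random.default_rng(0)
n, eps, gamma = 10_000, 0.05, 0.5

x = rng.random(n)                                   # initial states of the n 'particles'

def step(x, eps, gamma):
    m = np.mean(np.sin(2 * np.pi * x))              # observable of the empirical measure
    y = x * (1 + x**gamma) + eps * m * x * (1 - x)  # site dynamics + mean field term
    return np.mod(y, 1.0)

for _ in range(100):
    x = step(x, eps, gamma)

# The histogram of x approximates the evolved distribution of particles.
hist, edges = np.histogram(x, bins=50, range=(0.0, 1.0), density=True)
print(hist[:5])
```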

In chaotic dynamics, mean field coupled systems were first studied by Keller [5] in the case when T is a perturbation of a uniformly expanding circle map, followed, among others, by Bálint et al. [2], Blank [3], Galatolo [4], and Sélley and Tanzi [9]. The case when T is a perturbation of an Anosov diffeomorphism has been covered by Bahsoun, Liverani and Sélley [1] (see in particular [1, §2.2] for a motivation of such a study). See the paper by Galatolo [4] for a general framework when the site dynamics admits exponential decay of correlations; the results of [4] also apply to certain mean field coupled random systems. We refer the reader to Tanzi [10] for a recent review of the topic and to [1] for connections with classical and important partial differential equations.

In this work, we consider the situation where T is a perturbation of the prototypical chaotic map with non-uniform expansion and polynomial decay of correlations: the intermittent map on the unit interval $[0,1]$ in the Pomeau–Manneville scenario [8]. We restrict to the case when the coupling is weak, that is, $\varepsilon $ is small.

Our results apply to a wide class of intermittent systems satisfying standard assumptions (see §2). To keep the introduction simple, here we consider a very concrete example.

Fix $\gamma _* \in (0,1)$ and let, for $\varepsilon \in {\mathbb R}$ , $h \in L^1[0,1]$ and $x \in [0,1]$ ,

(1.1) $$ \begin{align} T_{\varepsilon h} (x) = x (1 + x^{\gamma_* + \varepsilon \gamma_h}) + \varepsilon \varphi_h(x) \,\mod\! 1, \end{align} $$

where

$$ \begin{align*} \gamma_h = \int_0^1 h(s) \sin (2 \pi s) \, ds \quad \text{and} \quad \varphi_h(x) = x^2 (1-x) \int_0^1 h(s) \cos (2 \pi s) \, ds. \end{align*} $$

This way, $T_{\varepsilon h}$ is a perturbation of the intermittent map $x \mapsto x (1 + x^{\gamma _*}) \,\mod \! 1$ . Informally, $\gamma _h$ changes the degree of the indifferent point at $0$ and $\varphi _h$ is responsible for perturbations away from $0$ .
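The map in equation (1.1) is straightforward to evaluate numerically. Below is a minimal sketch of ours (not the authors'): h is represented by its values on a uniform grid, the integrals defining $\gamma_h$ and $\varphi_h$ are approximated by Riemann sums, and the grid size and parameter values are arbitrary illustrative choices.

```python
import numpy as np

# A direct sketch of the map T_{eps h} in equation (1.1).  The density h is represented
# by its values on a uniform grid of [0,1]; gamma_h and the coefficient of phi_h are
# computed by a plain Riemann sum (an implementation choice, not part of the paper).

gamma_star, eps = 0.5, 0.05
s = np.linspace(0.0, 1.0, 10_001)
ds = s[1] - s[0]

def T_eps_h(x, h_vals):
    gamma_h = np.sum(h_vals * np.sin(2 * np.pi * s)) * ds
    c_h     = np.sum(h_vals * np.cos(2 * np.pi * s)) * ds
    phi_h   = x**2 * (1 - x) * c_h
    return np.mod(x * (1 + x**(gamma_star + eps * gamma_h)) + eps * phi_h, 1.0)

# For h = 1 both integrals vanish, so T_{eps h} reduces to x(1 + x^{gamma_star}) mod 1.
h_vals = np.ones_like(s)
x = np.linspace(0.001, 0.999, 5)
print(T_eps_h(x, h_vals))
```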

We restrict to $\varepsilon \in [-\varepsilon _0, \varepsilon _0]$ with $\varepsilon _0$ small and to h non-negative with $\int _0^1 h(x) \, dx = 1$ (that is, h is a probability density).

Let ${\mathcal L}_{\varepsilon h} \colon L^1[0,1] \to L^1[0,1]$ be the transfer operator for $T_{\varepsilon h}$ :

(1.2) $$ \begin{align} ({\mathcal L}_{\varepsilon h} g) (x) = \sum_{y \in T_{\varepsilon h}^{-1}(x)} \frac{g(y)}{T_{\varepsilon h}'(y)} , \end{align} $$

and let

(1.3) $$ \begin{align} {\mathcal L}_{\varepsilon} h = {\mathcal L}_{\varepsilon h} h. \end{align} $$

We call ${\mathcal L}_\varepsilon $ the self-consistent transfer operator. Observe that ${\mathcal L}_{\varepsilon }$ is nonlinear and that ${\mathcal L}_\varepsilon h$ is the density of the distribution of $T_{\varepsilon h} (x)$ , if x is distributed according to the probability measure with density h.
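As an illustration of this description of ${\mathcal L}_\varepsilon$, one can approximate ${\mathcal L}_\varepsilon h$ by Monte Carlo: draw samples from h, estimate the integrals defining $\gamma_h$ and $\varphi_h$ as sample averages, push the samples through $T_{\varepsilon h}$ and histogram the result. The sketch below does exactly this; the choice of h (a Beta(2,2) density), the sample size and the parameter values are assumptions made only for illustration.

```python
import numpy as np

# Monte Carlo illustration of the self-consistent operator: if x ~ h, then the
# distribution of T_{eps h}(x) has density L_eps h.  Here h is the Beta(2,2) density
# (an arbitrary choice), gamma_h and the phi_h coefficient are estimated from the
# sample itself, and a histogram of the images approximates L_eps h.

rng = np.random.default_rng(1)
gamma_star, eps = 0.5, 0.05

x = rng.beta(2.0, 2.0, size=200_000)          # samples with density h
gamma_h = np.mean(np.sin(2 * np.pi * x))      # estimate of int h(s) sin(2 pi s) ds
c_h     = np.mean(np.cos(2 * np.pi * x))      # estimate of int h(s) cos(2 pi s) ds

y = np.mod(x * (1 + x**(gamma_star + eps * gamma_h)) + eps * x**2 * (1 - x) * c_h, 1.0)

# Histogram of y approximates the density L_eps h; iterating this step approximates L_eps^n h.
density, edges = np.histogram(y, bins=50, range=(0.0, 1.0), density=True)
print(density[:5])
```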

We prove that for sufficiently small $\varepsilon _0$ , the self-consistent transfer operator ${\mathcal L}_\varepsilon $ admits a unique physical (see [1, Definition 2.1]) invariant state $h_\varepsilon $ and that ${\mathcal L}_\varepsilon ^n h$ converges to $h_\varepsilon $ in $L^1$ polynomially for all sufficiently regular h.

Theorem 1.1. There exists $\varepsilon _0 \in (0, 1 - \gamma _*)$ so that each ${\mathcal L}_\varepsilon $ with $\varepsilon \in [-\varepsilon _0, \varepsilon _0]$ , as an operator on probability densities, has a unique fixed point $h_\varepsilon $ . For every probability density h,

$$ \begin{align*} \lim_{n \to \infty} \| {\mathcal L}_\varepsilon^n h - h_\varepsilon \|_{L^1} = 0. \end{align*} $$

Moreover, $h_\varepsilon \in C^\infty (0,1]$ and there are $A, a_1, a_2, \ldots> 0$ such that for all $\ell \geq 1$ and $x \in (0,1]$ ,

(1.4) $$ \begin{align} \int_0^x h_\varepsilon(s) \, ds \leq A x^{1 - 1 / (\gamma_* + \varepsilon_0)} \quad \text{and} \quad \frac{|h_\varepsilon^{(\ell)}(x)|}{h_\varepsilon(x)} \leq \frac{a_\ell}{x^\ell}. \end{align} $$

Theorem 1.2. In the setup of Theorem 1.1, suppose that a probability density h is twice differentiable on $(0,1]$ and satisfies, for some ${\tilde A}, {\tilde a}_1, {\tilde a}_2> 0$ and all $\ell = 1,2$ and $x \in (0,1]$ ,

$$ \begin{align*} \int_0^x h(s) \, ds \leq {\tilde A} x^{1 - 1 / (\gamma_* + \varepsilon_0)} \quad \text{and} \quad \frac{|h^{(\ell)}(x)|}{h(x)} \leq \frac{{\tilde a}_\ell}{x^\ell}. \end{align*} $$

Then,

(1.5) $$ \begin{align} \| {\mathcal L}_\varepsilon^n h - h_\varepsilon \|_{L^1} \leq C n^{ - (1 - \gamma_* - \varepsilon_0) / (\gamma_* + \varepsilon_0)} , \end{align} $$

where C depends only on ${\tilde A}, {\tilde a}_1, {\tilde a}_2$ and $\varepsilon _0$ .

Remark 1.3. The restriction $\varepsilon _0 < 1 - \gamma _*$ serves to guarantee that $\gamma _* + \varepsilon \gamma _h$ is bounded away from $1$ and that the right-hand side of equation (1.5) converges to zero.

Remark 1.4. A curious corollary of Theorem 1.1 is that the density of the unique absolutely continuous invariant probability measure for the map $x \mapsto x (1 + x^{\gamma_*}) \,\mod\! 1$ is smooth, namely $C^\infty (0,1]$ with the bounds in equation (1.4). Our abstract framework covers such a result also for the Liverani–Saussol–Vaienti maps [7]. To the best of our knowledge, this is the first time such a result is written down. At the same time, we are aware of at least two different unwritten prior proofs which achieve similar or stronger results, one by Damien Thomine and the other by Caroline Wormell.

Remark 1.5. Another example to which our results apply is

$$ \begin{align*} T_{\varepsilon h} (x) = x (1 + x^{\gamma_*})+ \varepsilon x (1-x) \int_0^1h(s) \sin(\pi s) \, ds \,\mod\! 1 , \end{align*} $$

where $\gamma_* \in (0,1)$ and $\varepsilon \in [0, \varepsilon_0]$. This is interesting because now each $T_{\varepsilon h}$ with $\varepsilon > 0$ is uniformly expanding, but the expansion is not uniform in $\varepsilon$. Thus, even for this example, standard operator contraction techniques employed in [4, 5] do not apply.
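A quick numerical check of this claim (our illustration, with h = 1 so that $\int_0^1 h(s) \sin(\pi s) \, ds = 2/\pi$, and an assumed value of $\gamma_*$): differentiating the displayed map gives $T_{\varepsilon h}'(x) = 1 + (1+\gamma_*) x^{\gamma_*} + \varepsilon (2/\pi)(1-2x)$, and the minimum of this derivative over a grid is strictly greater than 1 for each $\varepsilon > 0$ but tends to 1 as $\varepsilon \to 0$.

```python
import numpy as np

# Numerical illustration of Remark 1.5 with h = 1, so that int h(s) sin(pi s) ds = 2/pi:
# the minimal derivative on a grid is > 1 for each eps > 0, but tends to 1 as eps -> 0,
# so the expansion is not uniform in eps.  gamma_star = 0.5 is an assumed value.

gamma_star = 0.5
c_h = 2.0 / np.pi                      # int_0^1 sin(pi s) ds for h = 1
x = np.linspace(1e-6, 1.0, 100_000)

for eps in [0.1, 0.01, 0.001]:
    Tprime = 1 + (1 + gamma_star) * x**gamma_star + eps * c_h * (1 - 2 * x)
    print(eps, Tprime.min())
```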

Remark 1.6. Let $h_\varepsilon $ be as in Theorem 1.1. A natural question is to study the regularity of the map $\varepsilon \mapsto h_\varepsilon $ . We expect that it should be differentiable in a suitable topology.

The paper is organized as follows. Theorems 1.1 and 1.2 are corollaries of the general results in §2, where we introduce the abstract framework and state the abstract results. The abstract proofs are carried out in §3, and in §4, we verify that the specific map in equation (1.1) fits the abstract assumptions.

2 Assumptions and results

We consider a family of maps $T_{\varepsilon h} \colon [0,1] \to [0,1]$ , where $\varepsilon \in [-\varepsilon _*, \varepsilon _*]$ , $\varepsilon _*> 0$ , and h is a probability density on $[0,1]$ .

We require that each such $T_{\varepsilon h}$ is a full branch increasing map with finitely many branches, i.e. there is a finite partition of the interval $(0,1)$ into open intervals $B_{\varepsilon h}^k$ , modulo their endpoints, such that each restriction $T_{\varepsilon h} \colon B_{\varepsilon h}^k \to (0,1)$ is an increasing bijection.

We assume that each restriction $T_{\varepsilon h} \colon B_{\varepsilon h}^k \to (0,1)$ satisfies the following assumptions with the constants independent of $\varepsilon $ , h or the branch.

  (a) $T_{\varepsilon h}$ is $r+1$ times continuously differentiable with $r \geq 2$.

  (b) There are $c_\gamma > 0$, $C_\gamma > 1$ and $\gamma \in [0,1)$ such that

    (2.1) $$ \begin{align} 1 + c_\gamma x^\gamma \leq T_{\varepsilon h}'(x) \leq C_\gamma. \end{align} $$
  (c) Denote $w = 1 / T^{\prime}_{\varepsilon h}$. There are $b_1, \ldots, b_r > 0$ and $\chi_* \in (0,1]$ so that for all $1 \leq \ell \leq r$, $0 \leq j \leq \ell$ and each monomial $w_{\ell,j}$ in the expansion of $(w^\ell)^{(\ell-j)}$,

    (2.2) $$ \begin{align} \frac{w^\ell}{\chi_\ell} \leq \frac{1}{\chi_\ell \circ T_{\varepsilon h}} - b_\ell \frac{|w_{\ell,j}|}{\chi_j} , \end{align} $$
    where $\chi_\ell(x) = \min \{ x^\ell , \chi_* \}$. (For example, the expansion of $(w^3)''$ is $6w (w')^2 + 3 w^2 w''$.)
  (d) If $\partial B^k_{\varepsilon h} \not\ni 0$, that is, $B^k_{\varepsilon h}$ is not the leftmost branch, then $T_{\varepsilon h}$ has bounded distortion:

    (2.3) $$ \begin{align} \frac{T_{\varepsilon h}''}{(T_{\varepsilon h}')^2} \leq C_d , \end{align} $$
    with $C_d > 0$.

Remark 2.1. Assumption (c) is unusual, but we did not see a way to replace it with something natural. At the same time, it is straightforward to verify and to apply. It plays the role of a distortion bound in $C^r$ adapted to an intermittency at $0$ .
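The parenthetical example in assumption (c) can be checked symbolically; the short sketch below (our illustration, not part of the verification of equation (2.2) itself) confirms that the expansion of $(w^3)''$ is indeed $6w(w')^2 + 3w^2 w''$.

```python
import sympy as sp

# Symbolic check of the example in assumption (c): the expansion of (w^3)'' equals
# 6 w (w')^2 + 3 w^2 w''.  This only verifies the stated identity.

x = sp.symbols('x')
w = sp.Function('w')(x)

lhs = sp.diff(w**3, x, 2)
rhs = 6 * w * sp.diff(w, x)**2 + 3 * w**2 * sp.diff(w, x, 2)
print(sp.simplify(lhs - rhs))          # prints 0
```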

In addition to the above, we assume that the transfer operators corresponding to $T_{\varepsilon h}$ vary nicely in h. We state this formally in equation (2.6) after we introduce the required notation.

Define the transfer operators ${\mathcal L}_{\varepsilon h}$ and ${\mathcal L}_\varepsilon $ as in equations (1.2) and (1.3).

For an integer $k \geq 1$ , let $H^k$ denote the set of k-Hölder functions $g \colon (0,1] \to (0, \infty )$ , that is, such that g is $k - 1$ times continuously differentiable with $g^{(k-1)}$ Lipschitz. Denote ${\mathrm {Lip}}_g (x) = \limsup _{y \to x} |g(x) - g(y)| / |x - y|$ .

Suppose that $a_1, \ldots , a_r> 0$ . For $1 \leq k \leq r$ , let

(2.4) $$ \begin{align} \begin{aligned} {\mathcal D}^k = \bigg\{ g \in H^k : \ & \frac{|g^{(\ell)}|}{g} \leq \frac{a_\ell}{\chi_\ell} \text{ for all } 1 \leq \ell < k, \frac{{\mathrm{Lip}}_{g^{(k-1)}}}{g} \leq \frac{a_k}{\chi_k} \bigg\}. \end{aligned} \end{align} $$

Take $A> 0$ and let

(2.5) $$ \begin{align} {\mathcal D}^k_1 = \bigg\{ g \in {\mathcal D}^k : \int_0^1 g(s) \, ds = 1 , \ \int_0^x g(s) \, ds \leq A x^{1 - \gamma} \bigg\}. \end{align} $$

Remark 2.2. If $g \in {\mathcal D}^1_1$ , then $g(x) \leq C x^{-\gamma }$ , where C depends only on $a_1$ and A.

Now and for the rest of the paper, we fix $a_1, \ldots , a_r$ and A so that ${\mathcal D}^k$ and ${\mathcal D}_1^k$ are non-empty and invariant under ${\mathcal L}_{\varepsilon h}$ . This can be done thanks to the following lemma.

Lemma 2.3. There are $a_1, \ldots , a_r, A> 0$ such that for all $1 \leq q \leq r$ :

  (a) $g \in {\mathcal D}^q$ implies ${\mathcal L}_{\varepsilon h} g \in {\mathcal D}^q$ ;

  (b) $g \in {\mathcal D}^q_1$ implies ${\mathcal L}_{\varepsilon h} g \in {\mathcal D}^q_1$ .

Moreover, given $C> 0$ , we can ensure that $\min \{a_1, \ldots , a_r, A \}> C$ .

The proof of Lemma 2.3 is postponed to §3.

Finally, we assume that there are $0 \leq \beta < \min \{ \gamma , 1 - \gamma \}$ and $C_\beta> 0$ such that if $h_0, h_1 \in L^1$ and $v \in {\mathcal D}^2_1$ , then

(2.6) $$ \begin{align} {\mathcal L}_{\varepsilon h_0} v - {\mathcal L}_{\varepsilon h_1} v = \delta (f_0 - f_1), \end{align} $$

for some $f_0, f_1 \in {\mathcal D}^1_1$ with $f_0(x), f_1(x) \leq C_\beta x^{-\beta }$ and $\delta \leq |\varepsilon | C_\beta \|h_0 - h_1\|_{L^1}$ .

Let ${\mathbf C} = (\varepsilon _*, r, c_\gamma , C_\gamma , \gamma , b_1, \ldots , b_r, \chi _*, C_d, A, a_1, \ldots , a_r, \beta , C_\beta )$ be the collection of constants from the above assumptions.

Our main abstract result is the following theorem.

Theorem 2.4. There exists $\varepsilon _0> 0$ such that for every $\varepsilon \in [-\varepsilon _0, \varepsilon _0]$ :

  (a) there exists $h_\varepsilon $ in ${\mathcal D}^r_1$ so that for every probability density h,

    $$ \begin{align*} \lim_{n \to \infty} \| {\mathcal L}_\varepsilon^n h - h_\varepsilon \|_{L^1} = 0; \end{align*} $$
  (b) let ${\widetilde {{\mathcal D}}}^2_1$ be a version of ${\mathcal D}^2_1$ with constants ${\tilde A}, {\tilde a}_1, {\tilde a}_2$ in place of $A, a_1, a_2$ . (We do not require that ${\widetilde {{\mathcal D}}}^2_1$ is invariant.) Then for every $h \in {\widetilde {{\mathcal D}}}^2_1$ ,

    $$ \begin{align*} \| {\mathcal L}_\varepsilon^n h - h_\varepsilon \|_{L^1} \leq C n^{1-1/\gamma} , \end{align*} $$
    where C depends only on ${\mathbf C}$ and ${\tilde A}, {\tilde a}_1, {\tilde a}_2$ .

3 Proofs

In this section, we prove Lemma 2.3 and Theorem 2.4. The latter follows from Lemma 3.3, and Propositions 3.6 and 3.8.

Throughout, we work with maps $T_{\varepsilon h}$ as per our assumptions, in particular, $\varepsilon $ is always assumed to belong to $[-\varepsilon _*, \varepsilon _*]$ and h is always a probability density.

3.1 Invariance of ${\mathcal D}^q$ , ${\mathcal D}^q_1$ and distortion bounds

We start with the proof of Lemma 2.3. Our construction of $A, a_1, \ldots , a_r$ allows them to be arbitrarily large and, without mentioning this further, we restrict the choice so that

(3.1) $$ \begin{align} x \mapsto (1 - \gamma) x^{-\gamma} \quad \text{is in } \breve{{\mathcal D}}_1^r , \end{align} $$

where $\breve {{\mathcal D}}_1^r$ is the version of ${\mathcal D}_1^r$ with $A/2, a_1 / 2, \ldots , a_r / 2$ in place of $A, a_1, \ldots , a_r$ . Informally, we require that $(1-\gamma ) x^{-\gamma }$ is deep inside ${\mathcal D}^r_1$ .

Lemma 3.1. There is a choice of $a_1, \ldots , a_r$ such that if $B \subset (0,1)$ is a branch of $T_{\varepsilon h}$ and $g \in {\mathcal D}^q$ with $1 \leq q \leq r$ , then ${\mathcal L}_{\varepsilon h} (1_B g) \in {\mathcal D}^q$ .

Proof. To simplify the notation, let $T \colon B \to (0,1)$ denote the restriction of $T_{\varepsilon h}$ to B. Then its inverse $T^{-1}$ is well defined. Let $w = 1 / T'$ and $f = (g w) \circ T^{-1}$ . We have to choose $a_1, \ldots , a_r$ to show $f \in {\mathcal D}^q$ independently of g and B.

For illustration, it is helpful to write out a couple of derivatives of f:

$$ \begin{align*} f' & = [g' w^2 + g w' w] \circ T^{-1} , \\ f'' & = [g'' w^3 + 3 g' w' w^2 + g w'' w^2 + g (w')^2 w] \circ T^{-1}. \end{align*} $$

The observation that $f^{(\ell )} = (u_\ell w) \circ T^{-1}$ , where $u_0 = g$ and $u_{\ell + 1} = (u_\ell w)'$ , gives the general pattern:

(3.2) $$ \begin{align} f^{(\ell)} = \bigg[ g^{(\ell)} w^{\ell + 1} + \sum_{j = 0}^{\ell - 1} g^{(j)} W_{\ell,j} w \bigg] \circ T^{-1}. \end{align} $$

Here each $W_{\ell ,j}$ is a linear combination of monomials from the expansion of $(w^\ell )^{(\ell -j)}$ .
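As a sanity check of this pattern, the recursion $u_{\ell+1} = (u_\ell w)'$ can be compared symbolically with the displayed formula for $f''$; the sketch below (ours, purely illustrative) confirms that $u_2 w$ expands to $g'' w^3 + 3 g' w' w^2 + g w'' w^2 + g (w')^2 w$.

```python
import sympy as sp

# Symbolic check of the recursion u_{l+1} = (u_l w)' against the displayed formula for f'':
# the bracket g'' w^3 + 3 g' w' w^2 + g w'' w^2 + g (w')^2 w should equal u_2 w.

x = sp.symbols('x')
g = sp.Function('g')(x)
w = sp.Function('w')(x)

u0 = g
u1 = sp.diff(u0 * w, x)
u2 = sp.diff(u1 * w, x)

bracket = (sp.diff(g, x, 2) * w**3 + 3 * sp.diff(g, x) * sp.diff(w, x) * w**2
           + g * sp.diff(w, x, 2) * w**2 + g * sp.diff(w, x)**2 * w)
print(sp.simplify(sp.expand(u2 * w) - sp.expand(bracket)))   # prints 0
```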

By equation (2.2), for each $\ell $ , there is $c_\ell> 0$ depending only on $b_1, \ldots , b_{\ell -1}$ , such that

$$ \begin{align*} \frac{w^\ell}{\chi_\ell} \leq \frac{1}{\chi_\ell \circ T} - c_\ell \sum_{j=0}^{\ell-1} \frac{|W_{\ell,j}|}{\chi_j}. \end{align*} $$

Using this and the triangle inequality,

(3.3) $$ \begin{align} \begin{aligned} &\bigg| \frac{g^{(\ell)}}{g} w^{\ell} + \sum_{j = 0}^{\ell - 1} \frac{g^{(j)}}{g} W_{\ell,j} \bigg| \leq \frac{|\chi_\ell g^{(\ell)}|}{g} \frac{w^{\ell}}{\chi_\ell} + \max_{j < \ell} \frac{|\chi_j g^{(j)}|}{g} \sum_{j = 0}^{\ell - 1} \frac{|W_{\ell,j}|}{\chi_j} \\ &\quad \leq \frac{|\chi_\ell g^{(\ell)}|}{\chi_\ell \circ T \, g} - \bigg[ c_\ell \frac{|\chi_\ell g^{(\ell)}|}{g} - \max_{j < \ell} \frac{|\chi_j g^{(j)}|}{g} \bigg] \sum_{j = 0}^{\ell - 1} \frac{|W_{\ell,j}|}{\chi_j}. \end{aligned} \end{align} $$

Choose $a_1 \geq c_1^{-1}$ and $a_\ell \geq c_\ell ^{-1} \max _{j < \ell } a_j$ for $2 \leq \ell \leq r$ . It is immediate that if $g \in {\mathcal D}^q$ and $1 \leq \ell < q$ , then the right-hand side of equation (3.3) is at most $a_\ell / \chi _\ell \circ T$ , which in turn implies that $|f^{(\ell )}| / f \leq a_\ell / \chi _\ell $ . A similar argument yields ${\mathrm {Lip}}_{f^{(q-1)}} / f \leq a_q / \chi _q$ , and hence $f \in {\mathcal D}^q$ as required.

Proof of Lemma 2.3

First we show that part (a) follows from Lemma 3.1. Indeed, let $a_1, \ldots , a_r$ be as in Lemma 3.1 and suppose that $g \in {\mathcal D}^q$ . Write

$$ \begin{align*} {\mathcal L}_{\varepsilon h} g = \sum_B {\mathcal L}_{\varepsilon h} (1_B g) , \end{align*} $$

where the sum is taken over the branches of $T_{\varepsilon h}$ . Each ${\mathcal L}_{\varepsilon h} (1_B g)$ belongs to ${\mathcal D}^q$ by Lemma 3.1, and ${\mathcal D}^q$ is closed under addition. Hence, ${\mathcal L}_{\varepsilon h} g \in {\mathcal D}^q$ .

It remains to prove part (b) by choosing a suitable A. Without loss of generality, we restrict to $q = 1$ .

Fix $\varepsilon $ , h and denote, to simplify notation, $T = T_{\varepsilon h}$ and ${\mathcal L} = {\mathcal L}_{\varepsilon h}$ . Suppose that $g \in {\mathcal D}^1$ with $\int _0^1 g(s) \, ds = 1$ and $\int _0^x g(s) \, ds \leq A x^{1-\gamma }$ for all x. We have to show that if A is sufficiently large, then $\int _0^x ({\mathcal L} g)(s) \, ds \leq A x^{1-\gamma }$ .

Suppose that T has branches $B_1, \ldots , B_N$ , where $B_1$ is the leftmost branch. Denote by $T_k \colon B_k \to (0,1)$ the corresponding restrictions. Taking the sum over branches, write

(3.4) $$ \begin{align} \int_0^x ({\mathcal L} g)(s) \, ds = \sum_{k = 1}^N \int_{T_k^{-1} (0,x)} g(s) \, ds. \end{align} $$

Since $T_1^{-1}(x) \leq x / (1 + c x^\gamma )$ with some c depending only on $c_\gamma $ and $\gamma $ ,

(3.5) $$ \begin{align} \int_{T_1^{-1} (0,x)} g(s) \, ds \leq A \biggl(\frac{x}{1 + c x^\gamma}\biggr)^{1-\gamma} \leq A ( x^{1-\gamma} - c' x) , \end{align} $$

where $c'> 0$ also depends only on $c_\gamma $ and $\gamma $ .

Let now $k \geq 2$ . Note that $T_k^{-1}(0,x) \subset (C_\gamma ^{-1}, 1)$ . Observe that if $g \in {\mathcal D}^1$ with $\int _0^1 g(s) \, ds = 1$ , then $g(s) \leq C$ for $s \in (C_\gamma ^{-1}, 1)$ , where C depends only on $a_1$ and $\chi _*$ . Since $T_k$ is uniformly expanding with bounded distortion in equation (2.3), $|T_k^{-1}(0,x)| \leq C' |B_k| x$ with some $C'$ that depends only on $C_d$ . Hence,

(3.6) $$ \begin{align} \int_{T_k^{-1} (0,x)} g(s) \, ds \leq C C' |B_k| x. \end{align} $$

Assembling equations (3.4), (3.5) and (3.6), we have

$$ \begin{align*} \int_0^x ({\mathcal L} g)(s) \, ds \leq A ( x^{1-\gamma} - c' x) + C'' x \end{align*} $$

with $c', C'' > 0$ independent of A, $\varepsilon $ and h. For each $A \geq C'' / c'$ , the right-hand side above is bounded by $A x^{1-\gamma }$ , as desired.

A useful corollary of Lemma 3.1 is a distortion bound.

Lemma 3.2. Let $n> 0$ and $\delta> 0$ . Consider maps $T_{\varepsilon h_k}$ , $1 \leq k \leq n$ with some $\varepsilon $ and $h_k$ as per our assumptions. Choose and restrict to a single branch for every $T_{\varepsilon h_k}$ , so that all $T_{\varepsilon h_k}$ are invertible and $T_{\varepsilon h_k}^{-1}$ is well defined. Denote

$$ \begin{align*} T_n = T_{\varepsilon h_n} \circ \cdots \circ T_{\varepsilon h_1} \quad \text{and} \quad J_n = 1 / T_n' \circ T_n^{-1}. \end{align*} $$

Then,

(3.7) $$ \begin{align} \frac{|J_n^{(\ell)}|}{J_n} \leq \frac{a_\ell}{\chi_\ell} \quad \text{for } 1 \leq \ell < r , \quad \text{and} \quad \frac{{\mathrm{Lip}}_{J_n^{(r-1)}}}{J_n} \leq \frac{a_r}{\chi_r}. \end{align} $$

In particular, for every $\delta> 0$ , the bounds above are uniform in $x \in [\delta ,1]$ .

Proof. Let $P_{\varepsilon h_k}$ be the transfer operator for $T_{\varepsilon h_k}$ , restricted to the chosen branch:

$$ \begin{align*} P_{\varepsilon h_k} g = \frac{g}{T_{\varepsilon h_k}'} \circ T_{\varepsilon h_k}^{-1}. \end{align*} $$

Denote $P_n = P_{\varepsilon h_n} \cdots P_{\varepsilon h_1}$ .

Let $g \equiv 1$ . Clearly, $g \in {\mathcal D}^r$ . By Lemma 3.1, $P_{\varepsilon h_k} {\mathcal D}^r \subset {\mathcal D}^r$ and thus $P_n g \in {\mathcal D}^r$ . However, $P_n g = J_n$ , and the desired result follows from the definition of ${\mathcal D}^k$ .

3.2 Fixed point and memory loss

Further, let $h_\varepsilon $ be a fixed point of ${\mathcal L}_\varepsilon $ as in the following lemma; later we will show that it is unique.

Lemma 3.3. There exists $h_\varepsilon \in {\mathcal D}^r_1$ such that ${\mathcal L}_\varepsilon h_\varepsilon = h_\varepsilon $ .

Proof. Suppose that $f,g \in {\mathcal D}^r_1$ . Write

$$ \begin{align*} \| {\mathcal L}_{\varepsilon f} f - {\mathcal L}_{\varepsilon g} g \|_{L^1} \leq \| {\mathcal L}_{\varepsilon f} f - {\mathcal L}_{\varepsilon f} g \|_{L^1} + \| {\mathcal L}_{\varepsilon f} g - {\mathcal L}_{\varepsilon g} g \|_{L^1}. \end{align*} $$

The first term on the right is bounded by $\|f - g\|_{L^1}$ because ${\mathcal L}_{\varepsilon f}$ is a contraction in $L^1$ . By equation (2.6), so is the second term, up to a multiplicative constant. It follows that ${\mathcal L}_\varepsilon $ is continuous in $L^1$ . Recall that ${\mathcal L}_\varepsilon $ preserves ${\mathcal D}^r_1$ and note that ${\mathcal D}^r_1$ is compact in the $L^1$ topology. By the Schauder fixed point theorem, ${\mathcal L}_\varepsilon $ has a fixed point in ${\mathcal D}^r_1$ .

Further, we use the rates of memory loss for sequential dynamics from [6].

Theorem 3.4. Suppose that $f,g \in {\mathcal D}^1_1$ and $h_1, h_2, \ldots $ are probability densities. Denote ${\mathcal L}_n = {\mathcal L}_{\varepsilon h_n} \cdots {\mathcal L}_{\varepsilon h_1}$ . Then,

$$ \begin{align*} \| {\mathcal L}_n f - {\mathcal L}_n g \|_{L^1} \leq C_1 n^{- 1 / \gamma + 1}. \end{align*} $$

More generally, if $f(x),g(x) \leq C_\gamma ' x^{-\gamma '}$ with $C_\gamma '> 0$ and $\gamma ' \in [0,\gamma ]$ , then

$$ \begin{align*} \| {\mathcal L}_n f - {\mathcal L}_n g \|_{L^1} \leq C_2 n^{- 1 / \gamma + \gamma' / \gamma}. \end{align*} $$

The constant $C_1$ depends only on ${\mathbf C}$ , and $C_2$ depends additionally on $\gamma '$ , $C_\gamma '$ .

Proof. In the language of [6], the family $T_{\varepsilon h_k}$ defines a non-stationary non-uniformly expanding dynamical system. As a base of ‘induction’, we use the whole interval $(0,1)$ . For a return time of $x \in (0,1)$ corresponding to a sequence $T_{\varepsilon h_k}$ , $k \geq n$ , we take the minimal $j \geq 1$ such that $T_{\varepsilon h_k} \circ \cdots \circ T_{\varepsilon h_{j-1}} (x)$ belongs to one of the right branches of $T_{\varepsilon h_j}$ , that is, not to the leftmost branch. Note that we work with a return time which is not a first return time, unlike in [6], but this is a minor issue that can be solved by extending the space where the dynamics is defined.

It is a direct verification that our assumptions and Lemma 3.2 ensure [6, equations (NU:1)--(NU:7)] with tail function $h(n) = C n^{-1/\gamma }$ , where C depends only on $\gamma $ and $c_\gamma $ .

Further, in the language of [6], functions in ${\mathcal D}^1_1$ are densities of probability measures with a uniform tail bound $C n^{-1/\gamma + 1}$ , where C depends only on ${\mathbf C}$ ; each $f \in {\mathcal D}^1_1$ with $f(x) \leq C_\gamma ' x^{-\gamma '}$ is a density of a probability measure with tail bound $C n^{-1/\gamma + \gamma ' / \gamma }$ with C depending only on ${\mathbf C}$ and $\gamma ', C_\gamma '$ .

In this setup, Theorem 3.4 is a particular case of [6, Theorem 3.8 and Remark 3.9].
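The polynomial tails invoked in this proof can be observed numerically for the uncoupled map $T(x) = x(1+x^\gamma)$: the n-th preimage $a_n$ of the branch endpoint under the inverse of the left branch is (up to an index shift) the Lebesgue measure of the set of points that stay in the leftmost branch for n iterations, and $a_n$ decays like a constant times $n^{-1/\gamma}$. The sketch below (our illustration, with an assumed value of $\gamma$) computes $a_n$ by bisection and prints the rescaled values $a_n n^{1/\gamma}$, which stabilise.

```python
import numpy as np
from scipy.optimize import brentq

# Numerical illustration of the n^{-1/gamma} tails for the pure intermittent map
# T(x) = x(1 + x^gamma) (no coupling, gamma = 0.6 assumed).  a_n is the n-th preimage
# of the leftmost branch endpoint under the inverse left branch; a_n * n^{1/gamma}
# should stabilise as n grows.

gamma = 0.6
T = lambda x: x * (1 + x**gamma)
x_b = brentq(lambda x: T(x) - 1.0, 0.0, 1.0)     # right endpoint of the leftmost branch

a = x_b
for n in range(1, 2001):
    a = brentq(lambda x: T(x) - a, 0.0, a)       # inverse left branch applied to a
    if n % 500 == 0:
        print(n, a * n**(1.0 / gamma))
```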

Recall that, as a part of assumption in equation (2.6), we fixed $\beta \in [ 0, \min \{ \gamma , 1 - \gamma \} )$ .

Lemma 3.5. There is a constant $C_{\beta , \gamma }> 0$ , depending only on $\beta $ and $\gamma $ , such that if a non-negative sequence $\delta _n$ , $n \geq 0$ , satisfies

(3.8) $$ \begin{align} \delta_n \leq \xi n^{ -1 / \gamma + 1} + \sigma \sum_{j=0}^{n-1} \delta_j ( n - j )^{ -1 / \gamma + \beta / \gamma} \quad \text{for all } n> 0 \end{align} $$

with some $\sigma \in (0, C_{\beta , \gamma }^{-1})$ and $\xi> 0$ , then

$$ \begin{align*} \delta_n \leq \max \bigg\{ \delta_0, \frac{\xi}{ 1 - \sigma C_{\beta, \gamma} } \bigg\} \, n^{ -1 / \gamma + 1} \quad \text{for all } n> 0. \end{align*} $$

Proof. We choose $C_{\beta ,\gamma }$ which makes the following inequality true for all n:

(3.9) $$ \begin{align} \sum_{j=0}^{n-1} (j + 1)^{ -1 / \gamma + 1} (n - j)^{ -1 / \gamma + \beta / \gamma } \leq C_{\beta, \gamma} (n + 1)^{ -1 / \gamma + 1 }. \end{align} $$

Let $K = \max \{ \delta _0, \xi / ( 1 - \sigma C_{\beta , \gamma } ) \}$ . Then $\delta _0 \leq K$ , and if $\delta _j \leq K (j + 1)^{ - 1 / \gamma + 1 }$ for all $j < n$ , then by equations (3.8) and (3.9),

$$ \begin{align*} \delta_n \leq ( \xi + \sigma C_{\beta, \gamma} K ) (n + 1)^{ -1 / \gamma + 1 } \leq K (n + 1)^{ -1 / \gamma + 1 }. \end{align*} $$

It follows by induction that this bound holds for all n.
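For a concrete feel for inequality (3.9), the following sketch (ours, with assumed admissible parameters $\gamma = 0.6$ and $\beta = 0.3 < \min\{\gamma, 1-\gamma\}$) evaluates the left-hand side divided by $(n+1)^{-1/\gamma+1}$ for a few values of n; the printed ratios grow slowly and remain bounded, which is all the existence of $C_{\beta,\gamma}$ asserts.

```python
import numpy as np

# Numerical sanity check of inequality (3.9) for one admissible pair of parameters
# (gamma = 0.6, beta = 0.3 < min{gamma, 1 - gamma}, both assumed values).

gamma, beta = 0.6, 0.3
a = 1.0 / gamma - 1.0                  # exponent of (j + 1)
b = 1.0 / gamma - beta / gamma         # exponent of (n - j); note b > 1 since beta < 1 - gamma

for n in [10, 100, 1000, 10_000]:
    j = np.arange(n)
    s = np.sum((j + 1.0)**(-a) * (n - j.astype(float))**(-b))
    print(n, s / (n + 1.0)**(-a))      # ratios stay bounded as n grows
```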

Proposition 3.6. Let ${\widetilde {{\mathcal D}}}^2_1$ be as in Theorem 2.4. There is $\varepsilon _0> 0$ and $C> 0$ such that for all $f,g \in {\widetilde {{\mathcal D}}}^2_1$ and $\varepsilon \in [-\varepsilon _0, \varepsilon _0]$ ,

$$ \begin{align*} \| {\mathcal L}_\varepsilon^n f - {\mathcal L}_\varepsilon^n g \|_{L^1} \leq C n^{-1/\gamma + 1}. \end{align*} $$

Proof. Without loss of generality, suppose that $g (x) = (1 - \gamma ) x^{-\gamma }$ ; the general case follows by comparing both f and g to this particular density and using the triangle inequality. By equation (3.1), $g \in {\mathcal D}^2_1$ . Choose $\xi > 0$ large enough so that $(f + \xi g) / (\xi + 1) \in {\mathcal D}^2_1$ . Such $\xi $ exists by Remark 2.2, equation (3.1) and the definition of ${\mathcal D}^2_1$ ; it depends only on $A, a_1, a_2$ and ${\tilde A}, {\tilde a}_1, {\tilde a}_2$ .

Denote $f_n = {\mathcal L}_\varepsilon ^n f$ and $g_n = {\mathcal L}_\varepsilon ^n g$ . Write $f_n - g_n = A_n + B_n$ , where

$$ \begin{align*} A_n & = (\xi + 1) \bigg( {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} \frac{f + \xi g}{\xi + 1} - {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} g \bigg) , \\ B_n & = {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} g - {\mathcal L}_{\varepsilon g_{n-1}} \cdots {\mathcal L}_{\varepsilon g_0} g \\ & = \sum_{j=0}^{n-1} {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_{j+1}} ({\mathcal L}_{\varepsilon f_j} - {\mathcal L}_{\varepsilon g_j}) {\mathcal L}_{\varepsilon g_{j-1}} \cdots {\mathcal L}_{\varepsilon g_0} g. \end{align*} $$

By the invariance of ${\mathcal D}^2_1$ , the assumption in equation (2.6) and Theorem 3.4,

$$ \begin{align*} \|A_n\|_{L^1} & \leq C' (\xi + 2) n^{-1/\gamma + 1} ,\\ \|B_n\|_{L^1} &\leq C' |\varepsilon| \sum_{j=0}^{n-1} \| f_j - g_j \|_{L^1} (n-j)^{-1/\gamma + \beta / \gamma}. \end{align*} $$

Here $C'$ depends only on ${\mathbf C}$ . Let $\delta _n = \| f_n - g_n \|_{L^1}$ . Then,

$$ \begin{align*} \delta_n \leq \|A_n\|_{L^1} + \|B_n\|_{L^1} \leq C' n^{ -1 / \gamma + 1} + C' |\varepsilon| \sum_{j=0}^{n-1} \delta_j ( n - j )^{ -1 / \gamma + \beta / \gamma}. \end{align*} $$

By Lemma 3.5, $\delta _n \leq \max \{2, C' (1 - |\varepsilon | C' C_{\beta ,\gamma })^{-1} \} n^{ -1 / \gamma + 1 }$ for all $n> 0$ , provided that $|\varepsilon | C' < C_{\beta , \gamma }^{-1}$ .

Lemma 3.7. Suppose that f is a probability density on $[0,1]$ . For every $\delta> 0$ , there exist $n \geq 0$ and $g \in {\mathcal D}_1^r$ such that $\| {\mathcal L}_\varepsilon ^n f - g \|_{L^1} \leq \delta $ .

Proof. Denote $f_k = {\mathcal L}_\varepsilon ^k f$ and ${\mathcal L}_k = {\mathcal L}_{\varepsilon f_{k-1}} \cdots {\mathcal L}_{\varepsilon f_0}$ . Let ${\tilde f}$ be a $C^\infty $ probability density with $\|f - {\tilde f}\|_{L^1} \leq \delta / 2$ . It exists because $C^\infty $ is dense in $L^1$ . Then for all k,

$$ \begin{align*} \| {\mathcal L}_k f - {\mathcal L}_k {\tilde f} \|_{L^1} \leq \| f - {\tilde f} \|_{L^1} \leq \delta / 2. \end{align*} $$

Choose $C \geq 0$ large enough so that $({\tilde f} + C) / (C + 1) \in {\mathcal D}_1^r$ . Write

$$ \begin{align*} {\mathcal L}_k {\tilde f} - {\mathcal L}_k 1 = (C + 1) \bigg[ {\mathcal L}_k \bigg( \frac{{\tilde f} + C}{C+1} \bigg) - {\mathcal L}_k 1 \bigg]. \end{align*} $$

By Theorem 3.4, the right-hand side above converges to $0$ ; in particular, $\|{\mathcal L}_n {\tilde f} - {\mathcal L}_n 1\|_{L^1} \leq \delta / 2$ for some n.

Take $g = {\mathcal L}_n 1$ . Then $g \in {\mathcal D}_1^r$ by the invariance of ${\mathcal D}_1^r$ , and $\| {\mathcal L}_\varepsilon ^n f - g \|_{L^1} \leq \delta $ by construction.

Proposition 3.8. Suppose that f is a probability density on $[0,1]$ . There is $\varepsilon _0> 0$ such that for all $\varepsilon \in [-\varepsilon _0, \varepsilon _0]$ ,

$$ \begin{align*} \lim_{n \to \infty} \| {\mathcal L}_\varepsilon^n f - h_\varepsilon \|_{L^1} = 0. \end{align*} $$

Proof. Choose a small $\delta> 0$ . Without loss of generality, suppose that $\|f - {\tilde f}\|_{L^1} \leq \delta $ with ${\tilde f} \in {\mathcal D}_1^1$ . (The general case is recovered using Lemma 3.7 and replacing f with ${\mathcal L}_\varepsilon ^n f$ with sufficiently large n.)

As in the proof of Proposition 3.6, denote $f_n = {\mathcal L}_\varepsilon ^n f$ and ${\tilde f}_n = {\mathcal L}_\varepsilon ^n {\tilde f}$ , and write $f_n - {\tilde f}_n = A_n + B_n$ , where

$$ \begin{align*} A_n & = {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} f - {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} {\tilde f} , \\ B_n & = {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_0} {\tilde f} - {\mathcal L}_{\varepsilon {\tilde f}_{n-1}} \cdots {\mathcal L}_{\varepsilon {\tilde f}_0} {\tilde f} \\ & = \sum_{j=0}^{n-1} {\mathcal L}_{\varepsilon f_{n-1}} \cdots {\mathcal L}_{\varepsilon f_{j+1}} ({\mathcal L}_{\varepsilon f_j} - {\mathcal L}_{\varepsilon {\tilde f}_j}) {\mathcal L}_{\varepsilon {\tilde f}_{j-1}} \cdots {\mathcal L}_{\varepsilon {\tilde f}_0} {\tilde f}. \end{align*} $$

Since all ${\mathcal L}_{\varepsilon f_j}$ are contractions in $L^1$ ,

(3.10) $$ \begin{align} \|A_n\|_{L^1} \leq \|f - {\tilde f}\|_{L^1} \leq \delta. \end{align} $$

By equation (2.6) and Theorem 3.4,

(3.11) $$ \begin{align} \begin{aligned} \|B_n\|_{L^1} & \leq C' |\varepsilon| \sum_{j=0}^{n-1} \|f_j - {\tilde f}_j\|_{L^1} (n-j)^{-1/\gamma + \beta / \gamma} \\ & \leq C'' |\varepsilon| \max_{j < n} \|f_j - {\tilde f}_j\|_{L^1} , \end{aligned} \end{align} $$

where $C'$ depends only on ${\mathbf C}$ and $C'' = C' \sum _{j=1}^{\infty } j^{-1/\gamma + \beta / \gamma }$ ; recall that $-1/\gamma + \beta / \gamma < -1$ , so this sum is finite.

From equations (3.10) and (3.11),

$$ \begin{align*} \| f_n - {\tilde f}_n \|_{L^1} \leq \|A_n\|_{L^1} + \|B_n\|_{L^1} \leq \delta + C'' |\varepsilon| \max_{j < n} \| f_j - {\tilde f}_j \|_{L^1}. \end{align*} $$

Hence, if $\varepsilon $ is sufficiently small so that $C'' |\varepsilon | < 1$ , then

$$ \begin{align*} \| f_n - {\tilde f}_n \|_{L^1} \leq \delta / (1 - C'' |\varepsilon|) \quad \text{for all } n. \end{align*} $$

Since $\delta> 0$ is arbitrary, $\| f_n - {\tilde f}_n \|_{L^1} \to 0$ as $n \to \infty $ .

4 Example: verification of assumptions

Here we verify that the example in equation (1.1) fits the assumptions of §2, namely assumptions (a), (b), (c), (d) and equation (2.6). The key statements are Proposition 4.1 and Corollary 4.3.

Let $\varepsilon _*> 0$ , and denote $\gamma _- = \gamma _* - 2 \varepsilon _*$ and $\gamma _+ = \gamma _* + 2 \varepsilon _*$ , so that for all $\varepsilon ,h$ ,

$$ \begin{align*} \gamma_- + \varepsilon_* < \gamma_* + \varepsilon \gamma_h < \gamma_+ - \varepsilon_*. \end{align*} $$

Take $\varepsilon _*$ small enough that $0 < \gamma _- < \gamma _+ < 1$ . Let $\gamma = \gamma _+$ and fix $r \geq 2$ .

In Proposition 4.1, we verify assumptions (a), (b), (c) and (d) from §2, and in Corollary 4.3, we verify equation (2.6).

In this section, we use the notation $A \lesssim B$ for $A \leq C B$ with C depending only on $\varepsilon _*$ , and $A \sim B$ for $A \lesssim B \lesssim A$ .

Proposition 4.1. The family of maps $T_{\varepsilon h}$ satisfies assumptions (a), (b), (c) and (d) from §2.

Proof. It is immediate that assumptions (a), (b) and (d) hold, so we only need to justify assumption (c). Denote ${\tilde {\gamma }} = \gamma _* + \varepsilon \gamma _h$ and observe that, with w and each $w_{\ell , j}$ as in equation (2.2),

$$ \begin{align*} \frac{1}{x^{\ell} \circ T_{\varepsilon h}} - \frac{w^\ell}{x^\ell} \sim x^{{\tilde{\gamma}} - \ell} \quad \text{and} \quad \frac{|w_{\ell,j}|}{x^j} \lesssim x^{{\tilde{\gamma}} - \ell}. \end{align*} $$

The implied constants depend on $\ell $ and j but not on $\varepsilon $ or h, and assumption (c) follows.
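The first asymptotic in this proof can be checked numerically in the simplest case $\ell = 1$, $\varepsilon = 0$ (so that ${\tilde\gamma} = \gamma_*$); the sketch below, with an assumed value of $\gamma_*$, evaluates $(1/T(x) - w(x)/x) / x^{{\tilde\gamma}-1}$ for small x, and the ratio approaches a constant, as claimed.

```python
import numpy as np

# Numerical check of the first asymptotic in the proof of Proposition 4.1 for l = 1:
# for T(x) = x(1 + x^g) with w = 1/T' (the uncoupled case, so g plays the role of
# gamma_* + eps * gamma_h), the quantity 1/T(x) - w(x)/x behaves like a constant
# times x^{g-1} as x -> 0.  g = 0.5 is an assumed value.

g = 0.5
for x in [1e-2, 1e-4, 1e-6, 1e-8]:
    T = x * (1 + x**g)
    w = 1.0 / (1 + (1 + g) * x**g)
    print(x, (1.0 / T - w / x) / x**(g - 1))     # ratio tends to a constant
```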

It remains to verify equation (2.6). The precise expressions for $\gamma _h$ and $\varphi _h$ are not too important, so we rely on the following properties.

  • $\varphi _h(0) = \varphi _h'(0) = \varphi _h(1) = 0$ for each h, so that, informally, $\varphi _h$ has no effect on the indifferent fixed point at $0$ .

  • The maps $h \mapsto \varphi _h$ , $L^1 \to C^3$ , $h \mapsto \varphi _h'$ , $L^1 \to C^2$ , and $h \mapsto \gamma _h$ , $L^1 \to {\mathbb R}$ are continuously Fréchet differentiable, that is, for each $h,f$ ,

    $$ \begin{align*} \| \varphi_{h + f} - \varphi_h - \Phi_h f \|_{C^3} & = o(\|f\|_{L^1}) , \\ \| \varphi^{\prime}_{h + f} - \varphi^{\prime}_h - \Phi^{\prime}_h f \|_{C^2} & = o(\|f\|_{L^1}) , \\ | \gamma_{h + f} - \gamma_h - \Gamma_h f | & = o(\|f\|_{L^1}) , \end{align*} $$
    where $\Phi _h \colon L^1 \to C^3$ , $\Phi ^{\prime }_h \colon L^1 \to C^2$ and $\Gamma _h \colon L^1 \to {\mathbb R}$ are bounded linear operators, continuously depending on h.

Suppose that $f_0,f_1 \in L^1$ and $v \in {\mathcal D}^2_1$ . Let $f_s = (1-s) f_0 + s f_1$ with $s \in [0,1]$ . Denote $T_s = T_{\varepsilon f_s}$ and let ${\mathcal L}_s$ be the associated transfer operator.

Proposition 4.2. $| \partial _s ({\mathcal L}_s v)(x) | \lesssim |\varepsilon | x^{- (\gamma _+ - \gamma _-)}$ and $| (\partial _s ({\mathcal L}_s v))'(x) | \lesssim |\varepsilon | x^{- (\gamma _+ - \gamma _-) - 1}$ .

Proof. We abuse notation, restricting to a single branch of $T_s$ , so that $T_s$ is invertible and ${\mathcal L}_s v = (v / T_s') \circ T_s^{-1}$ . Let $\zeta _s = \Phi _{f_s} (f_1 - f_0)$ , $\psi _s = \Phi ^{\prime }_{f_s}(f_1 - f_0)$ and $\lambda _s = \Gamma _{f_s} (f_1 - f_0)$ . Then,

$$ \begin{align*} \partial_s ({\mathcal L}_s v) & = - \biggl[ \frac{(v' T_s' - v T_s'') \partial_s T_s}{T_s^{\prime 3}} + \frac{v \partial_s T_s'}{T_s^{\prime 2}} \biggr] \circ T_s^{-1} \end{align*} $$

with

$$ \begin{align*} (\partial_s T_s)(x) & = \varepsilon \lambda_s x^{1 + \gamma_* + \varepsilon \gamma_{f_s}} \log x + \varepsilon \zeta_s(x) , \\ (\partial_s T_s')(x) & = \varepsilon \lambda_s x^{\gamma_* + \varepsilon \gamma_{f_s}} [ 1 + (1 + \gamma_* + \varepsilon \gamma_{f_s}) \log x ] + \varepsilon \psi_s(x). \end{align*} $$

Observe that $|v(x)| \lesssim x^{-\gamma _+}$ , $|v'(x)| \lesssim x^{-\gamma _+ - 1}$ , $|\partial _s T_s| \lesssim |\varepsilon | x^{1 + \gamma _-}$ , $|\partial _s T_s'| \lesssim |\varepsilon | x^{\gamma _-}$ , $T_s'(x) \sim 1$ and $|T_s''(x)| \lesssim x^{\gamma _- - 1}$ . Hence,

$$ \begin{align*} |\partial_s ({\mathcal L}_s v) (x)| \lesssim |\varepsilon| x^{-(\gamma_+ - \gamma_-)}. \end{align*} $$

Differentiating in x further and observing that $|v''(x)| \lesssim x^{-\gamma _+ - 2}$ , $|T_s'''(x)| \lesssim x^{\gamma _- - 2}$ , $|(\partial _s T_s)'(x)| \lesssim |\varepsilon | x^{\gamma _-}$ and $|(\partial _s T_s')'(x)| \lesssim |\varepsilon | x^{\gamma _- - 1}$ , we obtain

$$ \begin{align*} |(\partial_s ({\mathcal L}_s v))'(x)| \lesssim |\varepsilon| x^{-(\gamma_+ - \gamma_-) - 1}. \end{align*} $$

Corollary 4.3. In the setup of Proposition 4.2, we can represent

$$ \begin{align*} {\mathcal L}_{\varepsilon f_0} v - {\mathcal L}_{\varepsilon f_1} v = \delta (g_0 - g_1) , \end{align*} $$

where $\delta \lesssim |\varepsilon | \|f_0 - f_1\|_{L^1}$ , and $g_0, g_1 \in {\mathcal D}^1_1$ with $g_0(x), g_1(x) \lesssim x^{-4 \varepsilon_*}$ .

Acknowledgements

The research of both authors is supported by EPSRC grant EP/V053493/1. A.K. is thankful to Nicholas Fleming-Vázquez for helpful discussions and advice.

References

[1] Bahsoun, W., Liverani, C. and Sélley, F.. Globally coupled Anosov diffeomorphisms: statistical properties. Comm. Math. Phys. 400 (2023), 1791–1822.
[2] Bálint, P., Keller, G., Sélley, F. M. and Tóth, P. I.. Synchronization versus stability of the invariant distribution for a class of globally coupled maps. Nonlinearity 31(8) (2018), 3770–3793.
[3] Blank, M. L.. Self-consistent mappings and systems of interacting particles. Dokl. Math. 83(1) (2011), 49–52.
[4] Galatolo, S.. Self-consistent transfer operators: invariant measures, convergence to equilibrium, linear response and control of the statistical properties. Comm. Math. Phys. 395 (2022), 715–772; doi:10.1007/s00220-022-04444-4.
[5] Keller, G.. An ergodic theoretic approach to mean field coupled maps. Fractal Geometry and Stochastics II. Eds. Bandt, C., Graf, S. and Zähle, M.. Birkhäuser, Basel, 2000, pp. 183–208.
[6] Korepanov, A. and Leppänen, J.. Loss of memory and moment bounds for nonstationary intermittent dynamical systems. Comm. Math. Phys. 385 (2021), 905–935.
[7] Liverani, C., Saussol, B. and Vaienti, S.. A probabilistic approach to intermittency. Ergod. Th. & Dynam. Sys. 19 (1999), 671–685.
[8] Pomeau, Y. and Manneville, P.. Intermittent transition to turbulence in dissipative dynamical systems. Comm. Math. Phys. 74 (1980), 189–197.
[9] Sélley, F. M. and Tanzi, M.. Linear response for a family of self-consistent transfer operators. Comm. Math. Phys. 382(3) (2021), 1601–1624.
[10] Tanzi, M.. Mean-field coupled systems and self-consistent transfer operators: a review. Boll. Unione Mat. Ital. 16 (2023), 297–336.