
A central limit theorem for conservative fragmentation chains

Published online by Cambridge University Press:  17 March 2023

Sylvain Rubenthaler*
Affiliation:
Université Côte d’Azur
*Postal address: Université Côte d’Azur, CNRS, LJAD, France. Email: [email protected]

Abstract

We are interested in a fragmentation process. We observe the fragments, frozen as soon as their sizes are less than $\varepsilon$ ($\varepsilon>0$). It is known (Bertoin and Martínez, 2005) that the empirical measure of these fragments converges in law, under a suitable renormalization. Hoffmann and Krell (2011) gave a bound on the rate of this convergence. Here, we prove a central limit theorem, under some assumptions. This gives us an exact rate of convergence.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

1.1. Scientific and economic context

One of the main goals in the mining industry is to extract blocks of metallic ore and then separate the metal from the valueless material. To do so, rock is fragmented into smaller and smaller pieces. This is carried out in a series of steps, the first one being blasting, after which the material goes through a sequence of crushers. At each step, the particles are screened, and if they are smaller than the diameter of the mesh of a classifying grid, they go to the next crusher. The process stops when the material has a sufficiently small size (more precisely, small enough to enable physicochemical processing).

This fragmentation process is energetically costly (each crusher consumes a certain quantity of energy to crush the material it is fed). One of the problems that faces the mining industry is that of minimizing the energy used. The optimization parameters are the number of crushers and their technical specifications.

In [Reference Bertoin and Martínez4], the authors proposed a mathematical model of what happens in a crusher. In this model, the rock pieces/fragments are fragmented independently of each other, in a random and self-similar manner. This is consistent with what is observed in the industry, and is supported by [Reference Devoto and Martínez12, Reference Perrier and Bird19, Reference Turcotte22, Reference Weiss25]. Each fragment has a size s (in $\mathbb{R}^{+}$ ) and is then fragmented into smaller fragments of sizes $s_{1}, s_{2}, \ldots$ such that the sequence $(s_{1}/s,s_{2}/s,\dots)$ has a law $\nu$ which does not depend on s (which is why the fragmentation is said to be self-similar). This law $\nu$ is called the dislocation measure (each crusher has its own dislocation measure). The dynamics of the fragmentation process are thus modeled in a stochastic way.

In each crusher, the rock pieces are fragmented repetitively until they are small enough to slide through a mesh whose holes have a fixed diameter. So the fragmentation process stops for each fragment when its size is smaller than the diameter of the mesh, which we denote by $\varepsilon$ ( $\varepsilon>0$ ). We are interested in the statistical distribution of the fragments coming out of a crusher. If we renormalize the sizes of these fragments by dividing them by $\varepsilon$ , we obtain a measure $\gamma_{-\log(\varepsilon)}$ , which we call the empirical measure (the reason for the index $-\log(\varepsilon)$ instead of $\varepsilon$ will be made clear later). In [Reference Bertoin and Martínez4], the authors showed that the energy consumed by the crusher to reduce the rock pieces to fragments whose diameters are smaller than $\varepsilon$ can be computed as an integral of a bounded function against the measure $\gamma_{-\log(\varepsilon)}$ [Reference Bond5, Reference Charles6, Reference Walker, Lewis, McAdams and Gilliland24]. For each crusher, the empirical measure $\gamma_{-\log(\varepsilon)}$ is one of the only two observable variables (the other one being the size of the pieces pushed into the grinder). The specifications of a crusher are summarized in $\varepsilon$ and $\nu$ .

1.2. State of the art

In [Reference Bertoin and Martínez4], the authors showed that the energy consumed by a crusher to reduce rock pieces of a fixed size into fragments whose diameters are smaller than $\varepsilon$ behaves asymptotically like a power of $\varepsilon$ when $\varepsilon$ goes to zero. More precisely, this energy multiplied by a power of $\varepsilon$ converges towards a constant of the form $\kappa=\nu(\varphi)$ (the integral of $\nu$ , the dislocation measure, against a bounded function $\varphi$ ). They also showed a law of large numbers for the empirical measure $\gamma_{-\log(\varepsilon)}$ . More precisely, if f is bounded continuous, $\gamma_{-\log(\varepsilon)}(f)$ converges in law, when $\varepsilon$ goes to zero, towards an integral of f against a measure related to $\nu$ (this result also appears in [Reference Hoffmann and Krell16, p. 399]). We set $\gamma_{\infty}(f)$ to be this limit (see (5.2), (2.5), and (2.2) for an exact formula). The empirical measure $\gamma_{-\log(\varepsilon)}$ thus contains information relative to $\nu$ , and we could extract from it an estimate of $\kappa$ or of the integral of any function against $\nu$ .

It is worth noting that by studying what happens in various crushers, we could study a family $(\nu_{i}(f_{j}))_{i\in I,j\in J}$ (with the index i labelling the crusher and the index j labelling the jth test function in a well-chosen basis). Using statistical learning methods, we could from there make a prediction for $\nu(f_{j})$ for a new crusher for which we know only the mechanical specifications (shape, power, frequencies of the rotating parts, …). It would clearly be useful to know $\nu$ before even building the crusher.

In the same spirit, [Reference Fontbona, Krell and Martínez14] studied the energy efficiency of two crushers used one after the other. When the final size of the fragments tends to zero, [Reference Fontbona, Krell and Martínez14] tells us whether it is more efficient, in terms of energy, to use one crusher or two crushers in a row (another asymptotic regime is also considered there).

In [Reference Harris, Knobloch and Kyprianou15], the authors proved a convergence result for the empirical measure similar to the one in [Reference Bertoin and Martínez4], the convergence in law being replaced by an almost sure convergence. In [Reference Hoffmann and Krell16], the authors gave a bound on the rate of this convergence, in an $L^{2}$ sense, under the assumption that the fragmentation is conservative. This assumption means there is no loss of mass due to the formation of dust during the fragmentation process.

The state of the art as described is shown in Fig. 1. We have convergence results [Reference Bertoin and Martínez4, Reference Harris, Knobloch and Kyprianou15] of an empirical quantity towards constants of interest (a different constant for each test function f). Using some transformations, these constants could be used to estimate the constant $\kappa$ . Thus, it is natural to ask what the exact rate of convergence in this estimation is, if only to be able to build confidence intervals. In [Reference Hoffmann and Krell16], we only have a bound on the rate.

Figure 1. State of the art.

When a sequence of empirical measures converges to some measure, it is natural to study the fluctuations, which often turn out to be Gaussian. For such results in the case of empirical measures related to the mollified Boltzmann equation, see [Reference Dawson and Zheng7, Reference Meleard18, Reference Uchiyama23]. When interested in the limit of an n-tuple as in (1.1), we say we are looking at the convergence of a U-statistic. Textbooks deal with the case where the points defining the empirical measure are independent or have a known correlation (see [Reference de la Peña and Giné8, Reference Dynkin and Mandelbaum13, Reference Lee17]). The problem is more complex when the points defining the empirical measure interact with each other as is the case here.

1.3. Goal of the paper

As explained above, we want to obtain the rate of convergence of $\gamma_{-\log(\varepsilon)}$ when $\varepsilon$ goes to zero. We want to prove a central limit theorem of the form: for a bounded continuous f, $\varepsilon^{\beta}(\gamma_{-\log(\varepsilon)}(f)-\gamma_{\infty}(f))$ converges in law towards a non-trivial limit when $\varepsilon$ goes to zero (the limiting law will in fact be Gaussian), for some exponent $\beta$ . The techniques used will allow us to prove the convergence towards a multivariate Gaussian of a vector of the form

(1.1) \begin{equation}\varepsilon^{\beta}(\gamma_{-\log(\varepsilon)}(f_{1})-\gamma_{\infty}(f_{1}),\dots,\gamma_{-\log(\varepsilon)}(f_{n})-\gamma_{\infty}(f_{n}))\end{equation}

for functions $f_{1}, \ldots , f_{n}$ .

More precisely, if we denote by $Z_{1}, Z_{2}, \ldots , Z_{N}$ the sizes of the fragments that come out of a crusher (with mesh diameter equal to $\varepsilon$ ), we would like to show that, for a bounded continuous f,

\begin{align*}\gamma_{-\log(\varepsilon)}(f)\;:\!=\;\sum_{i=1}^{N}Z_{i}f(Z_{i}/\varepsilon)\longrightarrow\gamma_{\infty}(f)\end{align*}

almost surely (a.s.) when $\varepsilon\rightarrow0$ , and that, for all n, and $f_{1}, \ldots ,f_{n}$ bounded continuous functions such that $\gamma_{\infty}(f_{i})=0$ , $\varepsilon^{\beta}(\gamma_{-\log(\varepsilon)}(f_{1}),\dots,\gamma_{-\log(\varepsilon)}(f_{n}))$ converges in law towards a multivariate Gaussian when $\varepsilon$ goes to zero.

The exact results are stated in Proposition 5.1 and Theorem 5.1.
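To fix ideas, here is a minimal Python sketch (not taken from the paper) that simulates a conservative fragmentation chain frozen below $\varepsilon$ and evaluates the empirical measure described above; the binary uniform split used as dislocation measure is a hypothetical choice made only for illustration.

```python
import random

def empirical_measure(f, eps, rng=random.Random(0)):
    """Simulate a conservative fragmentation chain started from a block of size 1,
    freeze every fragment as soon as its size drops below eps, and return
    gamma_{-log(eps)}(f) = sum_i Z_i * f(Z_i / eps) over the frozen sizes Z_i.
    The dislocation measure used here is the hypothetical binary split (U, 1 - U),
    U uniform on (0, 1); it is conservative, so the frozen sizes sum to 1."""
    stack, frozen = [1.0], []
    while stack:
        x = stack.pop()
        if x < eps:
            frozen.append(x)               # frozen: the size of the mother was >= eps
        else:
            u = rng.random()
            stack.extend([x * u, x * (1.0 - u)])
    return sum(z * f(z / eps) for z in frozen)

print(empirical_measure(lambda r: r, 1e-4))   # one realization of gamma_T(f) with f(x) = x
```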

1.4. Outline of the paper

We will state our assumptions along the way (Assumptions 2.1, 2.2, 2.3, and 3.1). Assumption 3.1 can be found at the beginning of Section 3. We define our model in Section 2. The main idea is that we want to follow tags during the fragmentation process. Let us imagine that the fragmentation is the process of breaking a stick (modeled by [0, 1]) into smaller sticks. We suppose that the original stick has painted dots, and that during the fragmentation process we take note of the sizes of the sticks supporting the painted dots. When the sizes of these sticks get smaller than $\varepsilon$ ( $\varepsilon>0$ ), the fragmentation is stopped for them and we call them the painted sticks. In Section 3, we make use of classical results on renewal processes and of [Reference Sgibnev21] to show that the size of one painted stick has an asymptotic behavior when $\varepsilon$ goes to zero and that we have a bound on the rate with which it reaches this behavior. Section 4 is the most technical. There we study the asymptotics of symmetric functionals of the sizes of the painted sticks (again, as $\varepsilon$ goes to zero). In Section 5, we precisely define the measure we are interested in ( $\gamma_{T}$ with $T=-\log(\varepsilon)$ ). Using the results of Section 4, it is then easy to show a law of large numbers for $\gamma_{T}$ (Proposition 5.1) and a central limit theorem (Theorem 5.1). Proposition 5.1 and Theorem 5.1 are our two main results. The proof of Theorem 5.1 is based on a simple computation involving characteristic functions (the same technique was previously used in [Reference Del Moral, Patras and Rubenthaler9–Reference Del Moral, Patras and Rubenthaler11, Reference Rubenthaler20]).

1.5. Notation

For x in $\mathbb{R}$ , we set $\lceil x\rceil=\inf\{n\in\mathbb{Z}\;:\; n\geq x\}$ , $\lfloor x\rfloor=\sup\{n\in\mathbb{Z}\;:\; n\leq x\}$ . The symbol $\sqcup$ means ‘disjoint union’. For n in $\mathbb{N}^{*}$ , we set $[n]=\{1,2,\dots,n\}$ . For f a map from a set E to a set F, we write $f\;:\; E\hookrightarrow F$ if f is injective and, for k in $\mathbb{N}^{*}$ , if $F=E$ , we set

\begin{align*}f^{\circ k}=\underset{k\mbox{ times }}{\underbrace{f\circ f\circ\dots\circ f}}.\end{align*}

For any set E, we set $\mathcal{P}(E)$ to be the set of subsets of E.

2. Statistical model

2.1. Fragmentation chains

Let $\varepsilon>0$ . As in [Reference Hoffmann and Krell16], we start with the space

\begin{align*}\mathcal{S}^{\downarrow}=\Bigg\{ \mathbf{s}=(s_{1},s_{2},\dots),\,s_{1}\geq s_{2}\geq\dots\geq0,\,\sum_{i=1}^{+\infty}s_{i}\leq1\Bigg\} .\end{align*}

A fragmentation chain is a process in $\mathcal{S}^{\downarrow}$ characterized by

  • a dislocation measure $\nu$ , which is a finite measure on $\mathcal{S}^{\downarrow}$ ;

  • a description of the law of the times between fragmentations.

A fragmentation chain with dislocation measure $\nu$ is a Markov process $X=(X(t),t\geq0)$ with values in $\mathcal{S}^{\downarrow}$ . Its evolution can be described as follows: a fragment with size x lives for some time (which may or may not be random) then splits and gives rise to a family of smaller fragments distributed as $x\xi$ , where $\xi$ is distributed according to $\nu(\!\cdot\!)/\nu(\mathcal{S}^{\downarrow})$ . We suppose the lifetime of a fragment of size x is an exponential time of parameter $x^{\alpha}\nu(\mathcal{S}^{\downarrow})$ , for some $\alpha$ . We could make different assumptions here on the lifetime of fragments, but this would not change our results. Indeed, as we are interested in the sizes of the fragments frozen as soon as they are smaller than $\varepsilon$ , the time they need to become this small is not important.

We denote by $\mathbb{P}_{m}$ the law of X started from the initial configuration $(m,0,0,\dots\!)$ with m in (0, 1]. The law of X is entirely determined by $\alpha$ and $\nu(\!\cdot\!)$ [Reference Bertoin2, Theorem 3].

We make the same assumption as in [Reference Hoffmann and Krell16].

Assumption 2.1. $\nu(\mathcal{S}^{\downarrow})=1$ and $\nu(s_{1}\in]0;\;1[)=1$ .

Let $\mathcal{U}\;:\!=\;\{0\}\cup\bigcup_{n=1}^{+\infty}(\mathbb{N}^{*})^{n}$ denote the infinite genealogical tree. For u in $\mathcal{U}$ , we use the conventional notation $u=()$ if $u=\{0\}$ and $u=(u_{1},\dots,u_{n})$ if $u\in(\mathbb{N}^{*})^{n}$ with $n\in\mathbb{N}^{*}$ . This way, any u in $\mathcal{U}$ can be denoted by $u=(u_{1},\dots,u_{n})$ for some $u_{1},\dots,u_{n}$ and with n in $\mathbb{N}$ . Now, for $u=(u_{1},\dots,u_{n})\in\mathcal{U}$ and $i\in\mathbb{N}^{*}$ , we say that u is in the nth generation and we write $|u|=n$ ; we write $ui=(u_{1},\dots,u_{n},i)$ , $u(k)=(u_{1},\dots,u_{k})$ for all $k\in[n]$ . For any $u=(u_{1},\dots,u_{n})$ and $v=ui$ ( $i\in\mathbb{N}^{*}$ ), we say that u is the mother of v. For any u in $\mathcal{U}\backslash\{0\}$ ( $\mathcal{U}$ deprived of its root), u has exactly one mother and we denote it by ${\boldsymbol{{m}}}(u)$ . The set $\mathcal{U}$ is ordered alphanumerically:

  • If u and v are in $\mathcal{U}$ and $|u|<|v|$ then $u<v$ .

  • If u and v are in $\mathcal{U}$ and $|u|=|v|=n$ , $u=(u_{1},\dots,u_{n})$ , and $v=(v_{1},\dots,v_{n})$ with $u_{1}=v_{1}, \dots , u_{k}=v_{k}$ and $u_{k+1}<v_{k+1}$ for some $k\in\{0,\dots,n-1\}$ , then $u<v$ .

Suppose we have a process X which has the law $\mathbb{P}_{m}$ . For all $\omega$ , we can index the fragments that are formed by the process X with elements of $\mathcal{U}$ in a recursive way.

  • We start with a fragment of size m indexed by $u=()$ .

  • If a fragment of size x, with birth time $t_{1}$ and split time $t_{2}$ , is indexed by u in $\mathcal{U}$ , then at time $t_{2}$ this fragment splits into smaller fragments of sizes $(xs_{1},xs_{2},\dots)$ with $(s_{1},s_{2},\dots)$ of law $\nu(\!\cdot\!)/\nu(\mathcal{S}^{\downarrow})$ . We index the fragment of size $xs_{1}$ by u1, the fragment of size $xs_{2}$ by u2, and so on.

A mark is a map from $\mathcal{U}$ to some other set. To each path of the process X we associate a mark $\xi_{\cdot}$ on the tree $\mathcal{U}$ . The mark at node u is $\xi_{u}$ , where $\xi_{u}$ is the size of the fragment indexed by u. The distribution of this random mark can be described recursively as follows.

Proposition 2.1. (Consequence of Proposition 1.3 (p. 25) of [Reference Bertoin3].) There exists a family of independent and identically distributed (i.i.d.) variables indexed by the nodes of the genealogical tree $\big(\big(\widetilde{\xi}_{ui}\big)_{i\in\mathbb{N}^{*}},u\in\mathcal{U}\big)$ , where each $\big(\widetilde{\xi}_{ui}\big)_{i\in\mathbb{N}^{*}}$ is distributed according to the law $\nu(\!\cdot\!)/\nu(\mathcal{S}^{\downarrow})$ , and such that, given the marks $(\xi_{v},|v|\leq n)$ of the first n generations, the marks at generation $n+1$ are given by $\xi_{ui}=\widetilde{\xi}_{ui}\xi_{u}$ , where $u=(u_{1},\dots,u_{n})$ and $ui=(u_{1},\dots,u_{n},i)$ is the ith child of u.
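To make the recursion concrete, here is a minimal Python sketch of the marks of Proposition 2.1, under a hypothetical conservative dislocation law (uniform spacings); nothing in it is specific to the assumptions of the paper, and the pieces are not ranked in decreasing order, which is irrelevant for the marks.

```python
import random

def marks_up_to_generation(n_gen, n_children=3, rng=random.Random(1)):
    """Build the marks xi_u of Proposition 2.1 down to generation n_gen:
    given the marks of generation n, set xi_{ui} = tilde_xi_{ui} * xi_u, where
    the splits (tilde_xi_{ui})_i are i.i.d. over the nodes u. The split law used
    here (spacings of uniform points, hence conservative) is a hypothetical choice."""
    def sample_split():
        cuts = sorted(rng.random() for _ in range(n_children - 1))
        pts = [0.0] + cuts + [1.0]
        return [pts[j + 1] - pts[j] for j in range(n_children)]

    marks = {(): 1.0}                           # root fragment, size 1
    frontier = [()]
    for _ in range(n_gen):
        new_frontier = []
        for u in frontier:
            for i, t in enumerate(sample_split(), start=1):
                marks[u + (i,)] = t * marks[u]  # xi_{ui} = tilde_xi_{ui} * xi_u
                new_frontier.append(u + (i,))
        frontier = new_frontier
    return marks
```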

2.2. Tagged fragments

From now on, we suppose that we start with a block of size $m=1$ . We assume that the total mass of the fragments remains constant through time, as follows.

Assumption 2.2. (Conservation property.) $\nu\big(\sum_{i=1}^{+\infty}s_{i}=1\big)=1$ .

This assumption was already present in [Reference Hoffmann and Krell16]. We observe that the Malthusian exponent of [Reference Bertoin3, p. 27] is equal to 1 under our assumptions. Without this assumption, the link between the empirical measure $\gamma_{-\log(\varepsilon)}$ and the tagged fragments, (5.1), vanishes and our proofs of Proposition 5.1 and Theorem 5.1 fail.

We can now define tagged fragments. We use the representation of fragmentation chains as random infinite marked trees to define a fragmentation chain with q tags. Suppose we have a fragmentation process X of law $\mathbb{P}_{1}$ . We take $(Y_{1},Y_{2},\dots,Y_{q})$ to be q i.i.d. variables of law $\mathcal{U}([0,1])$ . We set, for all u in $\mathcal{U}$ , $(\xi_{u},A_{u},I_{u})$ with $\xi_{u}$ defined as above. The random variables $A_{u}$ take values in the subsets of [q]. The random variables $I_{u}$ are intervals. These variables are defined as follows.

  • We set $A_{\{0\}}=[q]$ , $I_{\{0\}}=(0,1]$ ( $I_{\{0\}}$ is of length $\xi_{\{0\}}=1$ ).

  • For all $n\in\mathbb{N}$ , suppose we are given the marks of the first n generations. Suppose that, for u in the nth generation, $I_{u}=(a_{u},a_{u}+\xi_{u}]$ for some $a_{u}\in\mathbb{R}$ (it is of length $\xi_{u}$ ). Then the marks at generation $n+1$ are given by Proposition 2.1 (concerning $\xi_{\cdot}$ ) and, for all u such that $|u|=n$ and for all i in $\mathbb{N}^{*}$ ,

    \begin{align*}I_{ui}=\big(a_{u}+\xi_{u}\big(\widetilde{\xi}_{u1}+\dots+\widetilde{\xi}_{u(i-1)}\big),a_{u}+\xi_{u}\big(\widetilde{\xi}_{u1}+\dots+\widetilde{\xi}_{ui}\big)\big],\end{align*}
    $k\in A_{ui}$ if and only if $Y_{k}\in I_{ui}$ ( $I_{ui}$ is then of length $\xi_{ui}$ ). We observe that for all $j\in[q]$ , $u\in\mathcal{U}$ , $i\in\mathbb{N}^{*}$ ,
    (2.1) \begin{equation} \mathbb{P}\big(j\in A_{ui}\mid j\in A_{u},\widetilde{\xi}_{ui}\big)=\widetilde{\xi}_{ui}. \end{equation}

In this definition, we imagine having q dots on the interval [0, 1], and we impose that dot j has the position $Y_{j}$ (for all j in [q]). During the fragmentation process, if we know that dot j is in the interval $I_{u}$ of length $\xi_{u}$ , then the probability that this dot is on $I_{ui}$ (for some i in $\mathbb{N}^{*}$ , $I_{ui}$ of length $\xi_{ui}$ ) is equal to $\xi_{ui}/\xi_{u}=\widetilde{\xi}_{ui}$ .
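Here is a minimal Python sketch of this tagging mechanism for a single tag, assuming split_sampler is a user-supplied draw of one conservative split (a list of relative sizes summing to 1); the tag is sent to child i with probability $\widetilde{\xi}_{ui}$ , as in (2.1), and the successive sizes of the fragment carrying the tag are recorded until they drop below eps.

```python
import random

def tagged_fragment_sizes(split_sampler, eps, rng=random.Random(2)):
    """Follow one tagged fragment: at each split with relative sizes
    (tilde_xi_1, tilde_xi_2, ...), the tag moves to child i with probability
    tilde_xi_i; the sizes of the fragment carrying the tag are recorded."""
    sizes = [1.0]
    while sizes[-1] >= eps:
        tilde = split_sampler(rng)             # one conservative split, sums to 1
        i = rng.choices(range(len(tilde)), weights=tilde, k=1)[0]
        sizes.append(sizes[-1] * tilde[i])
    return sizes

def binary_uniform(rng):
    """A hypothetical binary uniform split, for illustration only."""
    u = rng.random()
    return [u, 1.0 - u]

print(tagged_fragment_sizes(binary_uniform, 1e-3))
```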

In the case $q=1$ , the branch $\{u\in\mathcal{U}\;:\; A_{u}\neq\emptyset\}$ has the same law as the randomly tagged branch of [Reference Bertoin3, Section 1.2.3]. The presentation is simpler in our case because the Malthusian exponent is 1 under Assumption 2.2.

2.3. Observation scheme

We freeze the process when the fragments become smaller than a given threshold $\varepsilon>0$ . That is, we have the data $(\xi_{u})_{u\in\mathcal{U}_{\varepsilon}}$ , where $\mathcal{U}_{\varepsilon}=\{u\in\mathcal{U},\,\xi_{{\boldsymbol{{m}}}(u)}\geq\varepsilon,\,\xi_{u}<\varepsilon\}$ .

We now look at q tagged fragments ( $q\in\mathbb{N}^{*}$ ). For each i in [q], we call $L_{0}^{(i)}=1, L_{1}^{(i)}, L_{2}^{(i)},\dots{}$ the successive sizes of the fragment having the tag i. More precisely, for each $n\in\mathbb{N}^{*}$ , there is almost surely exactly one $u\in\mathcal{U}$ such that $|u|=n$ and $i\in A_{u}$ ; and so, $L_{n}^{(i)}=\xi_{u}$ . For each i, the process $S_{0}^{(i)}=-\log\big(L_{0}^{(i)}\big)=0\leq S_{1}^{(i)}=-\log\big(L_{1}^{(i)}\big)\leq\cdots$ is a renewal process without delay, with waiting time following a law $\pi$ (see [Reference Asmussen1, Chapter V] for an introduction to renewal processes). The waiting times (for i in [q]) are $S_{0}^{(i)}, S_{1}^{(i)}-S_{0}^{(i)}, S_{2}^{(i)}-S_{1}^{(i)},\dots{}$ The renewal times (for i in [q]) are $S_{0}^{(i)},S_{1}^{(i)}, S_{2}^{(i)}, \dots{}$ The law $\pi$ is defined by the following:

For all bounded measurable $f\colon[0,1]\rightarrow[0,+\infty),$

(2.2) \begin{align} \qquad\qquad\qquad\qquad\qquad \int_{\mathcal{S}^{\downarrow}}\sum_{i=1}^{+\infty}s_{i}f(s_{i})\nu(\textrm{d}{\boldsymbol{{s}}})= \int_{0}^{+\infty}f(\textrm{e}^{-x})\pi(\textrm{d} x)\end{align}

(see [Reference Bertoin3, Proposition 1.6, p. 34], or [Reference Hoffmann and Krell16, (3) and (4), p. 398]). Under Assumptions 2.1 and 2.2, [Reference Bertoin3, Proposition 1.6] is true, even without the Malthusian hypothesis of [Reference Bertoin3].
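As an illustration (a toy example, not taken from the paper): for the binary dislocation $\nu$ given by the law of $(U,1-U,0,\dots)$ with U uniform on [0, 1], the left-hand side of (2.2) equals $\mathbb{E}[Uf(U)+(1-U)f(1-U)]=\int_{0}^{1}2uf(u)\,\textrm{d} u$ , so $\pi$ is the law of $-\log V$ with V of density $2v$ on (0, 1), that is, the exponential law of parameter 2. (This toy example does not satisfy Assumption 2.3 below, since the support of $\pi$ is then $(0,+\infty)$ .)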

We make the following assumption on $\pi$ .

Assumption 2.3. There exist a and b, $0<a<b<+\infty$ , such that the support of $\pi$ is [a,b]. We set $\delta=\textrm{e}^{-b}$ .

We have added a comment about Assumption 2.3 in Remark 4.1. We believe that we could replace it by the following.

Assumption 2.4. The support of $\pi$ is $(0,+\infty)$ .

However, this would lead to difficult computations.

We set

(2.3) \begin{equation}T=-\log(\varepsilon).\end{equation}

We set, for all $i\in[q]$ , $t\geq0$ ,

(2.4) \begin{align} N_{t}^{(i)} & = \inf\big\{\,j\;:\; S_{j}^{(i)}>t\big\}, \nonumber \\[5pt] B_{t}^{(i)} & =S_{N_{t}^{(i)}}^{(i)}-t, \\[5pt] C_{t}^{(i)} & = t-S_{N_{t}^{(i)}-1}^{(i)} \nonumber \end{align}

(see Fig. 2 for an illustration). The processes $B^{(i)}$ , $C^{(i)}$ , and $N^{(i)}$ are time-homogeneous Markov processes [Reference Asmussen1, Proposition 1.5, p. 141]. All of them are càdlàg (i.e. right-continuous with left limits). We call $B^{(i)}$ the residual lifetime of the fragment tagged by i. We call $C^{(i)}$ the age of the fragment tagged by i. We call $N^{(i)}$ the number of renewals up to time t. In the following, we treat t as a time parameter. This has nothing to do with the time in which the fragmentation process X evolves.
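For illustration, the quantities in (2.4) can be read directly off a list of renewal times; a minimal Python sketch, assuming the list is ordered, starts at $S_{0}=0$ , and extends beyond t:

```python
import bisect

def renewal_quantities(renewal_times, t):
    """Read N_t, B_t, C_t of (2.4) off an ordered list of renewal times
    S_0 = 0 <= S_1 <= ... (assumed to extend beyond t):
    N_t = inf{j : S_j > t},  B_t = S_{N_t} - t,  C_t = t - S_{N_t - 1}."""
    j = bisect.bisect_right(renewal_times, t)   # first index j with S_j > t
    return j, renewal_times[j] - t, t - renewal_times[j - 1]

S = [0.0, 0.7, 1.9, 2.4, 3.8]
print(renewal_quantities(S, 2.0))   # N_t = 3, B_t = 0.4, C_t = 0.1 (up to rounding)
```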

Figure 2. Process $B^{(1)}$ and $C^{(1)}$ .

We observe that, for all t, $\big(B_{t}^{(1)},\dots,B_{t}^{(q)}\big)$ is exchangeable, meaning that for all $\sigma$ in the symmetric group of order q, $\big(B_{t}^{(\sigma(1))},\dots,B_{t}^{(\sigma(q))}\big)$ has the same law as $\big(B_{t}^{(1)},\dots,B_{t}^{(q)}\big)$ . When we look at the fragments of sizes $(\xi_{u},\,u\in\mathcal{U}_{\varepsilon}\;:\; A_{u}\neq\emptyset)$ , we have almost the same information as when we look at $\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big)$ . We say almost because knowing $\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big)$ does not give exactly the number of u in $\mathcal{U}_{\varepsilon}$ such that $A_{u}$ is not empty.

In the remainder of Section 2 we define processes that will be useful when we describe the asymptotics of our model (in Section 4).

2.4. Stationary renewal processes $\big(\overline{B}^{(1)}$ , $\overline{B}^{(1),v}\big)$

We define $\widetilde{X}$ to be an independent copy of X. We suppose it has q tagged fragments. Therefore it has a mark $\big(\widetilde{\xi},\widetilde{A}\big)$ and renewal processes $\big(\widetilde{S}_{k}^{(i)}\big)_{k\geq0}$ (for all i in [q]) defined in the same way as for X. We let $\big(\widetilde{B}^{(1)},\widetilde{B}^{(2)}\big)$ be the residual lifetimes of the fragments tagged by 1 and 2.

Let $\mu=\int_{0}^{+\infty}x\pi(\textrm{d} x)$ , and let $\pi_{1}$ be the distribution with density $x\mapsto x/\mu$ with respect to $\pi$ . We set $\overline{C}$ to be a random variable of law $\pi_{1}$ ; U to be independent of $\overline{C}$ and uniform on (0, 1); and $\widetilde{S}_{-1}=\overline{C}(1-U)$ . The process $\overline{S}_{0}=\widetilde{S}_{-1}$ , $\overline{S}_{1}=\widetilde{S}_{-1}+\widetilde{S}_{1}^{(1)}$ , $\overline{S}_{2}=\widetilde{S}_{-1}+\widetilde{S}_{2}^{(1)}$ , … is a delayed renewal process with initial delay $\widetilde{S}_{-1}$ (with waiting times $\overline{S}_{0},\overline{S}_{1}-\overline{S}_{0}, \dots{}$ all smaller than b by Assumption 2.3). The renewal times are $\overline{S}_{0}, \overline{S}_{1}, \overline{S}_{2},\dots{}$ We set $(\overline{B}_{t}^{(1)})_{t\geq0}$ to be the residual lifetime process of this renewal process,

\begin{equation*} \overline{B}_{t}^{(1)}=\left\{\begin{array}{l@{\quad}l} \overline{C}(1-U)-t & \mbox{ if }t<\overline{S}_{0},\\[5pt] \inf_{n\geq0}\big\{\overline{S}_{n}\;:\;\overline{S}_{n}>t\big\}-t & \mbox{ if }t\geq\overline{S}_{0}; \end{array} \right. \end{equation*}

we define $(\overline{C}_{t}^{(1)})_{t\geq0}$ as

\begin{equation*} \overline{C}_{t}^{(1)}=\left\{\begin{array}{l@{\quad}l} \overline{C}U+t & \mbox{ if }t<\overline{S}_{0},\\[5pt] t-\sup_{n\geq0}\big\{\overline{S}_{n}\;:\;\overline{S}_{n}\leq t\big\} & \mbox{ if }t\geq\overline{S}_{0} \end{array} \right. \end{equation*}

(we call it the age process of our renewal process); and we set $\overline{N}_{t}^{(1)}=\inf\big\{\,j\;:\;\overline{S}_{j}>t\big\}$ .

Fact 2.1. Theorem 3.3 on p. 151 of [Reference Asmussen1] tells us that $\big(\overline{B}_{t}^{(1)},\overline{C}_{t}^{(1)}\big)_{t\geq0}$ has the same transition as $\big(B_{t}^{(1)},C_{t}^{(1)}\big)_{t\geq0}$ defined above, and that $\big(\overline{B}_{t}^{(1)},\overline{C}_{t}^{(1)}\big)_{t\geq0}$ is stationary. In particular, this means that the law of $\overline{B}_{t}^{(1)}$ does not depend on t.

Figure 3 provides a graphic representation of $\overline{B}_{\cdot}^{(1)}$ . It might be counter-intuitive to start with $\overline{B}_{0}^{(1)}$ having a law which is not $\pi$ in order to get a stationary process, but [Reference Asmussen1, Corollary 3.6, p. 153] is clear on this point: a delayed renewal process (with waiting time of law $\pi$ ) is stationary if and only if the distribution of the initial delay is $\eta$ (defined below).

Figure 3. Renewal process with delay.

We define a measure $\eta$ on $\mathbb{R}^{+}$ by its action on bounded measurable functions $f\colon\mathbb{R}^{+}\rightarrow\mathbb{R}$:

(2.5) \begin{align} \eta(f)=\frac{1}{\mu}\int_{\mathbb{R}^{+}}\mathbb{E}(f(Y-s)\mathbf{1}_{\{Y-s>0\}})\,\textrm{d} s\quad (Y\sim\pi). \end{align}

Lemma 2.1. The measure $\eta$ is the law of $\overline{B}_{t}^{(1)}$ (for any $t\geq 0$ ). It is also the law of $\big(\overline{C}_{t}^{(1)}\big)$ (for any t).

Proof. We show the proof for $\overline{B}_{t}^{(1)}$ only. Let $\xi\geq0$ . We set $f(y)=\mathbf{1}_{y\geq\xi}$ , for all y in $\mathbb{R}$ . We have, with Y of law $\pi$ ,

\begin{align*} \frac{1}{\mu}\int_{\mathbb{R}^{+}}\mathbb{E}(f(Y-s)\mathbf{1}_{Y-s>0})\,\textrm{d} s &= \frac{1}{\mu}\int_{\mathbb{R}^{+}}\bigg(\int_{0}^{y}\mathbf{1}_{y-s\geq\xi}\,\textrm{d} s\bigg)\pi(\textrm{d} y) \\[5pt] & = \frac{1}{\mu}\int_{\mathbb{R}^{+}}(y-\xi)_{+}\pi(\textrm{d} y) \\[5pt] &= \int_{\xi}^{+\infty}\bigg(1-\frac{\xi}{y}\bigg)\frac{y}{\mu}\pi(\textrm{d} y) = \mathbb{P}(\overline{C}(1-U)\geq\xi). \end{align*}
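As a sanity check, Lemma 2.1 also suggests a way to sample (approximately) from $\eta$ : draw $\overline{C}$ from the size-biased law $\pi_{1}$ and multiply by $1-U$ . A minimal Python sketch, where pi_sampler is a user-supplied draw from $\pi$ and the size-biasing is approximated by weighted resampling from a finite pool of draws:

```python
import random

def sample_eta(pi_sampler, n=1, rng=random.Random(3)):
    """Approximate samples from eta: draw C_bar from the size-biased law pi_1
    (density x/mu with respect to pi) and return C_bar * (1 - U), U uniform.
    The size-biasing is approximated by weighted resampling from a pool."""
    pool = [pi_sampler(rng) for _ in range(10000)]
    return [rng.choices(pool, weights=pool, k=1)[0] * (1.0 - rng.random())
            for _ in range(n)]

# e.g. with pi uniform on [a, b] = [0.5, 1.5] (a hypothetical choice):
print(sample_eta(lambda r: r.uniform(0.5, 1.5), n=5))
```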

We set $\eta_{2}$ to be the law of $\big(\overline{C}_{0}^{(1)},\overline{B}_{0}^{(1)}\big)=\big(\overline{C}U,\overline{C}(1-U)\big)$ . The support of $\eta_{2}$ is $\mathcal{C}\;:\!=\;\{(u,v)\in[0,b]^{2}\;:\; a\leq u+v\leq b\}$ .

For v in $\mathbb{R}$ , we now want to define a process

(2.6) \begin{equation} \big(\overline{C}_{t}^{(1),v},\overline{B}_{t}^{(1),v}\big)_{t\geq v-2b}\ \text{having the same transition as}\ \big(C_{t}^{(1)},B_{t}^{(1)}\big)\ \text{and being stationary.}\end{equation}

We set $\big(\overline{C}_{v-2b}^{(1),v},\overline{B}_{v-2b}^{(1),v}\big)$ such that it has the law $\eta_{2}$ . As we have given its transition, the process $\big(\overline{C}_{t}^{(1),v},\overline{B}_{t}^{(1),v}\big)_{t\geq v-2b}$ is well defined in law. In addition, we suppose that it is independent of all the other processes. By Fact 2.1, the process $\big(\overline{C}_{t}^{(1),v},\overline{B}_{t}^{(1),v}\big)_{t\geq v-2b}$ is stationary.

We define the renewal times of $\overline{B}^{(1),v}$ by $\overline{S}_{1}^{(1),v}=\inf\big\{t\geq v-2b\;:\;\overline{B}_{t+}^{(1),v}\neq\overline{B}_{t-}^{(1),v}\big\}$ , and, recursively, $\overline{S}_{k}^{(1),v}=\inf\big\{t>\overline{S}_{k-1}^{(1),v}\;:\;\overline{B}_{t+}^{(1),v}\neq\overline{B}_{t-}^{(1),v}\big\}$ . We also define, for all t, $\overline{N}_{t}^{(1),v}=\inf\big\{j\;:\;\overline{S}_{j}^{(1),v}>t\big\}$ . As will be seen later, the processes $\overline{B}^{(1),v}$ and $\overline{B}^{(2),v}$ are used to define asymptotic quantities (see, for example, Proposition 4.1) and we need them to be defined on an interval $[v,+\infty)$ with v possibly in $\mathbb{R}^{-}$ . The process $\overline{B}^{(2),v}$ is defined below (Section 2.6).

Figure 4. Processes $\widehat{B}^{(1),v}$ , $\widehat{B}^{(2),v}$ .

2.5. Tagged fragments conditioned to split up $\big(\widehat{B}^{(1),v},\widehat{B}^{(2),v}\big)$

For v in $[0,+\infty)$ , we define a process $\big(\widehat{B}_{t}^{(1),v},\widehat{B}_{t}^{(2),v}\big)_{t\geq0}$ such that

(2.7) \begin{align} \;\widehat{B}^{(1),v}=B^{(1)}\text{ and, with }B^{(1)}\text{ fixed, }& \widehat{B}^{(2),v}\text{ has the law of }B^{(2)}\text{ conditioned on } \nonumber \\[5pt] & \text{for all } u\in\mathcal{U},\ 1\in A_{u}\Rightarrow[2\in A_{u}\Leftrightarrow-\log(\xi_{u})\leq v], \end{align}

which reads as follows: the tag 2 remains on the fragment bearing the tag 1 until the size of the fragment is smaller than $\textrm{e}^{-v}$ . We observe that, conditionally on $\widehat{B}_{v}^{(1),v}$ and $\widehat{B}_{v}^{(2),v}$ , $\big(\widehat{B}_{v+\widehat{B}_{v}^{(1),v}+t}^{(1),v}\big)_{t\geq0}$ and $\big(\widehat{B}_{v+\widehat{B}_{v}^{(2),v}+t}^{(2),v}\big)_{t\geq0}$ are independent. We also define $\widehat{C}^{(1),v}=C^{(1)}$ . There is an algorithmic way to define $\widehat{B}^{(1),v}$ and $\widehat{B}^{(2),v}$ , which is illustrated in Fig. 4. Remember that $\widehat{B}^{(1),v}=B^{(1)}$ , and the definition of the mark $(\xi_{u},A_{u},I_{u})_{u\in\mathcal{U}}$ in Section 2.2. We call $\big(\widehat{S}_{j}^{(i)}\big)_{i=1,2;j\geq1}$ the renewal times of these processes (as before, they can be defined as the times when the right-hand-side and left-hand-side limits are not the same). If $\widehat{S}_{j}^{(1)}\leq v$ then $\widehat{S}_{j}^{(2)}=\widehat{S}_{j}^{(1)}$ . If k is such that $\widehat{S}_{k-1}^{(1)}\leq v$ and $\widehat{S}_{k}^{(1)}>v$ , we remember that

(2.8) \begin{equation} \exp\big(\widehat{S}_{k}^{(1)}-\widehat{S}_{k-1}^{(1)}\big)=\widetilde{\xi}_{ui}\end{equation}

for some u in $\mathcal{U}$ with $|u|=k-1$ and some i in $\mathbb{N}^{*}$ (because $\widehat{B}^{(1),v}=B^{(1)}$ ). We have points $Y_{1}, Y_{2}\in[0,1]$ such that $Y_{1}$ and $Y_{2}$ are in $I_{u}$ of length $\xi_{u}$ . Conditionally on $\{Y_{1},Y_{2}\in I_{u}\}$ , $Y_{1}$ and $Y_{2}$ are independent and uniformly distributed on $I_{u}$ . The interval $I_{ui}$ , of length $\xi_{u}\widetilde{\xi}_{ui}$ , is a sub-interval of $I_{u}$ such that $Y_{1}\in I_{ui}$ , because of (2.8). Then, for $r\in\mathbb{N}^{*}\backslash\{i\}$ , we want $Y_{2}$ to be in $I_{ur}$ with probability $\widetilde{\xi}_{ur}/\big(1-\widetilde{\xi}_{ui}\big)$ (because we want $2\notin A_{ui}$ ). So we take $\widehat{S}_{k}^{(2)}=\widehat{S}_{k-1}^{(1)}-\log\widetilde{\xi}_{ur}$ with probability $\widetilde{\xi}_{ur}/\big(1-\widetilde{\xi}_{ui}\big)$ ( $r\in\mathbb{N}^{*}\backslash\{i\}$ ).
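A minimal Python sketch of this single conditioning step, where tilde stands for the list $\big(\widetilde{\xi}_{u1},\widetilde{\xi}_{u2},\dots\big)$ of relative sizes at the split and i is the (0-based) index of the child carrying tag 1:

```python
import random

def second_tag_child(tilde, i, rng=random.Random(4)):
    """One conditioning step of the construction around (2.8): tag 1 sits in
    child i of the split (tilde_xi_1, tilde_xi_2, ...); tag 2, conditioned to
    leave tag 1 at this split, goes to child r != i with probability
    tilde_xi_r / (1 - tilde_xi_i)."""
    others = [r for r in range(len(tilde)) if r != i]
    return rng.choices(others, weights=[tilde[r] for r in others], k=1)[0]

print(second_tag_child([0.5, 0.3, 0.2], i=0))   # returns 1 with prob. 0.6, 2 with prob. 0.4
```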

Fact 2.2.

  (i) The knowledge of the couple $\big(\widehat{S}_{N_{v}^{(1)}-1}^{(1)},\widehat{B}_{v}^{(1),v}\big)$ is equivalent to the knowledge of the couple $\big(\widehat{C}_{v}^{(1),v},\widehat{B}_{v}^{(1),v}\big)$ .

  (ii) The law of $B_{v}^{(1)}$ knowing $C_{v}^{(1)}$ is the law of $Y-C_{v}^{(1)}$ , where Y has the law $\pi$ conditioned to be bigger than $C_{v}^{(1)}$ ; we call it $\eta_{1}\big(\dots\mid C_{v}^{(1)}\big)$ . As $\widehat{B}^{(1),v}=B^{(1)}$ and $\widehat{C}^{(1),v}=C^{(1)}$ , we also have that the law of $\widehat{B}_{v}^{(1),v}$ knowing $\widehat{C}_{v}^{(1),v}$ is $\eta_{1}\big(\dots\mid \widehat{C}_{v}^{(1),v}\big)$ .

  (iii) The law of $\widehat{B}_{v}^{(2),v}$ knowing $\big(\widehat{C}_{v}^{(1),v},\widehat{B}_{v}^{(1),v}\big)$ does not depend on v and we denote it by $\eta'\big(\dots\mid\widehat{C}_{v}^{(1),v},\widehat{B}_{v}^{(1),v}\big)$ .

The subsequent waiting times $\widehat{S}_{k+1}^{(1)}-\widehat{S}_{k}^{(1)},\widehat{S}_{k+1}^{(2)}-\widehat{S}_{k}^{(2)}, \dots{}$ are chosen independently of each other, each of them having the law $\pi$ . For j equal to 1 or 2 and t in $[0,+\infty)$ , we define $\widehat{N}_{t}^{(j)}=\inf\{i\;:\;\widehat{S}_{i}^{(j)}>t\}$ . We observe that, for $t\geq2b$ , $\widehat{N}_{t}^{(1)}$ is bigger than 2 (because of Assumption 2.3).

2.6. Two stationary processes after a split-up ( $\overline{B}^{(1),v},\overline{B}^{(2),v}$ )

Let k be an integer, $k\geq2$ , such that

(2.9) \begin{equation} k\times(b-a)\geq b.\end{equation}

Now we state a small lemma that will be useful in what follows. Remember that the process $\big(\overline{C}_{t}^{(1),v},\overline{B}_{t}^{(1),v}\big)_{t\geq v-2b}$ is defined in (2.6). The process $\big(\widehat{C}_{t}^{(1),v},\widehat{B}_{t}^{(1),v}\big)_{t\geq0}$ is defined in the previous section.

Lemma 2.2. Let v be in $\mathbb{R}$ . The variables $\big(\overline{C}_{v}^{(1),v},\overline{B}_{v}^{(1),v}\big)$ and $\big(\widehat{C}_{kb}^{(1),kb},\widehat{B}_{kb}^{(1),kb}\big)$ have the same support (and it is $\mathcal{C}$ , defined below Lemma 2.1).

Proof. The law $\eta_{2}$ is the law of $\big(\overline{C}_{0}^{(1)},\overline{B}_{0}^{(1)}\big)$ ( $\eta_{2}$ is defined below Lemma 2.1). As previously stated, the support of $\eta_{2}$ is $\mathcal{C}$ ; so, by stationarity, the support of $\big(\overline{C}_{v}^{(1),v},\overline{B}_{v}^{(1),v}\big)$ is $\mathcal{C}$ .

Keep in mind that $\widehat{B}^{(1),v}=B^{(1)}$ , $\widehat{C}^{(1),v}=C^{(1)}$ . By Assumption 2.3, the support of $S_{k}^{(1)}$ is [ka, kb] and the support of $S_{k+1}^{(1)}-S_{k}^{(1)}$ is [a, b]. If $S_{k+1}^{(1)}>kb$ then $B_{kb}^{(1)}=S_{k+1}^{(1)}-S_{k}^{(1)}-\big(kb-S_{k}^{(1)}\big)$ and $C_{kb}^{(1)}=kb-S_{k}^{(1)}$ (see Fig. 5).

Figure 5. $B_{kb}^{(1)}$ and $C_{kb}^{(1)}$ .

The support of $S_{k}^{(1)}$ is [ka, kb] and $kb-ka\geq b$ (see (2.9)), so, as $S_{k}^{(1)}$ and $S_{k+1}^{(1)}-S_{k}^{(1)}$ are independent, we get that the support of $\big(C_{kb}^{(1)},S_{k+1}^{(1)}-S_{k}^{(1)}\big)$ includes $\{(u,w)\in[0;\;b]^{2}\;:\; w\geq\sup(a,u)\}$ . Hence, the support of $\big(C_{kb}^{(1)},B_{kb}^{(1)}\big)=\big(C_{kb}^{(1)},S_{k+1}^{(1)}-S_{k}^{(1)}-C_{kb}^{(1)}\big)$ includes $\mathcal{C}$ . As this support is included in $\mathcal{C}$ , we have proved the desired result.

For v in $\mathbb{R}$ , we define a process $\big(\overline{B}_{t}^{(2),v}\big)_{t\geq v}$ . We start with:

(2.10) \begin{equation} \overline{B}_{v}^{(2),v}\text{ has the law } \eta'\big(\dots\mid\overline{C}_{v}^{(1),v},\overline{B}_{v}^{(1),v}\big)\end{equation}

(remember that $\eta'$ is defined in Fact 2.2). This conditioning is correct because the law of $\big(\overline{C}_{v}^{(1),v},\overline{B}_{v}^{(1),v}\big)$ is the law $\eta_{2}$ , whose support is included in the support of the law of $\big(\widehat{C}_{kb}^{(1),kb},\widehat{B}_{kb}^{(1),kb}\big)$ , which is $\mathcal{C}$ (see the lemma above, and below (2.6)). We then let the process $\big(\overline{B}_{t}^{(1),v},\overline{B}_{t}^{(2),v}\big)_{t\geq v}$ run its course as a Markov process having the same transition as $\big(\widehat{B}_{t-v+kb}^{(1),kb},\widehat{B}_{t-v+kb}^{(2),kb}\big)_{t\geq v}$ . This means that, after time v, $\overline{B}_{t}^{(1),v}$ and $\overline{B}_{t}^{(2),v}$ decrease linearly (with slope $-1$ ) until they reach 0. When they reach 0, each of these two processes makes a jump of law $\pi$ , independently of the other one. After that, they decrease linearly, and so on.

Fact 2.3. The process $\big(\overline{B}_{t}^{(1),v},\overline{B}_{t}^{(2),v}\big)_{t\geq v}$ is assumed to be independent of all the other processes (until now, we have defined its law and said that $\overline{B}^{(1),v}$ is independent of all the other processes).

3. Rate of convergence in the key renewal theorem

We need the following regularity assumption.

Assumption 3.1. The probability $\pi(\textrm{d} x)$ is absolutely continuous with respect to the Lebesgue measure (we will write $\pi(\textrm{d} x)=\pi(x)\,\textrm{d} x$ ). The density function $x\mapsto\pi(x)$ is continuous on $(0;+\infty)$ .

Fact 3.1. Let $\theta>1$ ( $\theta$ is fixed in the rest of the paper). The density $\pi$ satisfies $\limsup_{x\rightarrow+\infty}\exp(\theta x)\pi(x)<+\infty$ .

For $\varphi$ a non-negative Borel-measurable function on $\mathbb{R}$ , we set $S(\varphi)$ to be the set of complex-valued measures $\rho$ (on the Borel sets) such that $\int_{\mathbb{R}}\varphi(x)|\rho|(\textrm{d} x)<\infty$ , where $|\rho|$ stands for the total variation of $\rho$ . If $\rho$ is a finite complex-valued measure on the Borel sets of $\mathbb{R}$ , we define $\mathcal{T}\rho$ to be the $\sigma$ -finite measure with the density

\begin{align*}v(x)=\left\{\begin{array}{l@{\quad}l}\rho((x,+\infty)) & \text{ if }x\geq0,\\[5pt] -\rho((\!-\!\infty,x]) & \text{ if }x<0.\end{array} \right. \end{align*}

Let F be the cumulative distribution function of $\pi$ .

We set $B_{t}=B_{t}^{(1)}$ (see (2.4) for the definition of $B^{(1)}, B^{(2)}, \ldots$ ). By [Reference Asmussen1, Theorem 3.3, p. 151, and Theorem 4.3, p. 156], we know that $B_{t}$ converges in law to a random variable $B_{\infty}$ (of law $\eta$ ) and that $C_{t}$ converges in law to a random variable $C_{\infty}$ (of law $\eta$ ). The following theorem is a consequence of [Reference Sgibnev21, Theorem 5.1, p. 2429]. It shows there is actually a rate of convergence for these convergences in law.
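Before stating the theorem, note that the convergence in law of $B_{t}$ is easy to check numerically; a minimal Python sketch, where pi_sampler is a user-supplied draw from $\pi$ (a uniform law on [a, b] in the usage line, a purely illustrative choice), and where a histogram of many draws can be compared with $\eta$ of (2.5):

```python
import random

def simulate_B_t(t, pi_sampler, rng):
    """One draw of the residual lifetime B_t for the renewal process without
    delay: accumulate waiting times of law pi until the running sum exceeds t,
    then return the overshoot."""
    s = 0.0
    while s <= t:
        s += pi_sampler(rng)
    return s - t

rng = random.Random(6)
draws = [simulate_B_t(50.0, lambda r: r.uniform(0.5, 1.5), rng) for _ in range(10000)]
```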

Theorem 3.1. Let $\varepsilon'\in(0,\theta)$ , $M\in(0,+\infty)$ , and

\begin{align*} \varphi(x) = \left\{\begin{array}{l@{\quad}l} \textrm{e}^{(\theta-\varepsilon')x} & \text{ if } x\geq0, \\[5pt] 1 & \text{ if } x<0. \end{array} \right. \end{align*}

If Y is a random variable of law $\pi$ , then

(3.1) \begin{equation} \sup_{\alpha\;:\;\Vert\alpha\Vert_{\infty}\leq M} \bigg|\mathbb{E}(\alpha(B_{t}))-\frac{1}{\mu}\int_{\mathbb{R}^{+}}\mathbb{E}(\alpha(Y-s)\mathbf{1}_{\{Y-s>0\}})\,\textrm{d} s\bigg| = o\bigg(\frac{1}{\varphi(t)}\bigg) \end{equation}

as t approaches $+\infty$ outside a set of Lebesgue measure zero (the supremum is taken on $\alpha$ in the set of Borel-measurable functions on $\mathbb{R}$ ), and

(3.2) \begin{equation} \sup_{\alpha\;:\;\Vert\alpha\Vert_{\infty}\leq M} \bigg|\mathbb{E}(\alpha(C_{t}))-\frac{1}{\mu}\int_{\mathbb{R}^{+}}\mathbb{E}(\alpha(Y-s)\mathbf{1}_{\{Y-s>0\}})\,\textrm{d} s\bigg| = o\bigg(\frac{1}{\varphi(t)}\bigg) \end{equation}

as t approaches $+\infty$ outside a set of Lebesgue measure zero (the supremum is taken on $\alpha$ in the set of Borel-measurable functions on $\mathbb{R}$ ).

Proof. We give the proof of (3.1); the proof of (3.2) is very similar.

Let $*$ stand for the convolution product. We define the renewal measure $U(\textrm{d} x) = \sum_{n=0}^{+\infty}\pi^{*n}(\textrm{d} x)$ (where $\pi^{*0}(\textrm{d} x) = \delta_{0}$ , the Dirac mass at 0, and $\pi^{*n}=\pi*\pi*\dots*\pi$ , n times). We take i.i.d. variables $X,X_{1},X_{2},\dots$ of law $\pi$ . Let $f\;:\;\mathbb{R}\rightarrow\mathbb{R}$ be a measurable function such that $\Vert f\Vert_{\infty}\leq M$ . We have, for all $t\geq0$ ,

\begin{align*} \mathbb{E}(f(B_{t})) &= \mathbb{E}\Bigg(\sum_{n=0}^{+\infty}f(X_{1}+X_{2}+\dots+X_{n+1}-t) \mathbf{1}_{\{X_{1}+\dots+X_{n}\leq t<X_{1}+\dots+X_{n+1}\}}\Bigg) \\[5pt] &= \int_{0}^{t}\mathbb{E}(f(s+X-t)\mathbf{1}_{\{s+X-t>0\}})U(\textrm{d} s). \end{align*}

We set

\begin{align*} g(t) = \left\{\begin{array}{l@{\quad}l} \mathbb{E}(f(X-t)\mathbf{1}_{\{X-t>0\}}) & \text{ if }t\geq0, \\[5pt] 0 & \text{ if }t<0. \end{array} \right. \end{align*}

We observe that $\Vert g\Vert_{\infty}\leq M$ . We have, for all $t\geq0$ ,

\begin{equation*} |\mathbb{E}(f(X-t)\mathbf{1}_{\{X-t>0\}})| \leq \Vert f\Vert_{\infty}\mathbb{P}(X>t) \leq \Vert f\Vert_{\infty} \textrm{e}^{-(\theta-{\varepsilon'}/{2})t}\mathbb{E}\big(\textrm{e}^{(\theta-{\varepsilon'}/{2})X}\big). \end{equation*}

We have, by Fact 3.1, $\mathbb{E}\big(\textrm{e}^{(\theta-{\varepsilon'}/{2})X}\big)<\infty$ . The function $\varphi$ is sub-multiplicative and is such that

\begin{align*} \lim_{x\rightarrow-\infty}\frac{\log(\varphi(x))}{x} = 0 \leq \lim_{x\rightarrow+\infty}\frac{\log(\varphi(x))}{x} = \theta-\varepsilon'. \end{align*}

The function g is in $L^{1}(\mathbb{R})$ , and the function $g.\varphi$ is in $L^{\infty}(\mathbb{R})$ . We have $g(x)\varphi(x)\rightarrow0$ as $|x|\rightarrow\infty$ ,

\begin{align*} \varphi(t)\int_{t}^{+\infty}|g(x)|\,\textrm{d} x\underset{t\rightarrow+\infty}{\longrightarrow}0,\qquad \varphi(t)\int_{-\infty}^{t}|g(x)|\,\textrm{d} x\underset{t\rightarrow-\infty}{\longrightarrow}0, \end{align*}

and $\mathcal{T}^{\circ2}(\pi)\in S(\varphi)$ .

Let us now take a function $\alpha$ such that $\Vert\alpha\Vert_{\infty}\leq M$ . We set

\begin{align*} \widehat{\alpha}(t) = \left\{\begin{array}{l@{\quad}l} \mathbb{E}\big(\alpha(X-t)\mathbf{1}_{\{X-t\geq0\}}\big) & \text{ if }t\geq0, \\[5pt] 0 & \text{ if }t<0. \end{array} \right. \end{align*}

Then we have $\Vert\widehat{\alpha}\Vert_{\infty}\leq M$ and, computing as above for f, $\mathbb{E}(\alpha(B_{t})) = \widehat{\alpha}*U(t)$ .

In the case where f is a constant equal to M, we have $\Vert g\Vert_{\infty}=M$ . So, by [Reference Sgibnev21, Theorem 5.1] (applied to the case $f\equiv M$ ), we have proved the desired result.

Corollary 3.1. There exists a constant $\Gamma_{1}$ bigger than 1 such that, for any bounded measurable function F on $\mathbb{R}$ such that $\eta(F)=0$ , for t outside a set of Lebesgue measure zero,

(3.3) \begin{align} |\mathbb{E}(F(B_{t}))| & \leq \Vert F\Vert_{\infty}\times\frac{\Gamma_{1}}{\varphi(t)}, \end{align}
(3.4) \begin{align} |\mathbb{E}(F(C_{t}))| & \leq \Vert F\Vert_{\infty}\times\frac{\Gamma_{1}}{\varphi(t)}.\end{align}

Proof. We provide the proof of (3.3) only; the proof of (3.4) is very similar.

We take $M=1$ in Theorem 3.1. Keep in mind that $\eta$ is defined in (2.5). By the above theorem, there exists a constant $\Gamma_{1}$ such that, for all measurable functions $\alpha$ such that $\Vert\alpha\Vert_{\infty}\leq1$ ,

(3.5) \begin{equation} \left|\mathbb{E}(\alpha(B_{t}))-\eta(\alpha)\right|\leq\frac{\Gamma_{1}}{\varphi(t)}\ \text{(for }t\text{ outside a set of Lebesgue measure zero).} \end{equation}

Let us now take a bounded measurable F such that $\eta(F)=0$ . By (3.5), we have, for t outside a set of Lebesgue measure zero,

\begin{align*} \bigg|\mathbb{E}\bigg(\frac{F(B_{t})}{\Vert F\Vert_{\infty}}\bigg) - \eta\bigg(\frac{F}{\Vert F\Vert_{\infty}}\bigg)\bigg| & \leq \frac{\Gamma_{1}}{\varphi(t)} \\[5pt] |\mathbb{E}(F(B_{t}))| &\leq \Vert F\Vert_{\infty}\times\frac{\Gamma_{1}}{\varphi(t)}. \end{align*}

4. Limits of symmetric functionals

4.1. Notation

We fix $q\in\mathbb{N}^{*}$ , and set $\mathcal{S}_{q}$ to be the symmetric group of order q. A function $F\;:\;\mathbb{R}^{q}\rightarrow\mathbb{R}$ is symmetric if, for all $\sigma\in\mathcal{S}_{q}$ and all $(x_{1},\dots,x_{q})\in\mathbb{R}^{q}$ ,

\begin{align*}F(x_{\sigma(1)},x_{\sigma(2)},\dots,x_{\sigma(q)})=F(x_{1},x_{2},\dots,x_{q}).\end{align*}

For $F\;:\;\mathbb{R}^{q}\rightarrow\mathbb{R}$ , we define a symmetric version of F by

(4.1) \begin{equation} F_{\text{sym}}(x_{1},\dots,x_{q}) = \frac{1}{q!}\sum_{\sigma\in\mathcal{S}_{q}}F(x_{\sigma(1)},\dots,x_{\sigma(q)})\qquad \text{for all }(x_{1},\dots,x_{q})\in\mathbb{R}^{q}.\end{equation}
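For illustration, a minimal Python sketch of the symmetrization (4.1), assuming F is given as a function of q real arguments:

```python
from itertools import permutations

def symmetrize(F, q):
    """Return F_sym of (4.1): the average of F over all permutations of its q arguments."""
    perms = list(permutations(range(q)))
    def F_sym(*x):
        return sum(F(*(x[s] for s in sigma)) for sigma in perms) / len(perms)
    return F_sym

F_sym = symmetrize(lambda a, b: a * b ** 2, q=2)
print(F_sym(1.0, 2.0))   # (1*4 + 2*1) / 2 = 3.0
```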

We set $\mathcal{B}_{\text{sym}}(q)$ to be the set of bounded, measurable, symmetric functions F on $\mathbb{R}^{q}$ , and we set $\mathcal{B}_{\text{sym}}^{0}(q)$ to be the set of F in $\mathcal{B}_{\text{sym}}(q)$ such that

\begin{align*} \int_{x_{1}}F(x_{1},x_{2},\dots,x_{q})\eta(\textrm{d} x_{1}) = 0 \qquad \text{for all } (x_{2},\dots,x_{q})\in\mathbb{R}^{q-1}.\end{align*}

Suppose that k is in [q] and $l\geq1$ . For t in [0, T], we consider the following collections of nodes of $\mathcal{U}$ (remember that $T=-\log\varepsilon$ , and $\mathcal{U}$ and $\mathbf{m}(\!\cdot\!)$ are defined in Section 2.1):

(4.2) \begin{align} \mathcal{T}_{1} & = \{u\in\mathcal{U}\backslash\{0\}\;:\; A_{u}\neq\emptyset,\,\xi_{{\boldsymbol{{m}}}(u)}\geq\varepsilon\}\cup\{0\}, \nonumber \\[5pt] S(t) & =\{u\in\mathcal{T}_{1}\;:\;-\log(\xi_{{\boldsymbol{{m}}}(u)})\leq t,\,-\log(\xi_{u})>t\}=\mathcal{U}_{\textrm{e}^{-t}},\end{align}
(4.3) \begin{align} L_{t} & = \sum_{u\in S(t)\,:\,A_{u}\neq\emptyset}(\#A_{u}-1).\end{align}

We set $\mathcal{L}_{1}$ to be the set of leaves in the tree $\mathcal{T}_{1}$ . For t in [0, T] and i in [q], there exists one and only one u in S(t) such that $i\in A_{u}$ . We call it $u\{t,i\}$ . Under Assumption 2.3, there exists a constant bounding the number of vertices of $\mathcal{T}_{1}$ almost surely.

Figure 6. Example tree and marks.

Let us consider the example in Fig. 6. Here, we have a graphical representation of a realization of $\mathcal{T}_{1}$ . Each node u of $\mathcal{T}_{1}$ is written above a rectangular box in which we read $A_{u}$ ; the right side of the box has the coordinate $-\log(\xi_{u})$ on the x-axis. For simplicity, the node (1,1) is designated by 11, the node (1,2) by 12, and so on. In this example:

\begin{align*} \mathcal{T}_{1} & = \{(0),(1),(2),(1,1),(2,1),(1,2),(1,1,1),(2,1,1),(1,1,1,1),(1,2,1)\}, \\[5pt] \mathcal{L}_{1} & = \{(2,1,1),(1,1,1,1),(1,2,1)\}, \\[5pt] A_{(1)} &= \{1,2,3\},\ A_{(1,2)}=\{1,2\},\ \dots{}, \\[5pt] S(t) &= \{(1,2),(1,1),(2,1)\}, \\[5pt] u\{t,1\} &=(1,2),\ u\{t,2\}=(1,2),\ u\{t,3\}=(1,1),\ u\{t,4\}=(2,1).\end{align*}

For $k, l \in \mathbb{N}$ and $t \in [0,T]$ , we define the event

\begin{align*} C_{k,l}(t)=\Bigg\{\sum_{u\in S(t)}\mathbf{1}_{\#A_{u}=1}=k,\,\sum_{u\in S(t)}(\#A_{u}-1)=l\Bigg\}.\end{align*}

For example, in Fig. 6, we are in the event $C_{2,1}(t)$ .

We define $\mathcal{T}_{2}=\{u\in\mathcal{T}_{1}\backslash\{0\}\;:\;\#A_{{\boldsymbol{{m}}}(u)}\geq2\}\cup\{0\}$ , $m_{2}\;:\; u\in\mathcal{T}_{2}\mapsto(\xi_{u},\inf\{i,i\in A_{u}\})$ . For example, in Fig. 6, $\mathcal{T}_{2}=\{(0),(1),(2),(1,1),(1,2),(1,2,1)\}$ . Let $\alpha$ be in (0, 1).

Fact 4.1. We can always suppose that $(1-\alpha)T>b$ because we are interested in T going to infinity. So, in the following, we suppose $(1-\alpha)T>b$ .

For any t, we can compute $\sum_{u\in S(t)}(\#A_{u}-1)$ if we know $\sum_{u\in S(t)}\mathbf{1}_{\#A_{u}=1}$ and $\#S(t)$ . As $T-\alpha T>b$ , any u in $S(\alpha T)$ satisfies $\#A_{u}\geq2$ if and only if u is the mother of some v in $\mathcal{T}_{2}$ . So we deduce that $C_{k,l}(\alpha T)$ is measurable with respect to $(\mathcal{T}_{2},m_{2})$ . We set, for all u in $\mathcal{T}_{2}$ ,

(4.4) \begin{equation} T_{u}=-\log(\xi_{u}).\end{equation}

For any i in [q], $t\mapsto u\{t,i\}$ is piecewise constant and the ordered sequence of its jump times is $S_{1}^{(i)}<S_{2}^{(i)}<\cdots$ (the $S_{\dots}^{(i)}$ are defined in Section 2.3). We simply have that $1, \textrm{e}^{-S_{1}^{(i)}}, \textrm{e}^{-S_{2}^{(i)}}, \dots$ are the successive sizes of the fragment supporting the tag i. For example, in Fig. 6, we have

(4.5) \begin{equation} S_{1}^{(1)}=-\log(\xi_{1}), \quad S_{2}^{(1)}=-\log(\xi_{(1,2)}), \quad S_{3}^{(1)}=-\log(\xi_{(1,2,1)}), \dots\end{equation}

Let $\mathcal{L}_{2}$ be the set of leaves u in the tree $\mathcal{T}_{2}$ such that the set $A_{u}$ has a single element $n_{u}$ . For example, in Fig. 6, $\mathcal{L}_{2}=\{(2),(1,1)\}$ . We observe that $\# \mathcal{L}_{1}=q\Leftrightarrow\# \mathcal{L}_{2}=q$ , and thus

(4.6) \begin{equation} \{\# \mathcal{L}_{1}=q\}\in\sigma(\mathcal{L}_{2}).\end{equation}

We summarize the definition of $n_{u}$ :

(4.7) \begin{equation} \#A_{u}=1\Rightarrow A_{u}=\{n_{u}\}.\end{equation}

For q even ( $q=2p$ ) and for all t in [0, T], we define the events

\begin{align*} & G_{t} =\{\text{for all } i\in[p],\text{ there exists } u_{i}\in\mathcal{U}\;:\;\xi_{u_{i}} \lt \textrm{e}^{-t},\,\xi_{{\boldsymbol{{m}}}(u_{i})}\geq \textrm{e}^{-t},\,A_{u_{i}}=\{2i-1,2i\}\}, \\[5pt] & \text{for all } i\in[p],\ G_{i,i+1}(t) = \{\text{there exists } u\in S(t)\;:\;\{2i-1,2i\}\subset A_{u}\}.\end{align*}

We set, for all t in [0, T], $\mathcal{F}_{S(t)}=\sigma(S(t),(\xi_{u},A_{u})_{u\in S(t)})$ .

4.2. Intermediate results

The reader should keep in mind that $T=-\log(\varepsilon)$ (see (2.3)) and that $\delta$ is defined in Assumption 2.3. The set $\mathcal{B}_{\textrm{sym}}^0(q)$ is defined in Section 4.1.

Lemma 4.1. We suppose that F is in $\mathcal{B}_{\textrm{sym}}^0(q)$ and that F is of the form $F=(f_{1}\otimes f_{2}\otimes\dots\otimes f_{q})_{\textrm{sym}}$ , with $f_{1}, f_{2}, \dots, f_{q} \in \mathcal{B}_{\textrm{sym}}^0(1)$ . Let A be in $\sigma(\mathcal{L}_{2})$ . For any $\alpha$ in ]0, 1[, k in [q], and l in $\{0,1,\dots,(q-k-1)_{+}\}$ , we have

\begin{align*} \big|\mathbb{E}(\mathbf{1}_{C_{k,l}(\alpha T)}\mathbf{1}_{A}F\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big))\big| \leq \Vert F\Vert_{\infty}\Gamma_{1}^{q}C_\textrm{tree}(q)\bigg(\frac{1}{\delta}\bigg)^{q}\varepsilon^{q/2} \end{align*}

(for a constant $C_\textrm{tree}(q)$ defined in the proof, and $\Gamma_{1}$ defined in Corollary 3.1), and

\begin{align*} \varepsilon^{-q/2}\mathbb{E}\big(\mathbf{1}_{C_{k,l}(\alpha T)}\mathbf{1}_{A}F\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big)\big) \underset{\varepsilon\rightarrow0}{\longrightarrow}0. \end{align*}

Proof. Let A be in $\sigma(\mathcal{L}_{2})$ . We have

\begin{align*} & \mathbb{E}\big(\mathbf{1}_{C_{k,l}(\alpha T)}\mathbf{1}_{A}F\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big)\big) \\[5pt] & = \mathbb{E}\Bigg(\mathbf{1}_{A}\sum_{f : \mathcal{T}_{2}\rightarrow\mathcal{P}([q])\text{s.t.} \dots} \mathbb{E}\big(F\big(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{A_{u}=f(u),\,\text{for all }u\in\mathcal{T}_{2}}\mid\mathcal{L}_{2},\mathcal{T}_{2},m_{2}\big)\Bigg) \end{align*}

( $\mathcal{P}$ defined in Section 1.5), where we sum on the $f\;:\;\mathcal{T}_{2}\rightarrow\mathcal{P}([q])$ such that

(4.8) \begin{equation} \begin{cases} f(u)=\sqcup_{v\;:\;{\boldsymbol{{m}}}(v)=u}f(v)\qquad \text{for all}\; u \;\text{in} {\mathcal{T}_{2}},\\[5pt] \sum_{u\in S(\alpha T)}\mathbf{1}_{\#f(u)=1}=k\text{ and }\sum_{u\in S(\alpha T)}(\#f(u)-1)=l. \end{cases} \end{equation}

We remind the reader that $\sqcup$ is defined in Section 1.5 (disjoint union), ${\boldsymbol{{m}}}(\!\cdot\!)$ is defined in Section 2.1 (mother), and $S(\dots\!)$ is defined in (4.2). Here, we mean that we sum over the f compatible with a description of tagged fragments.

If $u \in \mathcal{L}_{2}$ and $T_{u}<T$ , then, conditionally on $\mathcal{T}_{2}$ and $m_{2}$ , $B_{T}^{(n_{u})}$ is independent of all the other variables and has the same law as $B_{T-T_{u}}^{(1)}$ ( $T_{u}$ is defined in (4.4), $n_{u}$ in (4.7)). Thus, using Theorem 3.1 and Corollary 3.1, we get, for any $\varepsilon'\in(0,\theta-1)$ , $u\in\mathcal{L}_{2}$ ,

\begin{align*} \big|\mathbb{E}\big(f_{n_{u}}\big(B_{T}^{(n_{u})}\big)|\mathcal{L}_{2},\mathcal{T}_{2},m_{2}\big)\big| & \leq \Gamma_{1}\Vert f_{n_{u}}\Vert_{\infty}\textrm{e}^{-(\theta-\varepsilon')(T-T_{u})_{+}} \\[5pt] & \qquad \text{for }T-T_{u}\notin Z_{0},\text{ where }Z_{0}\text{ is of Lebesgue measure zero.} \end{align*}

Thus, we get

For a fixed $\omega$ and a fixed f, we have

\begin{equation*} \prod_{u\in\mathcal{L}_{2}}\textrm{e}^{-(T-T_{{\boldsymbol{{m}}}(u)})-\log(\delta)} \prod_{u\in\mathcal{T}_{2}\backslash\{0\}}\textrm{e}^{-(\#f(u)-1)(T_{u}-T_{{\boldsymbol{{m}}}(u)})} = \bigg(\frac{1}{\delta}\bigg)^{\#\mathcal{L}_{2}}\exp\bigg(-\int_{0}^{T}a(s)\,\textrm{d} s\bigg), \end{equation*}

where, for all s,

\begin{align*} a(s) &= \sum_{u\in\mathcal{L}_{2}\backslash\{0\}\;:\; T_{{\boldsymbol{{m}}}(u)}\leq s<T}\mathbf{1}_{\#f(u)=1} + \sum_{u\in\mathcal{T}_{2}\backslash\{0\}\;:\; T_{{\boldsymbol{{m}}}(u)}\leq s\leq T_{u}}(\#f(u)-1) \qquad \text{(if $u\in\mathcal{T}_{2}\backslash\mathcal{L}_{2}$, $\mathbf{1}_{\#f(u)=1}=0$)} \\[5pt] &= \sum_{u\in\mathcal{T}_{2}\backslash\{0\}\;:\; T_{{\boldsymbol{{m}}}(u)}\leq s<T}\mathbf{1}_{\#f(u)=1} + \sum_{u\in\mathcal{T}_{2}\backslash\{0\}\;:\; T_{{\boldsymbol{{m}}}(u)}\leq s\leq T_{u}}(\#f(u)-1) \qquad \text{($S(\!\cdot\!)$ defined in (4.2))} \\[5pt] & \geq \sum_{u\in S(s)}\mathbf{1}_{\#f(u)=1}+\sum_{u\in S(s)}(\#f(u)-1). \end{align*}

We observe that, under (4.8),

\begin{align*} a(t) \geq \left\lceil \frac{q}{2}\right\rceil \quad\text{for all }t, \qquad a(\alpha T)\geq k+l, \end{align*}

and, if t is such that $\sum_{u\in S(t)}\mathbf{1}_{\#f(u)=1}=k'$ and $\sum_{u\in S(t)}(\#f(u)-1)=l'$ for some integers $k'$, $l'$, then, for all $s\geq t$ ,

\begin{align*} a(s)\geq k'+\left\lceil \frac{q-k'}{2}\right\rceil . \end{align*}

We observe that, under Assumption 2.3, there exists a constant which bounds $\#\mathcal{T}_{2}$ almost surely (because, for all u in $\mathcal{U}\backslash\{0\}$ , $-\log(\xi_{u})+\log(\xi_{\mathbf{m}(u)})\geq a$ ), and so there exists a constant $C_{tree}(q)$ which bounds $\#\{f\;:\;\mathcal{T}_{2}\rightarrow\mathcal{P}([q])\}$ almost surely. So, we have

(4.9) \begin{align} & \big|\mathbb{E}\big(\mathbf{1}_{A}F(B_{T}^{(1)},B_{T}^{(2)},\dots,B_{T}^{(q)})\big)\big| \nonumber \\[5pt] &\leq \Vert F\Vert_{\infty}\Gamma_{1}^{q} \mathbb{E}\Bigg(\sum_{f\;:\;\mathcal{T}_{2}\rightarrow\mathcal{P}([q])\,\text{s.t.} \dots} \mathbf{1}_{A} \bigg(\frac{1}{\delta}\bigg)^{\#\mathcal{L}_{2}}\textrm{e}^{-\lceil q/2\rceil\alpha T} \exp\bigg\{{-}\bigg(k+\left\lceil \frac{q-k}{2}\right\rceil \bigg)(T-\alpha T)\bigg\}\Bigg) \nonumber \\[5pt] &\leq \Vert F\Vert_{\infty}\Gamma_{1}^{q}C_\textrm{tree}(q) \bigg(\frac{1}{\delta}\bigg)^{q}\textrm{e}^{-\lceil q/2\rceil\alpha T} \exp\bigg\{{-}\bigg(k+\left\lceil \frac{q-k}{2}\right\rceil \bigg)(1-\alpha)T\bigg\}. \end{align}

Since $k\geq1$ , $k+\left\lceil ({q-k})/{2}\right\rceil > {q}/{2}$ , and so we have proved the desired result (remember that $T=-\log\varepsilon$ ).

Remark 4.1. If we replaced Assumption 2.3 by Assumption 2.4, we would have difficulties adapting the above proof. In the second line of (4.9), the $1/\delta$ becomes $\textrm{e}^{T_{u}-T_{{\boldsymbol{{m}}}(u)}}$ . In addition, the tree $\mathcal{T}_{2}$ is no longer a.s. finite. So, the expectation on the second line of (4.9) could certainly be bounded, but for a high price (a lot more computations, maybe assumptions on the tails of $\pi$ , and so on). This is why we stick with Assumption 2.3.

Remember that $L_{t}$ ( $t\geq0$ ) is defined in (4.3).

Lemma 4.2. Let k be an integer, $k\geq q/2$ , and let $\alpha\in[q/(2k),1]$ . Then we have $\mathbb{P}(L_{\alpha T}\geq k)\leq K_{1}(q)\varepsilon^{q/2}$ , where

\begin{align*}K_{1}(q)=\sum_{i\in[q]}\frac{q!}{(q-i)!}\times i^{q-i}.\end{align*}

Let k be an integer, $k>q/2$ , and let $\alpha\in(q/(2k),1)$ . Then

\begin{align*} \varepsilon^{-q/2}\mathbb{P}(L_{\alpha T}\geq k)\underset{\varepsilon\rightarrow0}{\longrightarrow}0. \end{align*}

(We remind the reader that $T=-\log(\varepsilon)$ .)

Proof. Let k be an integer, $k\geq q/2$ , and let $\alpha\in[q/(2k),1]$ . Remember that $S(\!\cdot\!)$ is defined in (4.2). Observe that $\#S(\alpha T)=i$ if and only if $L_{\alpha T}=q-i$ (see (4.3)). We use the decomposition

\begin{align*} \{L_{\alpha T}\geq k\} &= \{L_{\alpha T}\in\{k,k+1,\dots,q-1\}\} \\[5pt] &= \cup_{i\in[q-k]}\{\#S(\alpha T)=i\} \\[5pt] &= \cup_{i\in[q-k]}\cup_{m:[i]\hookrightarrow[q]}(F(i,m)\cap\{\#S(\alpha T)=i\}) \end{align*}

(remember that ‘ $\hookrightarrow$ ’ means we are summing on injections; see Section 1.5), where

\begin{align*} F(i,m)=\{i_{1},i_{2}\in&[i]\text{ with }i_{1}\ne i_{2}\Rightarrow \\[5pt] & \text{there exists } u_{1},u_{2}\in S(\alpha T),u_{1}\ne u_{2},m(i_{1})\in A_{u_{1}},m(i_{2})\in A_{u_{2}}\}. \end{align*}

(To make the above equations easier to understand, observe that if $\#S(\alpha T)=i$ , we have, for each $j\in[i]$ , an index m(j) in $A_{u}$ for some $u\in S(\alpha T)$ , and we can choose m such that we are in the event F(i, m)). Suppose we are in the event F(i, m). For $u\in S(\alpha T)$ and for all j in [i] such that $m(j)\in A_{u}$ , we define (remember $|u|$ and $\mathbf{m}$ are defined in Section 2.1)

\begin{align*} T_{|u|}^{(j)}=-\log(\xi_{u}),\,T_{|u|-1}^{(j)}=-\log(\xi_{{\boldsymbol{{m}}}(u)}),\,\dots,\, T_{1}^{(j)}=-\log(\xi_{{\boldsymbol{{m}}}^{\circ(|u|-1)}(u)}),T_{0}^{(j)}=0, \end{align*}

with $l(j)=|u|$ , $v(j)=u$ . We have

\begin{align*} &\mathbb{P}(L_{\alpha T}\geq k)\leq\sum_{i\in[q-k]}\sum_{m:[i]\hookrightarrow[q]}\mathbb{P}(F(i,m)\cap\{\#S(\alpha T)=i\}) \\[5pt] &=\sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\mathbb{E}\big(\mathbf{1}_{F(i,m)} \mathbb{E}\big(\mathbf{1}_{\#S(\alpha T)=i}\mid F(i,m),\big(T_{p}^{(j)}\big)_{j\in[i],p\in[l(j)]},(v(j))_{j\in[i]}\big)\big) \\[5pt] & \!\!\!\quad \qquad \quad \qquad\text{(below, we sum over the partitions $\mathcal{B}$ of $[q]\backslash m([i])$ into i subsets ${\mathcal{B}_{1},\mathcal{B}_{2},\dots,\mathcal{B}_{i}}$)} \\[5pt] & = \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}}\mathbb{E}\Bigg(\mathbf{1}_{F(i,m)} \mathbb{E}\Bigg(\prod_{j\in[i]}\prod_{r\in\mathcal{B}_{j}}\mathbf{1}_{r\in A_{v(j)}}\mid F(i,m), \big(T_{p}^{(j)}\big)_{j\in[i],p\in[l(j)]},(v(j))_{j\in[i]}\Bigg)\Bigg) \\[5pt] & \;\,\quad\qquad\qquad \qquad \qquad \qquad \qquad \qquad \text{ (as $Y_{1},\dots,Y_{q}$ defined in Section 2.2 are independent)}\end{align*}
\begin{align*} &= \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}} \mathbb{E}\Bigg(\mathbf{1}_{F(i,m)}\prod_{j\in[i]}\prod_{r\in\mathcal{B}_{j}}\mathbb{E}\big(\mathbf{1}_{r\in A_{v(j)}}\mid F(i,m), \big(T_{p}^{(j)}\big)_{j\in[i],p\in[l(j)]},(v(j))_{j\in[i]}\big)\Bigg) \\[5pt] &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad\qquad\;\, \text{ (because of (2.1) and (4.4))} \\[5pt] &= \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}} \mathbb{E}\Bigg(\mathbf{1}_{F(i,m)}\prod_{j\in[i]}\prod_{r\in\mathcal{B}_{j}}\prod_{s=1}^{l(j)} \exp\big({-}T_{s}^{(j)}+T_{s-1}^{(j)}\big)\Bigg) \text{ (as $v(j)\in S(\alpha T)$)} \\[5pt] &\leq \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}} \prod_{j\in[i]}\prod_{r\in\mathcal{B}_{j}}\textrm{e}^{-\alpha T} \\[5pt] &= \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}}\textrm{e}^{-\alpha(q-i)T} \\[5pt] &\leq \sum_{i\in[q-k]}\sum_{m\;:\;[i]\hookrightarrow[q]}\sum_{\mathcal{B}}\textrm{e}^{-k\alpha T} \leq \textrm{e}^{-k\alpha T}\sum_{i\in[q]}\frac{q!}{(q-i)!}i^{q-i}. \end{align*}

If we now suppose that $k>q/2$ and $\alpha\in(q/(2k),1)$, the second statement follows from the bound above, since

\begin{equation*} \exp\bigg(\frac{qT}{2}\bigg)\exp({-}k\alpha T)\underset{T\rightarrow+\infty}{\longrightarrow}0. \end{equation*}
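For concreteness (this evaluation is not needed for the arguments), a direct computation with the formula for $K_{1}$ gives

\begin{align*} K_{1}(2)=\frac{2!}{1!}1^{1}+\frac{2!}{0!}2^{0}=4, \qquad K_{1}(4)=\frac{4!}{3!}1^{3}+\frac{4!}{2!}2^{2}+\frac{4!}{1!}3^{1}+\frac{4!}{0!}4^{0}=148; \end{align*}

the value $K_{1}(4)$ is the constant that appears in the proof of Proposition 5.1 below.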

Immediate consequences of the two lemmas above are the following corollaries.

Corollary 4.1. If q is odd and if $F\in\mathcal{B}_{\textrm{sym}}^{0}(q)$ is of the form $F=(f_{1}\otimes\dots\otimes f_{q})_{\textrm{sym}}$ , then

\begin{align*} \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) \underset{\varepsilon\rightarrow0}{\longrightarrow}0. \end{align*}

( $\mathcal{B}_{\textrm{sym}}^{0}$ and $\mathcal{L}_{1}$ are defined in Section 4.1 .)

Proof. We take $\alpha\in(({q}/{2})\lceil{q}/{2}\rceil^{-1},1)$ . We observe that, for k in [q], t in (0, T),

\begin{align*} \sum_{u\in S(t)}\mathbf{1}_{\#A_{u}=1} = k \Rightarrow \sum_{u\in S(t)}(\#A_{u}-1)\in\{0,1,\dots,(q-k-1)_{+}\}, \end{align*}

and ( $L_{t}$ is defined in (4.3)) $\sum_{u\in S(t)}\mathbf{1}_{\#A_{u}=1}=0\Rightarrow L_{t}\geq\lceil{q}/{2}\rceil$ . So, we can use the decomposition

(4.10) \begin{align} & \varepsilon^{-q/2}\big|\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q}\big)\big| \nonumber \\[5pt] & = \Bigg|\varepsilon^{-q/2}\sum_{k\in[q]}\sum_{l\in\{0,1,\dots,(q-k-1)_{+}\}} \mathbb{E}\big(\mathbf{1}_{C_{k,l}(\alpha T)}\mathbf{1}_{\#\mathcal{L}_{1}=q}F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\big) \nonumber \\[5pt] &\qquad + \varepsilon^{-q/2}\mathbb{E}\big(\mathbf{1}_{L_{\alpha T}\geq\left\lceil q/2\right\rceil }\mathbf{1}_{\#\mathcal{L}_{1}=q} F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\big)\Bigg| \underset{\varepsilon\rightarrow0}{\longrightarrow}0 \end{align}

(by (4.6) and Lemmas 4.1 and 4.2).

( $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are defined in Section 4.1.)

Corollary 4.2. Suppose $F\in\mathcal{B}_\textrm{sym}^{0}(q)$ is of the form $F=(f_{1}\otimes\dots\otimes f_{q})_{\textrm{sym}}$ . Let $A \in \sigma(\mathcal{L}_{2})$ . Then

\begin{align*} \big|\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{A}\big)\big| \leq \Vert F\Vert_{\infty}\varepsilon^{q/2}\bigg\{ K_{1}(q)+\Gamma_{1}^{q}C_{\textrm{tree}}(q) \bigg(\frac{1}{\delta}\bigg)^{q}(q+1)^{2}\bigg\}. \end{align*}

Proof. We get, as in (4.10),

\begin{align*} &\big|\mathbb{E}\big( F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{A}\big)\big| \\[5pt] & \qquad = \Bigg|\mathbb{E}\Bigg(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{A}\Bigg( \mathbf{1}_{L_{\alpha T}\geq q/2}+\sum_{k'\in[q]}\sum_{0\leq l\leq(q-k'-1)_{+}}\mathbf{1}_{C_{k',l}(\alpha T)}\Bigg)\Bigg)\Bigg|\\[5pt] & \;\,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(from Lemmas 4.1 and 4.2)} \\[5pt] & \qquad \leq\Vert F\Vert_{\infty}\varepsilon^{q/2} \Bigg\{K_{1}(q)+\Gamma_{1}^{q}C_{\textrm{tree}}(q)\bigg(\frac{1}{\delta}\bigg)^{q} \sum_{k'\in[q]}1+(q-k'-1)_{+}\Bigg\}, \end{align*}

and $\sum_{k'\in[q]}\big(1+(q-k'-1)_{+}\big)\leq(q+1)^{2}$ (see Section B for a detailed proof).

We now want to find the limit of $\varepsilon^{-q/2}\mathbb{E}\big(\mathbf{1}_{L_{T}\leq q/2}\mathbf{1}_{\#\mathcal{L}_{1}=q}F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\big)$ when $\varepsilon$ goes to 0, for q even. First we need a technical lemma.

For any i, the process $\big(B_{t}^{(i)}\big)$ has a stationary law (see [1, Theorem 3.3, p. 151]). Let $B_{\infty}$ be a random variable having this stationary law $\eta$ (it has already appeared in Section 3). We can always suppose that it is independent of all the other variables.

Fact 4.2. From now on, when we have an $\alpha$ in (0, 1), we suppose that $\alpha T-\log(\delta)<(T+\alpha T)/2$ and $(T+\alpha T)/2-\log(\delta)<T$ (this is true if T is large enough). (The constant $\delta$ is defined in Assumption 2.3.)

Lemma 4.3. Let $f_{1} , f_{2}$ be in $\mathcal{B}_\textrm{sym}^{0}(1)$ . Let $\alpha$ belong to (0, 1), and $\varepsilon'$ belong to $(0,\theta-1)$ ( $\theta$ is defined in Fact 3.1). We have

(4.11) \begin{equation} \int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v} \big|\mathbb{E}\big(f_{1}\big(\overline{B}_{0}^{(1),v}\big)f_{2}\big(\overline{B}_{0}^{(2),v}\big)\big)\big|\,\textrm{d} v \lt \infty , \end{equation}

and, almost surely, for T large enough,

\begin{align*} & \bigg|\textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\mathbb{E}\big(f_{1}\otimes f_{2}\big(B_{T,}^{(1)}B_{T}^{(2)}\big) \mathbf{1}_{G_{1,2}(T)^{\textrm{c}}}\mid\mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) \\[5pt] & \qquad\qquad\qquad -\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v}\mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}} f_{1}\big(\overline{B}_{0}^{(1),v}\big)f_{2}\big(\overline{B}_{0}^{(2),v}\big)\big)\,\textrm{d} v\bigg| \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad \leq \Gamma_{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty} \exp\bigg({-}(T-\alpha T)\bigg(\frac{\theta-\varepsilon'-1}{2}\bigg)\bigg), \end{align*}

where

\begin{align*} \Gamma_{2}=\frac{\Gamma_{1}^{2}}{\delta^{2+2(\theta-\varepsilon')}(2(\theta-\varepsilon')-1)} + \frac{\Gamma_{1}}{\delta^{\theta-\varepsilon'}} + \frac{\Gamma_{1}^{2}}{\delta^{2(\theta-\varepsilon')}(2(\theta-\varepsilon')-1)}. \end{align*}

(The processes $B^{(1)}$ , $B^{(2)}$ , $\overline{B}^{(1),v}$ , and $\overline{B}^{(2),v}$ are defined in Sections 2.3, 2.4, and 2.6 .)

Proof. We have, for all s in $\big[\alpha T+B_{\alpha T}^{(1)},T\big]$ (because of (2.1) and (4.4)),

\begin{align*} \mathbb{P}\big(u\{s,2\}=u\{s,1\}\mid\mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big) = \exp\big[{-}\big(s+B_{s}^{(1)}-\big(\alpha T+B_{\alpha T}^{(1)}\big)\big)\big] \end{align*}

(we remind the reader that $u\{s,1\}$ , $G_{1,2}$ , are defined in Section 4.1, below (4.3)). Let us introduce the breaking time $\tau_{1,2}$ between 1 and 2 as a random variable having the following property: conditionally on $\mathcal{F}_{S(\alpha T)}$ , $G_{\alpha T}$ , and $\big(S_{j}^{(1)}\big)_{j\geq1}$ , $\tau_{1,2}$ has the density

\begin{align*} s\in\mathbb{R}\mapsto\mathbf{1}_{[\alpha T+B_{\alpha T}^{(1)},+\infty)}(s)\textrm{e}^{-(s-(\alpha T+B_{\alpha T}^{(1)}))} \end{align*}

(this is a translation of an exponential law). We have the equalities $\alpha T+B_{\alpha T}^{(1)}=S_{j_{0}}^{(1)}$ for some $j_{0}$ , and $T+B_{T}^{(1)}=S_{i_{0}}^{(1)}$ for some $i_{0}$ . Here, we need to comment on the definitions of Section 4.1. In Fig. 6 we have $-\log(\xi_{(1,2)})=S_{2}^{(1)}$ (as in (4.5)), $S(S_{2}^{(1)})=\{(1,2,1),(1,1,1),(2,1)\}$ , and $u\{-\!\log(\xi_{(1,2)}),1\}=\{(1,2,1)\}$ . It is important to understand this example before reading what follows. The breaking time $\tau_{1,2}$ has the following interesting property (for all $k\geq j_{0}$ ):

\begin{align*} \mathbb{P}\big(u\big\{S_{k}^{(1)},2\big\} \neq u\big\{S_{k}^{(1)},1\big\} & \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big) \\[5pt] & = \mathbb{P}\big(\tau_{1,2}\in\big[\alpha T+B_{\alpha T}^{(1)},S_{k}^{(1)}\big] \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big). \end{align*}

Since we are free to do so, we impose, for all $k\geq j_{0}$, conditionally on $\mathcal{F}_{S(\alpha T)}$, $G_{\alpha T}$, and $\big(S_{j}^{(1)}\big)_{j\geq1}$,

\begin{align*} \big\{u\big\{S_{k}^{(1)},2\big\} \neq u\big\{S_{k}^{(1)},1\big\}\big\} = \big\{\tau_{1,2}\in\big[\alpha T+B_{\alpha T}^{(1)},S_{k}^{(1)}\big]\big\}. \end{align*}

We observe that, for all v in $\big[\alpha T+B_{\alpha T}^{(1)},T+B_{T}^{(1)}\big]$,

\begin{align*} \qquad\qquad\qquad \mathbb{E}\big(f_{1}\otimes f_{2}\big(B_{T}^{(1)},&B_{T}^{(2)}\big)\mathbf{1}_{G_{1,2}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1},\tau_{1,2}=v\big) \\[5pt] &= \mathbb{E}\big(f_{1}\otimes f_{2}\big(\widehat{B}_{T}^{(1),v},\widehat{B}_{T}^{(2),v}\big) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big) \\[5pt] &\;\,\qquad\quad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad \text{(because of (2.7)).} \end{align*}

And so,

\begin{align*} & \mathbb{E}\big(f_{1}\otimes f_{2}\big(B_{T,}^{(1)}B_{T}^{(2)}\big)\mathbf{1}_{G_{1,2}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) \\[5pt] & =\mathbb{E}\big(\mathbb{E}\big(f_{1}\otimes f_{2}\big(B_{T,}^{(1)}B_{T}^{(2)}\big)\mathbf{1}_{G_{1,2}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) \\[5pt] & = \mathbb{E}\big(\mathbb{E}\big(\mathbb{E}\big(f_{1}\otimes f_{2}\big(B_{T,}^{(1)}B_{T}^{(2)}\big)\mathbf{1}_{G_{1,2}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1},\tau_{1,2}\big) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\big) \\[5pt] &\qquad \qquad \qquad \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \!\!\!\qquad\qquad \text{(keep in mind that $\widehat{B}^{(1),v}=B^{(1)}$ for all v)}\end{align*}
\begin{align*}& = \mathbb{E}\bigg(\!\mathbb{E}\bigg(\!\int_{\alpha T+B_{\alpha T}^{(1)}}^{T+B_{T}^{(1)}} \!\textrm{e}^{-\big(v-\alpha T-\widehat{B}_{\alpha T}^{(1),v}\big)} \mathbb{E}\big(f_{1}\!\otimes f_{2}\big(B_{T,}^{(1)}B_{T}^{(2)}\big)\mathbf{1}_{G_{1,2}(T)^{\textrm{c}}}\! \mid \mathcal{F}_{S(\alpha T)},\!G_{\alpha T},\!\big(S_{j}^{(1)}\big)_{j\geq1},\!\tau_{1,2}=v\big)\,\textrm{d} v \\[5pt] & \qquad \qquad \qquad \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\bigg) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg) \\[5pt] & = \mathbb{E}\bigg(\mathbb{E}\bigg(\int_{\alpha T+B_{\alpha T}^{(1)}}^{T+B_{T}^{(1)}} \textrm{e}^{-(v-\alpha T-\widehat{B}_{\alpha T}^{(1),v})}\mathbb{E}\bigg(f_{1}\big(\widehat{B}_{T}^{(1),v}\big) f_{2}\big(\widehat{B}_{T}^{(2),v}\big) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\bigg)\,\textrm{d} v \\[5pt] & \qquad \qquad \qquad \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T},\big(S_{j}^{(1)}\big)_{j\geq1}\bigg) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg) \\[5pt] & = \mathbb{E}\bigg(\int_{\alpha T+B_{\alpha T}^{(1)}}^{T+B_{T}^{(1)}}\textrm{e}^{-(v-\alpha T-\widehat{B}_{\alpha T}^{(1),v})} f_{1}\big(\widehat{B}_{T}^{(1),v}\big)f_{2}\big(\widehat{B}_{T}^{(2),v}\big)\,\textrm{d} v \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg) . \end{align*}

Let us split the above integral into two parts and multiply them by $\textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}$ . For the first part:

(4.12) \begin{align} & \bigg|\textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\mathbb{E}\bigg(\int_{\alpha T+B_{\alpha T}^{(1)}}^{(T+\alpha T)/2} \textrm{e}^{-\big(v-\alpha T-\widehat{B}_{\alpha T}^{(1),v}\big)}f_{1}\big(\widehat{B}_{T}^{(1),v}\big) f_{2}\big(\widehat{B}_{T}^{(2),v}\big)\,\textrm{d} v \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg)\bigg| \nonumber \\[5pt] &= \textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\bigg|\mathbb{E}\bigg(\int_{\alpha T+B_{\alpha T}^{(1)}}^{(T+\alpha T)/2} \!\textrm{e}^{-\big(v-\alpha T-\widehat{B}_{\alpha T}^{(1),v}\big)} \mathbb{E}\big(f_{1}\big(\widehat{B}_{T}^{(1),v}\big)f_{2}\big(\widehat{B}_{T}^{(2),v}\big)\! \mid\! \widehat{B}_{v}^{(1),v},\widehat{B}_{v}^{(2),v},\mathcal{F}_{S(\alpha T)},\!G_{\alpha T}\big)\textrm{d} v\nonumber \\[5pt] &\qquad \qquad \qquad \qquad \mid\mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg)\bigg| \nonumber \\[5pt] & \qquad\qquad\qquad\qquad \qquad\;\, \text{(using the fact that $\widehat{B}_{T}^{(1),v}$ and $\widehat{B}_{T}^{(2),v}$ are independent conditionally on} \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\text{$\big\{\widehat{B}_{v}^{(1),v},\widehat{B}_{v}^{(2),v},\mathcal{F}_{S(\alpha T)},G_{\alpha T}\big\}$ if $T\geq v-\log(\delta)$, we get,} \nonumber \\[5pt] & \;\;\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \!\!\!\!\qquad\quad\qquad \text{by Theorem 3.1, Corollary 3.1, and Fact 4.2)} \nonumber \\[5pt] & \leq \textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}} \mathbb{E}\bigg(\int_{\alpha T+B_{\alpha T}^{(1)}}^{(T+\alpha T)/2} \textrm{e}^{-(v-\alpha T-\widehat{B}_{\alpha T}^{(1)})} \nonumber \\[5pt] & \qquad \qquad \qquad \big(\Gamma_{1}\Vert f_{1}\Vert_{\infty} \textrm{e}^{-(\theta-\varepsilon')(T-v-\widehat{B}_{v}^{(1),v})_{+}} \Gamma_{1}\Vert f_{2}\Vert_{\infty} \textrm{e}^{-(\theta-\varepsilon')(T-v-\widehat{B}_{v}^{(2),v})_{+}}\big)\,\textrm{d} v \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg) \nonumber \\[5pt] & \,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{(using Assumption 2.3)} \nonumber \\[5pt] &\leq\Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}\textrm{e}^{(T-\alpha T-\log(\delta))} \int_{\alpha T}^{(T+\alpha T)/2}\textrm{e}^{-(v-\alpha T+\log(\delta))} \textrm{e}^{-2(\theta-\varepsilon')(T-v+\log(\delta))}\,\textrm{d} v \nonumber \\[5pt] & = \frac{\Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}}{\delta^{2+2(\theta-\varepsilon')}} \textrm{e}^{(T-2(\theta-\varepsilon')T)}\bigg[ \frac{\textrm{e}^{(2(\theta-\varepsilon')-1)v}}{2(\theta-\varepsilon')-1}\bigg]_{\alpha T}^{(T+\alpha T)/2} \nonumber \\[5pt] & \leq \frac{\Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}}{\delta^{2+2(\theta-\varepsilon')}} \frac{\exp(\!-\!(2(\theta-\varepsilon')-1)T+(2(\theta-\varepsilon')-1){(T+\alpha T)}/{2})}{2(\theta-\varepsilon')-1} \nonumber \\[9pt] & = \frac{\Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}}{\delta^{2+2(\theta-\varepsilon')}} \frac{\exp(\!-\!(2(\theta-\varepsilon')-1)({(T-\alpha T)}/{2}))}{2(\theta-\varepsilon')-1}. \end{align}

For the second part, from which we subtract the term $(\heartsuit)$ introduced below:

(4.13) \begin{align} & \left| \begin{array}{c} \underbrace{\textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\mathbb{E}\bigg(\int_{(T+\alpha T)/2}^{T+B_{T}^{(1)}} \textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})}f_{1}\big(\widehat{B}_{T}^{(1),v}\big) f_{2}\big(\widehat{B}_{T}^{(2),v}\big) \, \textrm{d} v \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg)} \\[7pt] \text{ second part} \end{array}\right. \nonumber \\[7pt] & \left. \qquad \qquad \qquad \qquad -\,\begin{array}{c} \underbrace{\int_{(T+\alpha T)/2}^{T-\log(\delta)}\textrm{e}^{-(v-T)}\mathbb{E}\bigg(\mathbf{1}_{v\leq T+\overline{B}_{T}^{(1),v}} f_{1}\big(\overline{B}_{T}^{(1),v}\big)f_{2}\big(\overline{B}_{T}^{(2),v}\big)\bigg)\,\textrm{d} v} \\[7pt] ({ \heartsuit}) \end{array}\right| \nonumber \\[7pt] & = \bigg|\textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}} \mathbb{E}\bigg(\int_{(T+\alpha T)/2}^{T-\log(\delta)} \textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})}\mathbf{1}_{v\leq T+B_{T}^{(1)}}f_{1}\big(\widehat{B}_{T}^{(1),v}\big) f_{2}\big(\widehat{B}_{T}^{(2),v}\big) \, \textrm{d} v \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\bigg) \nonumber \\[7pt] & \qquad - \textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\mathbb{E}\bigg(\int_{(T+\alpha T)/2}^{T-\log(\delta)} \textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})}\mathbf{1}_{v\leq T+\overline{B}_{T}^{(1),v}}f_{1}\big(\overline{B}_{T}^{(1),v}\big) f_{2}\big(\overline{B}_{T}^{(2),v}\big)\,\textrm{d} v\bigg)\bigg| \nonumber \\[7pt] & = \textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\bigg|\int_{(T+\alpha T)/2}^{T-\log(\delta)} \textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})}\mathbb{E}\big(\mathbb{E}\big(\mathbf{1}_{v\leq T+B_{T}^{(1)}} f_{1}\big(\widehat{B}_{T}^{(1),v}\big)f_{2}(\widehat{B}_{T}^{(2),v}) \nonumber \\[7pt] & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mid \widehat{C}_{v}^{(1),v},\mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\big)\,\textrm{d} v \nonumber \\[7pt] & \qquad \qquad -\int_{(T+\alpha T)/2}^{T-\log(\delta)}\textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})} \mathbb{E}\big(\mathbb{E}\big(\mathbf{1}_{v\leq T+\overline{B}_{T}^{(1),v}}f_{1}\big(\overline{B}_{T}^{(1),v}\big) f_{2}\big(\overline{B}_{T}^{(2),v}\big) \mid \overline{C}_{v}^{(1),v}\big)\big)\,\textrm{d} v\bigg|. \end{align}

We observe that, for all v in $[(T+\alpha T)/2,T-\log(\delta)]$, once $\widehat{C}_{v}^{(1),v}$ is fixed, we can simulate $\widehat{B}_{T}^{(1),v}=B_{T}^{(1)}$ and $\widehat{B}_{T}^{(2),v}$ (these processes are independent of $\mathcal{F}_{S(\alpha T)},G_{\alpha T}$ conditionally on $\widehat{C}_{v}^{(1),v}$). Indeed, we draw $\widehat{B}_{v}^{(1),v}$ conditionally on $\widehat{C}_{v}^{(1),v}$ (with law $\eta_{1}\big(\,\cdot\mid\widehat{C}_{v}^{(1),v}\big)$ defined in Fact 2.2), then we draw $\widehat{B}_{v}^{(2),v}$ conditionally on $\widehat{B}_{v}^{(1),v}$ and $\widehat{C}_{v}^{(1),v}$ (with law $\eta'\big(\,\cdot\mid\widehat{B}_{v}^{(1),v},\widehat{C}_{v}^{(1),v}\big)$, see Fact 2.2). Then, $\big(\widehat{B}_{t}^{(1),v}\big)_{t\geq v}$ and $\big(\widehat{B}_{t}^{(2),v}\big)_{t\geq v}$ run their courses as independent Markov processes, until we get $\widehat{B}_{T}^{(1),v}$, $\widehat{B}_{T}^{(2),v}$.

In the same way (for all v in $[(T+\alpha T)/2,T-\log(\delta)]$ ), we observe that the process $\big(\overline{C}^{(1),v},\overline{B}^{(1),v}\big)$ starts at time $v-2b$ and has the same transition as $\big(C^{(1)},B^{(1)}\big)$ (see (2.6)). By Assumption 2.1, the following time exists: $S=\sup\big\{t\;:\; v-b\leq t\leq v\,,\,\overline{C}_{t}^{(1),v}=0\big\}$ . We then have $v-S=\overline{C}_{v}^{(1),v}$ . When $\overline{C}_{v}^{(1),v}$ is fixed, this entails that $\overline{B}_{v}^{(1),v}$ has the law $\eta_{1}\big(\,\cdot\mid\overline{C}_{v}^{(1),v}\big)$ . We have $\overline{B}_{v}^{(2),v}$ of law $\eta'\big(\,\cdot\mid\overline{C}_{v}^{(1),v},\overline{B}_{v}^{(1),v}\big)$ (by (2.10)). As before, we then let the process $\big(\overline{B}_{t}^{(1),v},\overline{B}_{t}^{(2),v}\big)_{t\geq v}$ run its course as a Markov process having the same transition as $\big(\widehat{B}_{t-v+kb}^{(1),kb},\widehat{B}_{t-v+kb}^{(2),kb}\big)_{t\geq v}$ until we get $\overline{B}_{T}^{(1),v}$ , $\overline{B}_{T}^{(2),v}$ .

So we get that (for all v in $[(T+\alpha T)/2,T-\log(\delta)]$ )

\begin{align*} & \mathbb{E}\big(\mathbf{1}_{v\leq T+B_{T}^{(1)}}f_{1}\big(\widehat{B}_{T}^{(1),v}\big) f_{2}\big(\widehat{B}_{T}^{(2),v}\big) \mid \widehat{C}_{v}^{(1),v},\mathcal{F}_{S(\alpha T)},G_{\alpha T}\big) = \Psi\big(\widehat{C}_{v}^{(1),v}\big), \\[5pt] & \mathbb{E}\big(\mathbf{1}_{v\leq T+\overline{B}_{T}^{(1),v}}f_{1}\big(\overline{B}_{T}^{(1),v}\big) f_{2}\big(\overline{B}_{T}^{(2),v}\big)\mid\overline{C}_{v}^{(1),v}\big) = \Psi\big(\overline{C}_{v}^{(1),v}\big) \overset{\text{law}}{=}\Psi(C_{\infty}) \end{align*}

for some function $\Psi$, the same in both lines, such that $\Vert\Psi\Vert_{\infty}\leq\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}$ (where $C_{\infty}$ is defined in Section 3). So, by Theorem 3.1 and Corollary 3.1 applied on the time interval $\big[\alpha T+B_{\alpha T}^{(1)},v\big]$, the quantity in (4.13) can be bounded (remember that $\widehat{C}^{(1),v}=C^{(1)}$, Section 2.5) by

\begin{align*} \textrm{e}^{T-\alpha T-B_{\alpha T}^{(1)}}\int_{(T+\alpha T)/2}^{T-\log(\delta)} \textrm{e}^{-(v-\alpha T-B_{\alpha T}^{(1)})}\Gamma_{1}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty} \textrm{e}^{-(\theta-\varepsilon')(v-\alpha T-B_{\alpha T}^{(1)})}\,\textrm{d} v . \end{align*}

(Coming from Corollary 3.1 there is an integral over a set of Lebesgue measure zero in the above bound, but this term vanishes.) The above bound can in turn be bounded by

(4.14) \begin{align} \qquad\qquad& \Gamma_{1}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}\delta^{-(\theta-\varepsilon')}\textrm{e}^{T} \int_{(T+\alpha T)/2}^{T-\log(\delta)}\textrm{e}^{(\theta-\varepsilon')\alpha T}\textrm{e}^{-(\theta-\varepsilon'+1)v}\,\textrm{d} v \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad\quad \text{(as $\theta-\varepsilon'+1>1$)} \nonumber \\[5pt] & \leq\Gamma_{1}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}\delta^{-(\theta-\varepsilon')} \textrm{e}^{T+\alpha T(\theta-\varepsilon')} \exp\bigg[{-}(\theta-\varepsilon'+1)\bigg(\frac{T+\alpha T}{2}\bigg)\bigg] \nonumber \\[5pt] & = \Gamma_{1}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}\delta^{-(\theta-\varepsilon')} \exp\bigg[{-}(\theta-\varepsilon'-1)\bigg(\frac{T-\alpha T}{2}\bigg)\bigg]. \end{align}

We have

(4.15) \begin{align} &\int_{\frac{T+\alpha T}{2}}^{T-\log(\delta)}\textrm{e}^{-(v-T)} \mathbb{E}\big(\mathbf{1}_{v\leq T+\overline{B}_{T}^{(1),v}}f_{1}\big(\overline{B}_{T}^{(1),v}\big) f_{2}\big(\overline{B}_{T}^{(2),v}\big)\big) \, \textrm{d} v \nonumber \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\, \text{(as $\big(\overline{B}_{T}^{(1),v},\overline{B}_{T}^{(2),v}\big)$ and $\big(\overline{B}_{0}^{(1),v-T},\overline{B}_{0}^{(2),v-T}\big)$ have the same law)} \nonumber \\[5pt] &= \int_{\frac{T+\alpha T}{2}}^{T-\log(\delta)}\textrm{e}^{-(v-T)} \mathbb{E}\big(\mathbf{1}_{v-T\leq\overline{B}_{0}^{(1),v-T}}f_{1}\big(\overline{B}_{0}^{(1),v-T}\big) f_{2}\big(\overline{B}_{0}^{(2),v-T}\big)\big) \, \textrm{d} v \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\quad\qquad\qquad\qquad\qquad\qquad \text{(change of variable $v'=v-T$)} \nonumber \\[5pt] &= \mathbb{E}\bigg(\int_{-\left(\frac{T-\alpha T}{2}\right)}^{-\log(\delta)}\textrm{e}^{-v'} \mathbf{1}_{v'\leq\overline{B}_{0}^{(1),v'}}f_{1}\big(\overline{B}_{0}^{(1),v'}\big) f_{2}\big(\overline{B}_{0}^{(2),v'}\big)\,\textrm{d} v'\bigg) \end{align}

and

(4.16) \begin{align} & \int_{-\infty}^{-\frac{(T-\alpha T)}{2}}\textrm{e}^{-v} \big|\mathbb{E}\big(f_{1}\big(\overline{B}_{0}^{(1),v}\big) f_{2}\big(\overline{B}_{0}^{(2),v}\big)\big)\big|\,\textrm{d} v \nonumber \\[5pt] & \qquad\quad\text{(since $\overline{B}_{0}^{(1),v}$ and $\overline{B}_{0}^{(2),v}$ are independent conditionally on $\overline{B}_{v}^{(1),v}$, $\overline{B}_{v}^{(2),v}$ if $v-\log(\delta)\leq0$;} \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad\text{ using Theorem 3.1 and Corollary 3.1)} \nonumber \\[5pt] & \leq \int_{-\infty}^{-\frac{(T-\alpha T)}{2}}\textrm{e}^{-v}\Gamma_{1}^{2} \Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty} \mathbb{E}\big(\textrm{e}^{-(\theta-\varepsilon')(-v-\overline{B}_{v}^{(1),v})_{+}} \textrm{e}^{-(\theta-\varepsilon')(-v-\overline{B}_{v}^{(2),v})_{+}}\big)\,\textrm{d} v \nonumber \\[5pt] &\qquad\qquad\qquad\qquad\qquad\!\!\quad\text{ (again, coming from Corollary 3.1 there is an integral over a set of}\nonumber \\ &\qquad\qquad\qquad\qquad\qquad\quad\!\!\text{ Lebesgue measure zero in the above bound, but this term vanishes)} \nonumber \\[5pt] &\leq \int_{-\infty}^{-\frac{(T-\alpha T)}{2}}\textrm{e}^{-v} \Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty} \textrm{e}^{-2(\theta-\varepsilon')(-v+\log(\delta))} \, \textrm{d} v \nonumber \\[5pt] & = \frac{\Gamma_{1}^{2}\Vert f_{1}\Vert_{\infty}\Vert f_{2}\Vert_{\infty}}{\delta^{2(\theta-\varepsilon')}} \frac{\exp(\!-\!(2(\theta-\varepsilon')-1){(T-\alpha T)}/{2})}{2(\theta-\varepsilon')-1}. \end{align}

Equations (4.15) and (4.16) give us (4.11). Equations (4.12) and (4.14)–(4.16) give us the desired result: (4.12) bounds the first part, (4.14) bounds the difference between the second part and $(\heartsuit)$, (4.15) rewrites $(\heartsuit)$ as an integral over $[-(T-\alpha T)/2,-\log(\delta)]$, and (4.16) bounds the missing tail of that integral.

Lemma 4.4. Let $k \in \{0,1,2,\dots,p\}$ . We suppose that q is even and $q=2p$ . Let $\alpha\in(q/(q+2),1)$ . We suppose that $F=f_{1}\otimes f_{2}\otimes\dots\otimes f_{q}$ , with $f_{1}, \ldots, f_{q}$ in $\mathcal{B}_\textrm{sym}^{0}(1)$ . Then,

(4.17) \begin{align} \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,&B_{T}^{(q)}\big)\mathbf{1}_{G_{\alpha T}}\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) \nonumber \\[5pt] & \underset{\varepsilon\rightarrow0}{\longrightarrow}\prod_{i=1}^{p}\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v} \mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}}f_{2i-1}\big(\overline{B}_{0}^{(1),v}\big) f_{2i}\big(\overline{B}_{0}^{(2),v}\big)\big)\,\textrm{d} v. \end{align}

(Remember that $T=-\log\varepsilon$ .)

Proof. By Fact 4.1, we have $T>\alpha T-\log(\delta)$ . We have (remember the definitions just before Section 4.2)

\begin{align*} G_{\alpha T}\cap\{\#\mathcal{L}_{1}=q\} = G_{\alpha T}\cap\underset{1\leq i\leq p}{\bigcap}G_{2i-1,2i}(T)^{\textrm{c}}. \end{align*}

We have (remember $T=-\log(\varepsilon)$ )

(4.18) \begin{align} & \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T,}^{(1)}\dots,B_{T}^{(q)}\big) \mathbf{1}_{G_{\alpha T}}\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) \nonumber \\[5pt] & = \textrm{e}^{pT}\mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\mathbb{E}\Bigg(\prod_{i=1}^{p}f_{2i-1}\otimes f_{2i}\big(B_{T}^{(2i-1)},B_{T}^{(2i)}\big)\mathbf{1}_{G_{2i-1,2i}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{\alpha T}\Bigg)\Bigg) \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\!\!\! \text{(as $\big(B_{T}^{(1)},B_{T}^{(2)},\mathbf{1}_{G_{1,2}(T)}\big)$, $\big(B_{T}^{(3)},B_{T}^{(4)},\mathbf{1}_{G_{3,4}(T)}\big)$,} \dots \text{are independent} \nonumber \\[5pt] & \;\;\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{conditionally on $\mathcal{F}_{S(\alpha T)},G_{\alpha T}$ due to Fact 4.2)} \nonumber \\[5pt] &= \mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{T}\mathbb{E}\big(f_{2i-1}\otimes f_{2i}\big(B_{T}^{(2i-1)},B_{T}^{(2i)}\big)\mathbf{1}_{G_{2i-1,2i}(T)^{\textrm{c}}} \mid \mathcal{F}_{S(\alpha T)},G_{2i-1,2i}(\alpha T)\big)\Bigg) \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\text{(by Lemma 4.3 and as $\big(B^{(1)},\dots,B^{(q)}\big)$ is exchangeable)} \nonumber \\[5pt] &= \mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{\alpha T+B_{\alpha T}^{(2i-1)}} \bigg(\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v} \mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}}f_{2i-1}\big(\overline{B}_{0}^{(1),v}\big) f_{2i}\big(\overline{B}_{0}^{(2),v}\big)\big) \, \textrm{d} v + R_{2i-1,2i}\bigg)\Bigg), \end{align}

with (a.s.)

(4.19) \begin{equation} |R_{2i-1,2i}| \leq \Gamma_{2}\Vert f_{2i-1}\Vert_{\infty}\Vert f_{2i}\Vert_{\infty} \textrm{e}^{-(T-\alpha T){(\theta-\varepsilon'-1)}/{2}}. \end{equation}

We introduce the events (for $t\in[0,T]$ , with $u\{\cdot\}$ defined below (4.3))

\begin{align*} O_{t}=\{ \#\{u\{t,2i-1\},1\leq i\leq p\}=p\}, \end{align*}

and the $\sigma$-algebras (for i in [q], $t\in[0,T]$) $\mathcal{F}_{t,i}=\sigma(u\{t,i\},\xi_{u\{t,i\}})$. As $G_{\alpha T}=O_{\alpha T}\cap\bigcap_{1\leq i\leq p}\{u\{\alpha T,2i-1\}=u\{\alpha T,2i\}\}$, we have

(4.20) \begin{align} \qquad\qquad & \mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{B_{\alpha T}^{(2i-1)}+\alpha T}\Bigg) \nonumber \\[5pt] &= \mathbb{E}\Bigg(\mathbf{1}_{O_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{B_{\alpha T}^{(2i-1)}+\alpha T} \mathbb{E}\Bigg(\prod_{i=1}^{p}\mathbf{1}_{u\{\alpha T,2i-1\}=u\{\alpha T,2i\}} \mid \vee_{1\leq i\leq p}\mathcal{F}_{\alpha T,2i-1}\Bigg)\Bigg) \nonumber \\[5pt] & \;\;\;\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(by Proposition 2.1 and (2.1))} \nonumber \\[5pt] &= \mathbb{E}(\mathbf{1}_{O_{\alpha T}}). \end{align}

We then observe that

\begin{align*} O_{\alpha T}^{\textrm{c}}=\cup_{i\in[p]}\cup_{j\in[p],j\neq i}\{u\{\alpha T,2i-1\}=u\{\alpha T,2j-1\}\}, \end{align*}

and, for $i\neq j$ ,

\begin{align*} \qquad \mathbb{P}(u\{\alpha T,2i-1\}=u\{\alpha T,2j-1\}) & = \mathbb{E}\big(\mathbb{E}\big(\mathbf{1}_{u\{\alpha T,2i-1\}=u\{\alpha T,2j-1\}}\mid\mathcal{F}_{\alpha T,2i-1}\big)\big) \\[5pt] &= \mathbb{E}\big(\textrm{e}^{-\alpha T-B_{\alpha T}^{(2i-1)}}\big) \quad\qquad\text{(by Proposition 2.1 and (2.1))} \\[5pt] &\leq \textrm{e}^{-\alpha T-\log(\delta)}. \,\qquad\qquad \text{(because of Assumption (2.3))} \end{align*}

So, $\mathbb{P}(O_{\alpha T})\underset{\varepsilon\rightarrow0}{\longrightarrow}1$. This gives us enough material to finish the proof of (4.17).

Indeed, starting from (4.18), we have

\begin{align*} & \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{G_{\alpha T}}\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) \\[5pt] & = \mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{\alpha T+B_{\alpha T}^{(2i-1)}}\Bigg) \prod_{i=1}^{p}\bigg(\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v}\mathbb{E}(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}} f_{2i-1}\big(\overline{B}_{0}^{(1),v}\big)f_{2i}\big(\overline{B}_{0}^{(2),v}\big))\,\textrm{d} v\bigg) \\[5pt] & \quad +\mathbb{E}\Bigg(\mathbf{1}_{G_{\alpha T}}\prod_{i=1}^{p}\textrm{e}^{\alpha T+B_{\alpha T}^{(2i-1)}}R_{2i-1,2i}\Bigg) \;=\!:\;(\textrm{I})+(\textrm{II}). \end{align*}

By (4.20),

\begin{align*} (\textrm{I}) &= \mathbb{P}(O_{\alpha T})\prod_{i=1}^{p}\bigg(\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v} \mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}}f_{2i-1}\big(\overline{B}_{0}^{(1),v}\big) f_{2i}\big(\overline{B}_{0}^{(2),v}\big)\big)\,\textrm{d} v\bigg) \\[5pt] & \qquad \underset{\varepsilon\rightarrow0}{\longrightarrow}\prod_{i=1}^{p}\bigg(\int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v} \mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}}f_{2i-1}\big(\overline{B}_{0}^{(1),v}\big) f_{2i}\big(\overline{B}_{0}^{(2),v}\big)\big)\,\textrm{d} v\bigg). \end{align*}

And, by (4.19) and (4.20),

\begin{equation*} |(\textrm{II})| \leq \mathbb{P}(O_{\alpha T})\prod_{i=1}^{p}\big( \Gamma_{2}\Vert f_{2i-1}\Vert_{\infty}\Vert f_{2i}\Vert_{\infty} \textrm{e}^{-(T-\alpha T){(\theta-\varepsilon'-1)}/{2}}\big)\underset{\varepsilon\rightarrow0}{\longrightarrow}0. \end{equation*}

4.3. Convergence result

For f and g bounded measurable functions, we set

(4.21) \begin{equation} V(f,g) = \int_{-\infty}^{-\log(\delta)}\textrm{e}^{-v}\mathbb{E}\big(\mathbf{1}_{v\leq\overline{B}_{0}^{(1),v}} f\big(\overline{B}_{0}^{(1),v}\big)g\big(\overline{B}_{0}^{(2),v}\big)\big)\,\textrm{d} v.\end{equation}

For q even, we set $\mathcal{I}_{q}$ to be the set of partitions of [q] into subsets of cardinality 2. We have

(4.22) \begin{equation} \#\mathcal{I}_{q}=\frac{q!}{\left(q/2\right)!2^{q/2}}.\end{equation}
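For instance (a direct enumeration, given only for illustration), for $q=4$, (4.22) gives $\#\mathcal{I}_{4}=4!/(2!\,2^{2})=3$, the three pairings being $\{\{1,2\},\{3,4\}\}$, $\{\{1,3\},\{2,4\}\}$, and $\{\{1,4\},\{2,3\}\}$.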

For I in $\mathcal{I}_{q}$ and t in [0, T], we introduce

\begin{align*} G_{t,I}=\{\text{for all }\{i,j\}\in I,\text{ there exists }u\in\mathcal{U}\text{ such that } \xi_{u} \lt \textrm{e}^{-t},\,\xi_{{\boldsymbol{{m}}}(u)}\geq \textrm{e}^{-t},\,A_{u}=\{i,j\}\}.\end{align*}

For t in [0, T], we define $\mathcal{P}_{t}=\cup_{I\in\mathcal{I}_{q}}G_{t,I}$ . The above event can be understood as ‘at time t, the dots are paired on different fragments’. As before, the reader has to keep in mind that $T=-\log(\varepsilon)$ , see (2.3).

Proposition 4.1. Let q be in $\mathbb{N}^{*}$ . Let $F=(f_{1}\otimes\dots\otimes f_{q})_{\textrm{sym}}$ with $f_{1}, \dots , f_{q}$ in $\mathcal{B}_{\textrm{sym}}^0(1)$ ( $(\!\cdot\!)_{\textrm{sym}}$ defined in (4.1)). If q is even ( $q=2p$ ) then

(4.23) \begin{equation} \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) \underset{\varepsilon\rightarrow0}{\longrightarrow}\sum_{I\in\mathcal{I}_{q}}\prod_{\{a,b\}\in I} V(f_{a},f_{b}). \end{equation}

Proof. Let $\alpha$ be in $(q/(q+2),1)$ . We have

\begin{equation*} \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) = \varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q} (\mathbf{1}_{\mathcal{P}_{\alpha T}}+\mathbf{1}_{\mathcal{P}_{\alpha T}^{\textrm{c}}})\big). \end{equation*}

Remember that the events of the form $C_{k,l}(t)$ and the variable $L_{t}$ are defined in Section 4.1. The set $\mathcal{P}_{\alpha T}^{\textrm{c}}$ is a disjoint union of sets of the form $C_{k,l}(\alpha T)$ (with $k\geq1$) and $\{L_{\alpha T}>q/2\}$ (heuristically: if the dots are not paired on fragments, then either some of them are alone on their fragment, or none of them is alone on a fragment and some of them form a group of at least three on a fragment). As before, the event $\{\#\mathcal{L}_{1}=q\}$ is measurable with respect to $\sigma(\mathcal{L}_{2})$ (see (4.6)).

\begin{align*} \lim_{\varepsilon\rightarrow0}\varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{\#\mathcal{L}_{1}=q}\mathbf{1}_{\mathcal{P}_{\alpha T}^{\textrm{c}}}\big)=0. \end{align*}

We compute:

\begin{align*} &\varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{\#\mathcal{L}_{1}=q}\mathbf{1}_{\mathcal{P}_{\alpha T}}\big) = \varepsilon^{-q/2}\mathbb{E}\bigg(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{\#\mathcal{L}_{1}=q}\sum_{I_{q}\in\mathcal{I}_{q}}\mathbf{1}_{G_{\alpha T,I_{q}}}\bigg) \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\text{(as} F \text{is symmetric and $\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)$ is exchangeable)} \\[5pt] &= \frac{q!}{2^{q/2}({q}/{2})!}\varepsilon^{-q/2}\mathbb{E}\big(F\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{\#\mathcal{L}_{1}=q}\mathbf{1}_{G_{\alpha T}}\big) \\[5pt] &= \frac{q!\varepsilon^{-q/2}}{2^{q/2}({q}/{2})!}\frac{1}{q!}\sum_{\sigma\in\mathcal{S}_{q}} \mathbb{E}\big((f_{\sigma(1)}\otimes\dots\otimes f_{\sigma(q)})\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big) \mathbf{1}_{\#\mathcal{L}_{1}=q}\mathbf{1}_{G_{\alpha T}}\big) \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad \text{(by Lemma 4.4)} \\[5pt] &\underset{\varepsilon\rightarrow0}{\longrightarrow}\frac{1}{2^{q/2}({q}/{2})!} \sum_{\sigma\in\mathcal{S}_{q}}\prod_{i=1}^{p}V(f_{\sigma(2i-1)},f_{\sigma(2i)}) = \sum_{I\in\mathcal{I}_{q}}\prod_{\{a,b\}\in I}V(f_{a},f_{b}). \end{align*}

5. Results

We are interested in the probability measure $\gamma_{T}$ defined by its action on bounded measurable functions $F\;:\;[0,1]\rightarrow\mathbb{R}$ by

\begin{align*} \gamma_{T}(F)=\sum_{u\in\mathcal{U}_{\varepsilon}}\xi_{u}F\bigg(\frac{\xi_{u}}{\varepsilon}\bigg).\end{align*}
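For readers who wish to experiment numerically, the following minimal Python sketch simulates one realization of $\gamma_{T}$. The binary dislocation used in the function split below (two pieces with proportions uniform on [1/4, 3/4]) is a toy assumption made only for this illustration; it is not the dislocation measure $\nu$ of the model, but it is conservative and keeps every ratio bounded away from 1, in the spirit of Assumption 2.3.

import numpy as np

rng = np.random.default_rng(0)

def split(size):
    # Toy binary dislocation (an illustrative assumption, not the general
    # dislocation measure nu of the paper): the fragment breaks into two
    # pieces whose proportions are uniform on [1/4, 3/4], so every ratio is
    # bounded away from 1, in the spirit of Assumption 2.3.
    r = rng.uniform(0.25, 0.75)
    return size * r, size * (1.0 - r)

def frozen_fragments(eps):
    # Fragment the unit mass until every piece has size < eps (the mesh);
    # the returned list plays the role of (xi_u, u in U_eps).
    active, frozen = [1.0], []
    while active:
        s = active.pop()
        for piece in split(s):
            (frozen if piece < eps else active).append(piece)
    return frozen

def gamma_T(f, eps):
    # Empirical measure gamma_T applied to f, with T = -log(eps):
    # sum over frozen fragments of xi_u * f(xi_u / eps).
    sizes = np.asarray(frozen_fragments(eps))
    return float(np.sum(sizes * f(sizes / eps)))

print(gamma_T(lambda x: x, 1e-4))   # one realization of gamma_T(Id) for eps = 1e-4

The last line produces one realization of $\gamma_{T}(\operatorname{Id})$ for $\varepsilon=10^{-4}$, that is, $T=4\log(10)$.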

We define, for all $q \in \mathbb{N}^{*}$ and F from $[0,1]^{q}$ to $\mathbb{R}$ ,

\begin{align*} \gamma_{T}^{\otimes q}(F) = \sum_{a\;:\;[q]\rightarrow\mathcal{U}_{\varepsilon}}\xi_{a(1)}\cdots\xi_{a(q)} F\bigg(\frac{\xi_{a(1)}}{\varepsilon},\dots,\frac{\xi_{a(q)}}{\varepsilon}\bigg), \\[5pt] \gamma_{T}^{\odot q}(F) = \sum_{a\;:\;[q]\hookrightarrow\mathcal{U}_{\varepsilon}}\xi_{a(1)}\cdots\xi_{a(q)} F\bigg(\frac{\xi_{a(1)}}{\varepsilon},\dots,\frac{\xi_{a(q)}}{\varepsilon}\bigg),\end{align*}

where the last sum is taken over all the injective maps a from [q] to $\mathcal{U}_\varepsilon$. We set

\begin{align*} \Phi(F)\;:\;(y_{1},\dots,y_{q})\in(\mathbb{R}^{+})^{q}\mapsto F(\textrm{e}^{-y_{1}},\dots,\textrm{e}^{-y_{q}}). \end{align*}

The law $\gamma_{T}^{\otimes q}$ is the law of q fragments picked in $\mathcal{U}_{\varepsilon}$ with replacement; for each fragment, the probability of being picked is its size. The measure $\gamma_{T}^{\odot q}$ is not a law: $\gamma_{T}^{\odot q}(F)$ is an expectation over q fragments picked in $\mathcal{U}_{\varepsilon}$ with replacement (for each fragment, the probability of being picked is its size); in this expectation, we multiply the integrand by zero if two fragments are the same (and by one otherwise). The definition of Section 2.2 says that we can define the tagged fragments by painting colored dots on the stick [0, 1] (q dots of different colors, these are the $Y_{1}, \dots , Y_{q}$) and then by looking at which fragments of $\mathcal{U}_{\varepsilon}$ carry these dots. So, we get (remember $T=-\log\varepsilon$)

(5.1) \begin{align} \mathbb{E}\big(\gamma_{T}^{\otimes q}(F)\big) &= \mathbb{E}\big(\Phi(F)\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\big), \nonumber \\[5pt] \mathbb{E}\big(\gamma_{T}^{\odot q}(F)\big) & = \mathbb{E}\big(\Phi(F)\big(B_{T}^{(1)},\dots,B_{T}^{(q)}\big)\mathbf{1}_{\#\mathcal{L}_{1}=q}\big) .\end{align}

We define, for all bounded continuous $f\;:\;\mathbb{R}^{+}\rightarrow\mathbb{R}$ ,

(5.2) \begin{equation} \gamma_{\infty}(f)=\eta(\Phi(f)).\end{equation}
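Equivalently, since $\Phi(f)(y)=f(\textrm{e}^{-y})$ and $\eta$ is the stationary law of the processes $\big(B_{t}^{(i)}\big)$, we have $\gamma_{\infty}(f)=\mathbb{E}(f(\textrm{e}^{-B_{\infty}}))$, where $B_{\infty}$ is the random variable with law $\eta$ introduced before Fact 4.2.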

Proposition 5.1. (Law of large numbers.) We remind the reader that we have Fact 3.1, and that we are under Assumptions 2.1, 2.2, 2.3, and 3.1. Let f be a continuous function from [0, 1] to $\mathbb{R}$ . Then

\begin{align*} \gamma_{T}(f)\underset{T\rightarrow+\infty}{\overset{\textrm{a.s.}}{\longrightarrow}}\gamma_{\infty}(f). \end{align*}

(Remember $T=-\log\varepsilon$ .)

Proof. We take a bounded measurable function $f\;:\;[0,1]\rightarrow\mathbb{R}$ . We define $\overline{f}=f-\eta(\Phi(f))$ . We take an integer $q\geq2$ . We introduce the notation

\begin{align*} \text{for all } g\;:\;\mathbb{R}^{+}\rightarrow\mathbb{R}\text{ and all }(x_{1},\dots,x_{q})\in\mathbb{R}^{q},\quad g^{\otimes q}(x_{1},\dots,x_{q})=g(x_{1})g(x_{2})\dots g(x_{q}). \end{align*}

We have

\begin{align*} \qquad\qquad \mathbb{E}((\gamma_{T}(f)-\eta(\Phi(f)))^{q}) &= \mathbb{E}\big(\big(\gamma_{T}\big(\overline{f}\big)\big)^{q}\big) \\[5pt] &= \mathbb{E}\big(\gamma_{T}^{\otimes q}\big(\overline{f}^{\otimes q}\big)\big)\qquad\quad\;\;\, \text{(as $\big(B^{(1)},\dots,B^{(q)}\big)$ is exchangeable)} \\[5pt] &= \mathbb{E}\big(\gamma_{T}^{\otimes q}\big(\big(\overline{f}^{\otimes q}\big)_{\text{sym}}\big)\big) \\[5pt] &\leq \Vert\overline{f}\Vert_{\infty}^{q}\varepsilon^{q/2} \bigg\{K_{1}(q)+\Gamma_{1}^{q}C_{\text{tree}}(q)\bigg(\frac{1}{\delta}\bigg)^{q}(q+1)^{2}\bigg\}. \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\!\!\! \text{(by Corollary 4.2)} \end{align*}

We now take sequences $(T_{n}=\log(n))_{n\geq1}$ , $(\varepsilon_{n}=1/n)_{n\geq1}$ . We then have, for all n and for all $\iota>0$ ,

\begin{align*} \mathbb{P}([\gamma_{T_{n}}(f)-\eta(\Phi(f))]^{4}\geq\iota) \leq \frac{\Vert\overline{f}\Vert_{\infty}^{4}}{\iota n^{2}} \bigg\{K_{1}(4)+\Gamma_{1}^{4}C_{\text{tree}}(4)\bigg(\frac{1}{\delta}\bigg)^{4}\times25\bigg\}. \end{align*}

So, by the Borel–Cantelli lemma,

(5.3) \begin{equation} \gamma_{T_{n}}(f)\underset{n\rightarrow+\infty}{\overset{\text{a.s.}}{\longrightarrow}}\eta(\Phi(f)). \end{equation}

We now have a little more work to do to get to the result. Let n be in $\mathbb{N}^{*}$ . We use the decomposition (where $\mathcal{U}_{\varepsilon}$ is defined in Section 2.3 and $\sqcup$ stands for ‘disjoint union’, defined in Section 1.5) $\mathcal{U}_{\varepsilon_{n}}=\mathcal{U}_{\varepsilon_{n}}^{(1)}\sqcup\mathcal{U}_{\varepsilon_{n}}^{(2)}$ , where $\mathcal{U}_{\varepsilon_{n}}^{(1)} = \mathcal{U}_{\varepsilon_{n}}\cap\mathcal{U}_{\varepsilon_{n+1}}=\mathcal{U}_{\varepsilon_{n+1}}$ , $\mathcal{U}_{\varepsilon_{n}}^{(2)} = \mathcal{U}_{\varepsilon_{n}}\backslash\mathcal{U}_{\varepsilon_{n+1}}$ . For u in $\mathcal{U}_{\varepsilon_{n}}\backslash\mathcal{U}_{\varepsilon_{n+1}}$ , we set ${\boldsymbol{{d}}}(u)=\{v\;:\; u=\mathbf{\mathbf{m}}(v)\}$ ( $\mathbf{m}$ is defined in Section 2.1) and we observe that, for all u ( $T_{u}$ defined in (4.4)),

(5.4) \begin{equation} \sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}=\xi_{u}. \end{equation}

We can then write

\begin{equation*} \sum_{u\in\mathcal{U}_{\varepsilon_{n}}}\xi_{u}f\bigg(\frac{\xi_{u}}{\varepsilon_{n}}\bigg) = \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f(n\xi_{u}) + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\xi_{u}f(n\xi_{u}). \end{equation*}

There exists $n_{1}$ such that, for n bigger than $n_{1}$, $\textrm{e}^{-a}<\varepsilon_{n+1}/\varepsilon_{n}$ (remember Assumption 2.3). We suppose $n\geq n_{1}$; we then have, for all u in $\mathcal{U}_{\varepsilon_{n}}^{(2)}$, $\varepsilon_{n}>\xi_{u}\geq\varepsilon_{n+1}$ and, for any v in $\mathbf{d}(u)$, $\xi_{v}\leq \textrm{e}^{-a}\xi_{u}<\textrm{e}^{-a}\varepsilon_{n}<\varepsilon_{n+1}$. So we get

(5.5) \begin{equation} \sum_{u\in\mathcal{U}_{\varepsilon_{n+1}}}\xi_{u}f\bigg(\frac{\xi_{u}}{\varepsilon_{n+1}}\bigg) = \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f((n+1)\xi_{u}) + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}f((n+1)\xi_{v}). \end{equation}

Thus we have, for $n\geq n_{1}$ ,

(5.6) \begin{align} & \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}f((n+1)\xi_{v}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\xi_{u}f(n\xi_{u})\Bigg| \nonumber \\[5pt] & \qquad\qquad \leq |\gamma_{T_{n+1}}(f)-\gamma_{T_{n}}(f)| + \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f((n+1)\xi_{u}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f(n\xi_{u})\Bigg|. \end{align}

If we take $f=\operatorname{Id}$ , the terms in the equation above can be bounded:

(5.7) \begin{align} \qquad & \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}f((n+1)\xi_{v}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\xi_{u}f(n\xi_{u})\Bigg| \nonumber \\[5pt] & \geq \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}} \Bigg(\xi_{u}f(n\xi_{u})-\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}f(n\xi_{v})\Bigg)\Bigg| - \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)} (\xi_{v}f(n\xi_{v})-\xi_{v}f((n+1)\xi_{v}))\Bigg| \nonumber \\[5pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\!\!\!\!\qquad\qquad\qquad \text{(by Assumption 2.3)} \nonumber \\[5pt] &\geq \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}} \Bigg(\xi_{u}f(n\xi_{u})-\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}f(n\xi_{u})e^{-a}\Bigg) - \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)} (\xi_{v}f(n\xi_{v})-\xi_{v}f((n+1)\xi_{v}))\Bigg| \nonumber \\[5pt] &\geq \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\xi_{u}(1-\textrm{e}^{-a})\frac{n}{n+1} - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}\frac{1}{n+1}, \\[5pt] &\!\!\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(by (5.4))} \nonumber \end{align}
(5.8) \begin{align} |\gamma_{T_{n+1}}(f)-\gamma_{T_{n}}(f)| &+ \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f((n+1)\xi_{u}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}f(n\xi_{u})\Bigg| \nonumber \\[5pt] & \qquad\qquad\qquad\qquad \leq |\gamma_{T_{n+1}}(f)-\gamma_{T_{n}}(f)| + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}\frac{1}{n}. \end{align}

Let $\iota>0$. By (5.3), for almost every $\omega$ in $\Omega$ there exists $n_{2}=n_{2}(\omega)$ such that, for $n\geq n_{2}$, $|\gamma_{T_{n+1}}(f)-\gamma_{T_{n}}(f)|<\iota$. For $n\geq n_{1}\vee n_{2}$, we can then write

(5.9) \begin{align}\qquad\qquad &\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\xi_{u} \leq \frac{n+1}{n(1-\textrm{e}^{-a})}\Bigg(\iota + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}}\sum_{v\in{\boldsymbol{{d}}}(u)}\xi_{v}\frac{1}{n+1} + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}}\xi_{u}\frac{1}{n}\Bigg) \nonumber \\[5pt] & \,\quad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{(by (5.6), (5.7), and (5.8))} \nonumber \\[5pt] & \qquad \leq \frac{n+1}{n(1-\textrm{e}^{-a})}\bigg(\iota+\frac{1}{n}\bigg). \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\!\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(by (5.4))} \nonumber \end{align}

Let $n\geq n_{1}\vee n_{2}$ and $t \in (T_{n},T_{n+1})$ . We can use the decomposition

(5.10) \begin{equation} \mathcal{U}_{\varepsilon_{n}} = \mathcal{U}_{\varepsilon_{n}}^{(1)}(t)\sqcup\mathcal{U}_{\varepsilon_{n}}^{(2)}(t), \quad \text{where } \mathcal{U}_{\varepsilon_{n}}^{(1)}(t) = \mathcal{U}_{\varepsilon_{n}}\cap\mathcal{U}_{\textrm{e}^{-t}} = \mathcal{U}_{\textrm{e}^{-t}},\ \mathcal{U}_{\varepsilon_{n}}^{(2)}(t) = \mathcal{U}_{\varepsilon_{n}}\backslash\mathcal{U}_{\textrm{e}^{-t}}. \end{equation}

For u in $\mathcal{U}_{\varepsilon_{n}}\backslash\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)$ , we set ${\boldsymbol{{d}}}(u,t)=\{v\in\mathcal{U}_{e^{-t}}\;:\; u=\mathbf{m}(v)\}$ . As $n\geq n_{1}$ , $\mathbf{d}(u,t)=\mathbf{d}(u)$ and we have

(5.11) \begin{equation} \sum_{v\in{\boldsymbol{{d}}}(u,t)}\xi_{v}=\xi_{u}. \end{equation}

Similar to (5.5), we have

\begin{equation*} \sum_{u\in\mathcal{U}_{e^{-t}}}\xi_{u}f\big(\textrm{e}^{t}\xi_{u}\big) = \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f\big(\textrm{e}^{t}\xi_{u}\big) + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\sum_{v\in{\boldsymbol{{d}}}(u,t)}\xi_{v}f\big(\textrm{e}^{t}\xi_{v}\big). \end{equation*}

We fix f continuous (hence uniformly continuous) from [0, 1] to $\mathbb{R}$; there exists $n_{3}\in\mathbb{N}^{*}$ such that, for all $x,y\in[0,1]$, $|x-y|\leq1/n_{3}\Rightarrow|f(x)-f(y)|<\iota$. Suppose that $n\geq n_{1}\vee n_{2}\vee n_{3}$. Then, using (5.10) and (5.11), we have, for all $t\in[T_{n},T_{n+1}]$,

(5.12) \begin{align} \qquad & |\gamma_{t}(f)-\gamma_{T_{n}}(f)| = \Bigg|\sum_{u\in\mathcal{U}_{e^{-t}}}\xi_{u}f\big(\textrm{e}^{t}\xi_{u}\big) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f(n\xi_{u}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\xi_{u}f(n\xi_{u})\Bigg| \nonumber \\[5pt] &= \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f\big(\textrm{e}^{t}\xi_{u}\big) + \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\sum_{v\in{\boldsymbol{{d}}}(u,t)}\xi_{v}f\big(\textrm{e}^{t}\xi_{v}\big) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f(n\xi_{u}) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\xi_{u}f(n\xi_{u})\Bigg| \nonumber\\[5pt] &\leq \Bigg|\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f\big(\textrm{e}^{t}\xi_{u}\big) - \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}f(n\xi_{u})\Bigg| + 2\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\xi_{u}\Vert f\Vert_{\infty} \nonumber \\[5pt] & \leq \sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(1)}(t)}\xi_{u}\iota + 2\sum_{u\in\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)}\xi_{u}\Vert f\Vert_{\infty} \nonumber \\[5pt] &\leq \iota + 2\Vert f\Vert_{\infty}\frac{n+1}{n(1-\textrm{e}^{-a})}\bigg(\iota+\frac{1}{n}\bigg). \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\, \text{(using (5.9), and since $\mathcal{U}_{\varepsilon_{n}}^{(2)}(t)\subset\mathcal{U}_{\varepsilon_{n}}^{(2)}$)}\nonumber \end{align}

Equations (5.3) and (5.12) prove the desired result.
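As an aside, the toy simulation sketch given after the definition of $\gamma_{T}$ above (with its illustrative binary dislocation, which is an assumption made only for that sketch) can be used to observe this stabilization numerically. Note that each call below simulates an independent fragmentation tree, so the output only illustrates the concentration of $\gamma_{T}(\operatorname{Id})$ around its limit, not the almost sure convergence along a single tree.

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, gamma_T(lambda x: x, eps))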

The set $\mathcal{B}_{\textrm{sym}}^0(1)$ is defined in Section 4.1.

Theorem 5.1. (Central limit theorem.) We remind the reader that we have Fact 3.1, and that we are under Assumptions 2.1, 2.2, 2.3, and 3.1. Let $q \in \mathbb{N}^{*}$. For functions $f_{1}, \ldots , f_{q}$ that are continuous and in $\mathcal{B}_{\textrm{sym}}^0(1)$, we have

\begin{align*} \varepsilon^{-1/2}(\gamma_{T}(f_{1}),\dots,\gamma_{T}(f_{q})) \underset{T\rightarrow+\infty}{\overset{\text{law}}{\longrightarrow}} \mathcal{N}(0,(K(f_{i},f_{j}))_{1\leq i,j\leq q})\qquad({\varepsilon=\textrm{e}^{-T}}).\end{align*}

(K is given in (5.13).)

Proof. Let $f_{1}, \ldots , f_{q}$ be continuous functions in $\mathcal{B}_{\textrm{sym}}^0(1)$ and let $v_{1},\dots,v_{q}\in\mathbb{R}$.

First, we expand the following product (remember that, for $u \in \mathcal{U}_{\varepsilon}$, $\xi_{u}/\varepsilon<1$ a.s.):

\begin{align*} \qquad\quad & \prod_{u\in\mathcal{U}_{\varepsilon}}\bigg(1+\sqrt{\varepsilon}\frac{\xi_{u}}{\varepsilon} (iv_{1}f_{1}+\dots+iv_{q}f_{q})\bigg(\frac{\xi_{u}}{\varepsilon}\bigg)\bigg) \nonumber \\[5pt] & = \exp\Bigg(\sum_{u\in\mathcal{U}_{\varepsilon}}\log\bigg[1 + \sqrt{\varepsilon}\operatorname{Id}\times(iv_{1}f_{1}+\dots+iv_{q}f_{q}) \bigg(\frac{\xi_{u}}{\varepsilon}\bigg)\bigg]\Bigg) \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\,\qquad\qquad\qquad\qquad \text{(for $\varepsilon$ small enough)} \nonumber \\[5pt] & = \exp\Bigg(\sum_{u\in\mathcal{U_{\varepsilon}}}\sum_{k\geq1}\frac{(\!-\!1)^{k+1}}{k}\varepsilon^{k/2} (\!\operatorname{Id}\times(iv_{1}f_{1}+\dots+iv_{q}f_{q}))^{k}\bigg(\frac{\xi_{u}}{\varepsilon}\bigg)\Bigg) \nonumber \\[5pt] & = \exp\bigg(\frac{1}{\sqrt{\varepsilon}\,}\gamma_{T}(iv_{1}f_{1}+\dots+iv_{q}f_{q}) + \frac{1}{2}\gamma_{T}(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2})+R_{\varepsilon}\bigg), \end{align*}

where

\begin{align*} R_{\varepsilon} &= \sum_{k\geq3}\sum_{u\in\mathcal{U}_{\varepsilon}}\frac{(\!-\!1)^{k+1}}{k}\varepsilon^{k/2-1}\xi_{u} \bigg(\frac{\xi_{u}}{\varepsilon}\bigg)^{k-1}(iv_{1}f_{1}+\dots+iv_{q}f_{q})^{k} \bigg(\frac{\xi_{u}}{\varepsilon}\bigg) \\[5pt] &= \sum_{k\geq3}\frac{(\!-\!1)^{k+1}}{k}\varepsilon^{k/2-1}\gamma_{T} ((\!\operatorname{Id})^{k-1}(iv_{1}f_{1}+\dots+iv_{q}f_{q})^{k}), \\[5pt] |R_{\varepsilon}| &\leq\sum_{k\geq3}\frac{\varepsilon^{k/2-1}}{k}(|v_{1}|\Vert f_{1}\Vert_{\infty}+\dots+|v_{q}| \Vert f_{q}\Vert_{\infty})^{k}=O(\sqrt{\varepsilon}). \end{align*}

We have, for some constant C (using $x\in\mathbb{R}\Rightarrow|\textrm{e}^{ix}|=1$ ),

\begin{align*} \qquad & \mathbb{E}\bigg(\bigg|\exp\bigg(\frac{1}{\sqrt{\varepsilon}\,}\gamma_{T}(iv_{1}f_{1}+\dots+iv_{q}f_{q}) + \frac{1}{2}\gamma_{T}(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2})+R_{\varepsilon}\bigg) \\[5pt] & \qquad - \exp\bigg(\frac{1}{\sqrt{\varepsilon}\,}\gamma_{T}(iv_{1}f_{1}+\dots+iv_{q}f_{q}) + \frac{1}{2}\eta(\Phi(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2}))\bigg)\bigg|\bigg) \\[5pt] & \leq \mathbb{E}\bigg(C\bigg|\frac{1}{2}\gamma_{T}(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2}) - \frac{1}{2}\eta(\Phi(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2}))+R_{\varepsilon}\bigg|\bigg) \\[5pt] &\underset{\varepsilon\rightarrow0}{\longrightarrow}0. \qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad\!\! \text{(by Proposition 5.1)} \end{align*}

Second, we expand the same product in a different way. We have (the order on $\mathcal{U}$ is defined in Section 2.1),

\begin{align*} \quad\qquad & \prod_{u\in\mathcal{U}_{\varepsilon}}\bigg(1+\sqrt{\varepsilon}\frac{\xi_{u}}{\varepsilon} (iv_{1}f_{1}+\dots+iv_{q}f_{q})\bigg(\frac{\xi_{u}}{\varepsilon}\bigg)\bigg) \\[5pt] & = \sum_{k\geq0}\varepsilon^{-k/2}i^{k}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{\substack{ u_{1},\dots,u_{k}\in\mathcal{U_{\varepsilon}} \\ u_{1}<\dots<u_{k}}} \xi_{u_{1}}\cdots\xi_{u_{k}}f_{j_{1}}\bigg(\frac{\xi_{u_{1}}}{\varepsilon}\bigg)\cdots f_{j_{k}}\bigg(\frac{\xi_{u_{k}}}{\varepsilon}\bigg) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\qquad\qquad \text{(a detailed proof can be found in Section D)} \\[5pt] & = \sum_{k\geq0}\varepsilon^{-k/2}i^{k}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \frac{1}{k!}\gamma_{T}^{\odot k}(f_{j_{1}}\otimes\dots\otimes f_{j_{k}}). \end{align*}

We have, for all k,

\begin{align*} & \Bigg|\varepsilon^{-k/2}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \frac{1}{k!}\mathbb{E}\big(\gamma_{T}^{\odot k}(f_{j_{1}}\otimes\dots\otimes f_{j_{k}})\big)\Bigg| \\[5pt] & \quad \qquad \leq \varepsilon^{-k/2}\times\frac{q^{k}\sup(|v_{1}|,\dots,|v_{q}|)^{k} \sup(\Vert f_{1}\Vert_{\infty},\dots,\Vert f_{q}\Vert_{\infty})^{k}}{k!}. \end{align*}

So, by Corollary 4.1, Proposition 4.1, and (5.1), we get

\begin{align*} \qquad\qquad & \mathbb{E}\Bigg(\prod_{u\in\mathcal{U}_{\varepsilon}}\bigg(1+\sqrt{\varepsilon}\frac{\xi_{u}}{\varepsilon} (iv_{1}f_{1}+\dots+iv_{q}f_{q})\bigg(\frac{\xi_{u}}{\varepsilon}\bigg)\bigg)\Bigg) \\[5pt] & \underset{\varepsilon\rightarrow0}{\longrightarrow}\sum_{\substack{k\geq0 \\ k\text{ even}}} (\!-\!1)^{k/2}\sum_{1\leq j_{1},\dots,j_{k}\leq q}\frac{1}{k!}\sum_{I\in I_{k}}\prod_{\{a,b\}\in I} V(v_{j_{a}}f_{j_{a}},v_{j_{b}}f_{j_{b}}) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\;\!\qquad\qquad \text{(a detailed proof can be found in Section C)} \\[5pt] &= \sum_{\substack{k\geq0 \\ k\text{ even}}} \frac{(\!-\!1)^{k/2}}{2^{k/2}(k/2)!}\sum_{1\leq j_{1},\dots,j_{k}\leq q} V(v_{j_{1}}f_{j_{1}},v_{j_{2}}f_{j_{2}})\cdots V(v_{j_{k-1}}f_{j_{k-1}},v_{j_{k}}f_{j_{k}}) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(using (4.23))} \\[5pt] &= \sum_{\substack{k\geq0 \\ k\text{ even}}} \frac{(\!-\!1)^{k/2}}{2^{k/2}(k/2)!} \Bigg(\sum_{1\leq j_{1},j_{2}\leq q}v_{j_{1}}v_{j_{2}}V(f_{j_{1}},f_{j_{2}})\Bigg)^{k/2} \\[5pt] &= \exp\Bigg({-}\frac{1}{2}\sum_{1\leq j_{1},j_{2}\leq q}v_{j_{1}}v_{j_{2}}V(f_{j_{1}},f_{j_{2}})\Bigg). \end{align*}

In conclusion, we have

\begin{align*}& \mathbb{E}\bigg(\exp\bigg(\frac{1}{\sqrt{\varepsilon}\,}\gamma_{T}(iv_{1}f_{1}+\dots+iv_{q}f_{q})\bigg)\bigg) \\[5pt] &\qquad \underset{\varepsilon\rightarrow0}{\longrightarrow} \exp\Bigg(-\frac{1}{2}\eta(\Phi(\!\operatorname{Id}\times(v_{1}f_{1}+\dots+v_{q}f_{q})^{2})) - \frac{1}{2}\sum_{1\leq j_{1},j_{2}\leq q}v_{j_{1}}v_{j_{2}}V(f_{j_{1}},f_{j_{2}})\Bigg). \end{align*}

So we get the desired result with, for all f, g,

(5.13) \begin{equation} K(f,g)=\eta(\Phi(\!\operatorname{Id}\times fg))+V(f,g) \end{equation}

(V is defined in (4.21)).
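In particular, for $q=1$ and a single continuous $f\in\mathcal{B}_{\textrm{sym}}^{0}(1)$, Theorem 5.1 reads $\varepsilon^{-1/2}\gamma_{T}(f)\underset{T\rightarrow+\infty}{\overset{\text{law}}{\longrightarrow}}\mathcal{N}\big(0,\eta(\Phi(\operatorname{Id}\times f^{2}))+V(f,f)\big)$.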

Appendix A. Detailed proof of a bound appearing in the proof of Lemma 4.1

Lemma A.1. We have, for any f appearing in the proof of Lemma 4.1,

\begin{align*} \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\, u\in\mathcal{T}_{2}}\mid\mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \leq \prod_{u\in\mathcal{T}_{2}\backslash\{0\}}\textrm{e}^{-(\#f(u)-1)(T_{u}-T_{{\boldsymbol{{m}}}(u)})}. \end{align*}

Proof. We prove this by induction on the cardinality of $\mathcal{T}_{2}$.

If $\#\mathcal{T}_{2}=1$ , then $\mathcal{T}_{2}=\{0\}$ and the claim is true.

Suppose now that $\#\mathcal{T}_{2}=k$ and the claim is true up to the cardinality $k-1$ . There exists v in $\mathcal{T}_{2}$ such that (v, i) is not in $\mathcal{T}_{2}$ , for any i in $\mathbb{N}^{*}$ . We set $\mathcal{T}_{2}'=\mathcal{T}_{2}\backslash\{v\}$ , $\mathcal{L}_{2}'=\mathcal{L}_{2}\backslash\{v\}$ , $m_{2}'\;:\; u\in\mathcal{T}_{2}'\rightarrow(\xi_{u},\inf\{i,i\in A_{u}\})$ . We set $f(v)=\{i_{1},\dots,i_{p}\}$ (with $i_{1}<\dots<i_{p}$ ), $f({\boldsymbol{{m}}}(v))=\{i_{1},\dots,i_{p},i_{p+1},\dots,i_{q}\}$ (with $i_{p+1}<\dots<i_{q}$ ). We suppose that $m_{2}(v)=(\xi_{v},i_{1})$ , because if $m_{2}(v)=(\xi_{v},j)$ with $j\neq i_{1}$ then $A_{v}\neq f(v)$ for all $\omega$ , and then the left-hand side of the inequality above is zero. We have,

\begin{align*} & \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] & = \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mathbb{E}(\mathbf{1}_{A_{v=f(v)}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2},(A_{u},u\in\mathcal{T}_{2}')) \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2})\\[5pt] & = \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mathbb{E}(\mathbf{1}_{i_{1},\dots,i_{p}\in A_{v}}\mathbf{1}_{i_{p+1},\dots,i_{q}\notin A_{v}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2},(A_{u},u\in\mathcal{T}_{2}')) \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] & \qquad\qquad\qquad\qquad \text{(remember we condition on $m_{2}$, so the $\mathbf{1}_{i_{1},\dots,i_{p}}$ can be replaced by ${\mathbf{1}_{i_{2},\dots,i_{p}}}$)} \\[5pt] & = \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mathbb{E}(\mathbf{1}_{i_{2},\dots,i_{p}\in A_{v}}\mathbf{1}_{i_{p+1},\dots,i_{q}\notin A_{v}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2},(A_{u},u\in\mathcal{T}_{2}')) \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] & \leq \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mathbb{E}(\mathbf{1}_{i_{2},\dots,i_{p}\in A_{v}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2},(A_{u},u\in\mathcal{T}_{2}')) \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\!\!\! \text{(because the $(Y_{j})$ introduced in Section 2.2 are independent)} \\[5pt] & = \mathbb{E}\Bigg(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \prod_{r=2}^{p}\mathbb{E}(\mathbf{1}_{i_{r}\in A_{v}} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2},(A_{u},u\in\mathcal{T}_{2}')) \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}\Bigg) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\!\!\!\text{(because of (2.1); if $v\in\mathcal{L}_{2}$ then $\prod_{r=2}^{p}\dots$ is empty and thus $=1$)} \\[5pt] & = \mathbb{E}\Bigg(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \prod_{r=2}^{p}\widetilde{\xi}_{v} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}\Bigg) \qquad\qquad\qquad\quad\;\;\,\text{(by (4.4) and Proposition 2.1)} \\[5pt] & = \textrm{e}^{-(\#f(v)-1)(T_{v}-T_{{\boldsymbol{{m}}}(v)})} \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] &= \textrm{e}^{-(\#f(v)-1)(T_{v}-T_{{\boldsymbol{{m}}}(v)})}\mathbb{E}( \mathbb{E}(\mathbf{1}_{A_{u}=f(u),\,\textrm{for}\,\textrm{all}\,u\in\mathcal{T}_{2}'} \mid \mathcal{L}_{2}',\mathcal{T}_{2}',m_{2}') \mid \mathcal{L}_{2},\mathcal{T}_{2},m_{2}) \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\!\!\!\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{(by recurrence)} \\[5pt] & \leq \prod_{u\in\mathcal{T}_{2}\backslash\{0\}}\textrm{e}^{-(\#f(u)-1)(T_{u}-T_{{\boldsymbol{{m}}}(u)})}. \end{align*}

Appendix B. Detailed proof of a bound appearing in the proof of Corollary 4.2

Lemma B.1. Let q be in $\mathbb{N}$ . Then $\sum_{k'\in[q]}\big(1+(q-k'-1)_{+}\big)\leq(q+1)^{2}$ .

Proof. We have

\begin{equation*} \sum_{k'\in[q]}\big(1+(q-k'-1)_{+}\big) = q + \sum_{k'\in[q-2]}(q-k'-1) = q + \sum_{i=1}^{q-2}i \leq \frac{q(q+1)}{2} \leq (q+1)^{2}. \end{equation*}
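
For instance, with $q=3$ the left-hand side equals

\begin{equation*} \big(1+(3-1-1)_{+}\big)+\big(1+(3-2-1)_{+}\big)+\big(1+(3-3-1)_{+}\big) = 2+1+1 = 4 \leq 16 = (3+1)^{2}. \end{equation*}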

Appendix C. Detailed proof of an equality appearing in the proof of Theorem 5.1

Lemma C.1. Let $q\in\mathbb{N}^{*}$ . Suppose we have q functions $g_{1}, \dots , g_{q}$ in $\mathcal{B}_{\textrm{sym}}^0(1)$ . Then, for all even k in $\mathbb{N}$ ,

\begin{equation*} \sum_{1\leq j_{1},\dots,j_{k}\leq q}\sum_{I\in\mathcal{I}_{k}} \prod_{\{a,b\}\in I}V(g_{j_{a}},g_{j_{b}}) = \frac{k!}{2^{k/2}(k/2)!} \sum_{1\leq j_{1},\dots,j_{k}\leq q}V(g_{j_{1}},g_{j_{2}})\cdots V(g_{j_{k-1}},g_{j_{k}}). \end{equation*}

Proof. We set

\begin{align*} \sum_{1\leq j_{1},\dots,j_{k}\leq q}\sum_{I\in\mathcal{I}_{k}} \prod_{\{a,b\}\in I}V(g_{j_{a}},g_{j_{b}}) = (\textrm{I}), \quad \sum_{1\leq j_{1},\dots,j_{k}\leq q}V(g_{j_{1}},g_{j_{2}})\cdots V(g_{j_{k-1}},g_{j_{k}}) = (\textrm{II}). \end{align*}

Suppose we have $i_{1}, \dots , i_{k} \in [q]$ , all distinct. There exist integers $N_{1}$ and $N_{2}$ such that:

  • the term (I) has $N_{1}$ terms $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ (up to permutations; that is, we consider that $V(g_{i_{3}},g_{i_{4}})V(g_{i_{2}},g_{i_{1}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ and $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ are the same term);

  • the term (II) has $N_{2}$ terms $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ (again, up to permutations).

These numbers $N_{1}$ and $N_{2}$ do not depend on $i_{1}, \dots , i_{k}$ . In the case where the indices $i_{1}, \dots , i_{k}$ are not all distinct, the number of terms equal to $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ in (I) and (II) is obtained just as easily. For example, if $i_{2}=i_{1}$ and $i_{1},\,i_{3},\dots,i_{k}$ are distinct, then

  • the term (I) has $2N_{1}$ terms $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ ;

  • the term (II) has $2N_{2}$ terms $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$

(we simply multiply by the number of $\sigma$ in $\mathcal{S}_{k}$ such that $(i_{1},i_{2},\dots,i_{k})=(i_{\sigma(1)},i_{\sigma(2)},\dots,i_{\sigma(k)})$ ). We do not need to know $N_{1}$ and $N_{2}$ themselves; we only need the ratio $N_{1}/N_{2}$ , since the numbers of occurrences of every product $V(g_{i_{1}},g_{i_{2}})\cdots V(g_{i_{k-1}},g_{i_{k}})$ in (I) and (II) are then in this same ratio. By taking $V(g,f)=1$ for all g, f, we get $(\textrm{I})=q^{k}\,\#\mathcal{I}_{k}$ and $(\textrm{II})=q^{k}$ , so $N_{1}/N_{2}=\#\mathcal{I}_{k}=k!/(2^{k/2}(k/2)!)$ , which proves the lemma.
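
For instance, for $k=4$ , identifying $\mathcal{I}_{4}$ with the set of partitions of $\{1,2,3,4\}$ into pairs, we have $\mathcal{I}_{4}=\{\{\{1,2\},\{3,4\}\},\{\{1,3\},\{2,4\}\},\{\{1,4\},\{2,3\}\}\}$ , so $\#\mathcal{I}_{4}=3=4!/(2^{2}2!)$ , and the identity of Lemma C.1 reads

\begin{align*} & \sum_{1\leq j_{1},\dots,j_{4}\leq q} \big(V(g_{j_{1}},g_{j_{2}})V(g_{j_{3}},g_{j_{4}}) + V(g_{j_{1}},g_{j_{3}})V(g_{j_{2}},g_{j_{4}}) + V(g_{j_{1}},g_{j_{4}})V(g_{j_{2}},g_{j_{3}})\big) \\[5pt] & \qquad\qquad = 3\sum_{1\leq j_{1},\dots,j_{4}\leq q}V(g_{j_{1}},g_{j_{2}})V(g_{j_{3}},g_{j_{4}}), \end{align*}

each of the three sums on the left-hand side being equal to the first one after a relabeling of the summation indices.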

Appendix D. Detailed proof of an equality appearing in the proof of Theorem 5.1

Lemma D.1. Let $f_{1}, \ldots , f_{q}$ be in $\mathcal{B}_{\textrm{sym}}^0(1)$ , $k \in \mathbb{N}$ , and $v_{1},\dots,v_{q}\in\mathbb{R}$ . Then

\begin{align*} & \sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}}\gamma_{T}^{\odot k} (f_{j_{1}}\otimes\dots\otimes f_{j_{k}}) \\[5pt] & \qquad\qquad = k!\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{\substack{ u_{1},\dots,u_{k}\in\mathcal{U_{\varepsilon}} \\ u_{1}<\dots<u_{k}}} \xi_{u_{1}}\cdots\xi_{u_{k}}f_{j_{1}}\bigg(\frac{\xi_{u_{1}}}{\varepsilon}\bigg)\cdots f_{j_{k}}\bigg(\frac{\xi_{u_{k}}}{\varepsilon}\bigg).\end{align*}

Proof. We have

\begin{align*} & \sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}}\gamma_{T}^{\odot k} (f_{j_{1}}\otimes\dots\otimes f_{j_{k}}) \\[5pt] & = \sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{a:[k]\hookrightarrow\mathcal{U}_{\varepsilon}}\xi_{a(1)}\cdots\xi_{a(k)} f_{j_{1}}\bigg(\frac{\xi_{a(1)}}{\varepsilon}\bigg) \cdots f_{j_{k}}\bigg(\frac{\xi_{a(k)}}{\varepsilon}\bigg) \nonumber \\[5pt] &\qquad\qquad \text{(for all injections} a, \text{there is exactly one $\sigma_{a}\in\mathcal{S}_{k}$ such that $a(\sigma_{a}(1))<\cdots<a(\sigma_{a}(k))$)} \\[5pt] & = \sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{a:[k]\hookrightarrow\mathcal{U}_{\varepsilon}}\xi_{a(\sigma_{a}(1))}\cdots\xi_{a(\sigma_{a}(k))} f_{j_{\sigma_{a}(1)}}\bigg(\frac{\xi_{a(\sigma_{a}(1))}}{\varepsilon}\bigg) \cdots f_{j_{\sigma_{a}(k)}}\bigg(\frac{\xi_{a(\sigma_{a}(k))}}{\varepsilon}\bigg) \nonumber \\[5pt] & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;\;\;\! \text{(for $\tau\in\mathcal{S}_{k}$, we set $\mathcal{E}(\tau)=\{a\;:\;[k]\hookrightarrow\mathcal{U}_{\varepsilon}\;:\;\sigma_{a}=\tau\}$)} \nonumber \\[5pt] & = \sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}}\sum_{\tau\in\mathcal{S}_{k}} \sum_{a\in\mathcal{E}(\tau)}\xi_{a(\tau(1))}\cdots\xi_{a(\tau(k))} f_{j_{\tau(1)}}\bigg(\frac{\xi_{a(\tau(1))}}{\varepsilon}\bigg) \cdots f_{j_{\tau(k)}}\bigg(\frac{\xi_{a(\tau(k))}}{\varepsilon}\bigg) \nonumber \\[5pt] & = \sum_{\tau\in\mathcal{S}_{k}}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{a\in\mathcal{E}(\tau)}\xi_{a(\tau(1))}\cdots\xi_{a(\tau(k))} f_{j_{\tau(1)}}\bigg(\frac{\xi_{a(\tau(1))}}{\varepsilon}\bigg) \cdots f_{j_{\tau(k)}}\bigg(\frac{\xi_{a(\tau(k))}}{\varepsilon}\bigg) \nonumber\\[5pt] & = \sum_{\tau\in\mathcal{S}_{k}}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{\tau(1)}}\cdots v_{j_{\tau(k)}} \sum_{a\in\mathcal{E}(\tau)}\xi_{a(\tau(1))}\cdots\xi_{a(\tau(k))} f_{j_{\tau(1)}}\bigg(\frac{\xi_{a(\tau(1))}}{\varepsilon}\bigg) \cdots f_{j_{\tau(k)}}\bigg(\frac{\xi_{a(\tau(k))}}{\varepsilon}\bigg) \nonumber \\[5pt] & = \sum_{\tau\in\mathcal{S}_{k}}\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{\tau(1)}}\cdots v_{j_{\tau(k)}} \sum_{\overset{u_{1},\dots,u_{k}\in\mathcal{U}_{\varepsilon}}{u_{1}<\dots<u_{k}}}\xi_{u_{1}}\cdots \xi_{u_{k}} f_{j_{\tau(1)}}\bigg(\frac{\xi_{u_{1}}}{\varepsilon}\bigg) \cdots f_{j_{\tau(k)}}\bigg(\frac{\xi_{u_{k}}}{\varepsilon}\bigg). \end{align*}

The map (‘ $\hookrightarrow$ ’ meaning that a map is injective)

\begin{align*} (a\;:\;[k]\rightarrow[q],\,\tau\;:\;[k]\hookrightarrow[k])\overset{\Theta}{\longrightarrow}a\circ\tau \end{align*}

is such that, for all $b\;:\;[k]\rightarrow[q]$ , $\#\Theta^{-1}(\{b\})=k!$ . So the above quantity is equal to

\begin{equation*} k!\sum_{1\leq j_{1},\dots,j_{k}\leq q}v_{j_{1}}\cdots v_{j_{k}} \sum_{\overset{u_{1},\dots,u_{k}\in\mathcal{U}_{\varepsilon}}{u_{1}<\dots<u_{k}}}\xi_{u_{1}}\cdots\xi_{u_{k}} f_{j_{1}}\bigg(\frac{\xi_{u_{1}}}{\varepsilon}\bigg) \cdots f_{j_{k}}\bigg(\frac{\xi_{u_{k}}}{\varepsilon}\bigg). \end{equation*}
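
For instance, for $k=2$ , $q=1$ , and $v_{1}=1$ , the first equality in the proof gives $\gamma_{T}^{\odot 2}(f_{1}\otimes f_{1}) = \sum_{a:[2]\hookrightarrow\mathcal{U}_{\varepsilon}} \xi_{a(1)}\xi_{a(2)} f_{1}(\xi_{a(1)}/\varepsilon)f_{1}(\xi_{a(2)}/\varepsilon)$ ; each unordered pair $\{u_{1},u_{2}\}\subset\mathcal{U}_{\varepsilon}$ is reached by exactly $2!=2$ injections, whose contributions coincide, so that

\begin{equation*} \gamma_{T}^{\odot 2}(f_{1}\otimes f_{1}) = 2\sum_{\substack{u_{1},u_{2}\in\mathcal{U}_{\varepsilon} \\ u_{1}<u_{2}}}\xi_{u_{1}}\xi_{u_{2}} f_{1}\bigg(\frac{\xi_{u_{1}}}{\varepsilon}\bigg)f_{1}\bigg(\frac{\xi_{u_{2}}}{\varepsilon}\bigg), \end{equation*}

in agreement with the statement of the lemma.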

Figure 1. State of the art.

Figure 2. Process $B^{(1)}$ and $C^{(1)}$.

Figure 3. Renewal process with delay.

Figure 4. Processes $\widehat{B}^{(1),v}$, $\widehat{B}^{(2),v}$.

Figure 5. $B_{kb}^{(1)}$ and $C_{kb}^{(1)}$.

Figure 6. Example tree and marks.