
Almost sure convergence and second moments of geometric functionals of fractal percolation

Published online by Cambridge University Press:  28 November 2023

Michael A. Klatt*
Affiliation:
Heinrich-Heine-Universität Düsseldorf
Steffen Winter*
Affiliation:
Karlsruhe Institute of Technology
*Postal address: Institut für Theoretische Physik II: Weiche Materie, Heinrich-Heine-Universität Düsseldorf, 40225 Düsseldorf, Germany; Experimental Physics, Saarland University, Center for Biophysics, 66123 Saarbrücken, Germany. Present address: Institut für KI Sicherheit, Deutsches Zentrum für Luft- und Raumfahrt, Wilhelm-Runge-Straße 10, 89081 Ulm, Germany; Institut für Materialphysik im Weltraum, Deutsches Zentrum für Luft- und Raumfahrt, 51170 Köln, Germany; Department of Physics, Ludwig-Maximilians-Universität, Schellingstraße 4, 80799 Munich, Germany. Email address: [email protected]
**Postal address: Karlsruhe Institute of Technology, Department of Mathematics, Englerstr. 2, 76128 Karlsruhe, Germany. Email address: [email protected]

Abstract

We determine almost sure limits of rescaled intrinsic volumes of the construction steps of fractal percolation in ${\mathbb R}^d$ for any dimension $d\geq 1$. We observe a factorization of these limit variables which allows one, in particular, to determine their expectations and covariance structure. We also show the convergence of the rescaled expectations and variances of the intrinsic volumes of the construction steps to the expectations and variances of the limit variables, and we give rates for this convergence in some cases. These results significantly extend our previous work, which addressed only limits of expectations of intrinsic volumes.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $p\in(0,1]$ and $M\in{\mathbb N}$ , $M\geq 2$ . Divide the unit cube $J\;:\!=\;[0,1]^d$ of ${\mathbb R}^d$ into $M^d$ subcubes of side length $1/M$ . Keep each of these subcubes independently with probability p and discard it otherwise. Then repeat this construction independently for each of the retained cubes, producing a random collection of cubes of side length $1/M^2$ , and so on. For $n\in{\mathbb N}$ , denote by $F_n$ the union of the cubes retained in the nth step of this construction. The sets $F_n$ form a decreasing sequence of random compact subsets of J, and its limit

(1.1) \begin{align} F\;:\!=\; \bigcap_{n\in{\mathbb N}_0} F_{n}\end{align}

is known as fractal percolation or Mandelbrot percolation; cf. e.g. [Reference Chayes, Chayes and Durrett10, Reference Mandelbrot20]. It is easily seen that for any $p<1$ , F has a positive probability of being empty, and it is well known that F is in fact almost surely empty if p is too small, i.e., if $p\leq 1/M^d$ . For $p>1/M^d$ , however, there is a positive probability (depending on p, M, and d) that $F\neq \emptyset$ , and conditioned on F being non-empty, the Hausdorff dimension and equally the Minkowski dimension of F are almost surely given by the number

(1.2) \begin{align} D\;:\!=\;\frac{\log(M^d p)}{\log(M)};\end{align}

see e.g. [Reference Chayes, Chayes and Durrett10]. Many properties of this simple model have been studied, including e.g. its connectivity [Reference Broman and Camia5, Reference Broman and Camia6, Reference Chayes, Chayes and Durrett10], its visibility (or behavior under projections) [Reference Arhosalo1, Reference Rams and Simon26, Reference Rams and Simon27], its porosity [Reference Berlinkov and Järvenpää3, Reference Chen, Ojala, Rossi and Suomala12], path properties [Reference Broman, Camia, Joosten and Meester7, Reference Chayes11], and very recently its (un-)rectifiability [Reference Buczolich8].
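The subdivision-and-retention procedure described above translates directly into a short simulation. The following sketch generates the construction steps $F_1,\ldots,F_n$ in ${\mathbb R}^d$; the function name and the integer-coordinate encoding of the basic cubes are our conventions, not from the construction itself:

```python
import itertools
import random

def fractal_percolation_steps(M, d, p, n, seed=0):
    """Generate the construction steps F_1, ..., F_n of fractal percolation.

    A retained cube of level k is encoded by its integer coordinates
    (i_1, ..., i_d) in {0, ..., M^k - 1}^d; it has side length M^{-k}.
    """
    rng = random.Random(seed)
    levels = []
    cubes = [(0,) * d]                      # level 0: the unit cube J
    for _ in range(n):
        next_cubes = []
        for c in cubes:
            # subdivide into M^d subcubes, retain each independently w.p. p
            for offset in itertools.product(range(M), repeat=d):
                if rng.random() < p:
                    next_cubes.append(tuple(M * ci + oi
                                            for ci, oi in zip(c, offset)))
        cubes = next_cubes
        levels.append(cubes)
    return levels
```

By construction the $F_n$ are nested: every retained cube of level $k$ sits inside a retained cube of level $k-1$.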

Intrinsic volumes are a basic tool in stochastic geometry and other fields of mathematics, providing essential geometric information about complex structures. We refer to [Reference Schneider28, Chapter 4] or [Reference Schneider and Weil29, Chapter 14.2] for their definition and properties; see also the summary in our previous paper [Reference Klatt and Winter18, pp. 1087–1088]. Let $V_k(F_n)$ , $k=0,1,\ldots, d$ , denote the intrinsic volumes of the random set $F_n$ in ${\mathbb R}^d$ . Note that they are well defined, since each $F_n$ is a finite union of cubes and thus polyconvex almost surely. In [Reference Klatt and Winter18], we studied the expected intrinsic volumes ${\mathbb E} V_k(F_n)$ of the construction steps $F_n$ and in particular their limiting behavior as $n\to\infty$ . More precisely, we established the existence of the limit functionals

$$ \overline{\mathcal V}_k(F)\;:\!=\;\lim_{n\to\infty} M^{n(k-D)} {\mathbb E} V_k(F_{n}) $$

and derived formulas to compute them; see also Theorem 2.1 below. Moreover, we investigated possible connections of these functionals with percolation thresholds; see [Reference Klatt and Winter18].

In this article we continue our investigation of the random variables $V_k(F_n)$ and their limiting behavior as $n\to\infty$ . We are interested in the question of which further properties of these random geometric functionals, beyond convergence of expectations, can be derived. Here we will prove in particular that the random variables $V_k(F_n)$ , when properly rescaled, converge almost surely, as $n\to\infty$ , to some nontrivial limit, and we will determine the expectations and covariances of the limit variables. Moreover, for $k=d$ and $k=d-1$ (i.e., for volume and surface area), we derive expansions for the expectations and variances of the functionals $V_k(F_n)$ of the nth approximations. This will enable us not only to show convergence of these expectations and variances (when suitably rescaled) as $n\to\infty$ to the expectation and variance of the limit variable, but also to obtain the rates for this convergence.

Outline. In the next two sections we will formulate our main results. The almost sure convergence of the rescaled intrinsic volumes and some consequences are discussed in Section 2, while some additional results regarding the finite approximations are addressed in Section 3. In order to prepare for the proofs, in Section 4 we review random iterated function systems as well as Nerman’s renewal theorem for branching random walks. Sections 5–7 are devoted to the proofs. In Section 5 the main result of Section 2 is proved, and in Section 6 the expectation and variance of the volume of the finite approximations $F_n$ are discussed. Finally, in Section 7, the expectation and variance of the surface area of $F_n$ are addressed.

2. Almost sure convergence of rescaled intrinsic volumes

Let F be a fractal percolation on $J=[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Recall that the nth construction step $F_n$ is a random union of cubes of side length $1/M^n$ , which we will call the basic cubes of level n in the sequel. Let us first focus on the (random) numbers $N_n$ of basic cubes of level n contained in $F_n$ . For convenience, we also set $F_0\;:\!=\;J$ to be the unit cube and $N_0\;:\!=\;1$ . The sequence of random variables $N_n$ , $n\in{\mathbb N}_0$ , forms a Galton–Watson process with a binomial offspring distribution with parameters $M^d$ and p. Indeed, by the assumed independence in the subdivision-and-retention procedure, the number of preserved subcubes of any existing cube of any level is a sum of $M^d$ independent Bernoulli random variables with parameter p, and is thus $\text{Bin}(M^d,p)$ -distributed. In particular, we have $N_1\sim\text{Bin}(M^d,p)$ , and hence ${\mathbb E} N_1=M^dp$ and $\mathrm{Var}(N_1)=M^dp(1-p)$ . From this, one can easily deduce (see e.g. [Reference Athreya and Ney2, Chapter I.A.2]) that the mean and variance of $N_n$ are given by ${\mathbb E} N_n=({\mathbb E} N_1)^n=(M^dp)^n$ and

(2.1) \begin{align} \mathrm{Var}(N_n)&=\frac{\mathrm{Var}(N_1)}{{\mathbb E} N_1-1} ({\mathbb E} N_1)^{n-1}(({\mathbb E} N_1)^n-1) \notag\\[5pt] &=\frac{1-p}{M^dp-1} \big(M^dp\big)^n\big((M^dp)^n-1\big),\end{align}

provided ${\mathbb E} N_1\neq 1$ , and $\mathrm{Var}(N_n)=n\cdot \mathrm{Var}(N_1)$ in the case ${\mathbb E} N_1=1$ , i.e. $p=1/M^d$ .
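The closed form (2.1) can be cross-checked against the law-of-total-variance recursion for a Galton–Watson process, $\mathrm{Var}(N_k)=({\mathbb E} N_1)^2\,\mathrm{Var}(N_{k-1})+({\mathbb E} N_1)^{k-1}\,\mathrm{Var}(N_1)$. A minimal sketch (function names are ours):

```python
def var_Nn_closed(M, d, p, n):
    m = M**d * p                    # E[N_1]
    v = M**d * p * (1 - p)          # Var(N_1) for a Bin(M^d, p) offspring law
    if m == 1.0:
        return n * v                # critical case p = 1/M^d
    return v / (m - 1) * m**(n - 1) * (m**n - 1)    # formula (2.1)

def var_Nn_recursion(M, d, p, n):
    # law of total variance: Var(N_k) = m^2 Var(N_{k-1}) + m^{k-1} Var(N_1)
    m = M**d * p
    v = M**d * p * (1 - p)
    var = 0.0
    for k in range(1, n + 1):
        var = m**2 * var + m**(k - 1) * v
    return var
```

Both functions agree for all parameter choices, including the critical case $p=1/M^d$.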

Furthermore, it is well known (see e.g. [Reference Athreya and Ney2, Theorem I.6.1, p. 9]) that the sequence $W_n\;:\!=\;N_n/{\mathbb E} N_n$ , $n\in{\mathbb N}$ , is a martingale with respect to the filtration naturally induced by the construction steps of the process (i.e., the sequence $(\mathcal{F}_n)_n$ , where $\mathcal{F}_n$ is the $\sigma$ -algebra generated by $N_0,N_1,\ldots,N_n$ ). Since $W_n\geq 0$ , this implies that there exists a random variable $W_\infty$ such that

(2.2) \begin{align}\lim_{n\to\infty} W_n=W_\infty \qquad \text{ almost surely. }\end{align}

Moreover, for $p>M^{-d}$ , we have ${\mathbb E} N_1>1$ (and $\mathrm{Var}(N_1)<\infty$ ), and therefore [Reference Athreya and Ney2, Theorem I.6.2] implies

(2.3) \begin{align} {\mathbb E} W_\infty=1 \quad \text{ and } \quad \mathrm{Var}(W_\infty)=\frac{\mathrm{Var}(N_1)}{({\mathbb E} N_1)^2-{\mathbb E} N_1}=\frac{1-p}{M^dp-1}.\end{align}
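The limit in (2.3) is easy to see numerically: since ${\mathbb E} N_n=(M^dp)^n$, one has $\mathrm{Var}(W_n)=\mathrm{Var}(N_n)/({\mathbb E} N_n)^2$, which converges to $(1-p)/(M^dp-1)$ as $n\to\infty$. A sketch (function names are ours):

```python
def var_Wn(M, d, p, n):
    # Var(W_n) = Var(N_n) / (E[N_n])^2 with E[N_n] = (M^d p)^n, using (2.1)
    m = M**d * p
    v = M**d * p * (1 - p)
    var_Nn = v / (m - 1) * m**(n - 1) * (m**n - 1)
    return var_Nn / m**(2 * n)

def var_W_limit(M, d, p):
    return (1 - p) / (M**d * p - 1)     # formula (2.3)
```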

In [Reference Klatt and Winter18], we studied the limiting behavior of expected intrinsic volumes ${\mathbb E} V_k(F_n)$ of the construction steps $F_n$ , as $n\to\infty$ , and found that these functionals converge for all parameter combinations M, p when suitably rescaled. We recall the main result concerning this convergence. Denote the basic cubes of level 1 by $J_1,\ldots, J_{M^d}$ , and for each $j\in\Sigma\;:\!=\;\{1,\ldots,M^d\}$ and each $n\in{\mathbb N}$ , let $F_n^j$ be the union of those basic cubes of level n that are contained in the union $F_n$ and subcubes of $J_j$ ; see (4.5) for a formal definition.

Theorem 2.1. ([Reference Klatt and Winter18, Theorem 1.1].) Let F be a fractal percolation on $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Let D be as in (1.2). Then, for each $k\in\{0,\ldots,d\}$ , the limit

$$ \overline{\mathcal V}_k(F)\;:\!=\;\lim_{n\to\infty} M^{n(k-D)} {\mathbb E} V_k(F_{n}) $$

exists and is given by the expression

(2.4) \begin{align} V_k([0,1]^d) +\sum_{T\subset\Sigma,|T|\geq 2}(\!-\!1)^{|T|-1} \sum_{n=1}^\infty M^{n(k-D)} {\mathbb E} V_k\Bigg(\bigcap_{j\in T} F^j_{n}\Bigg). \end{align}

Thus, the expectations ${\mathbb E} V_k(F_n)$ of the random variables $V_k(F_n)$ converge as $n\to\infty$ when rescaled by the sequence $M^{n(k-D)}$. It is natural to ask whether the random variables $V_k(F_n)$ themselves converge as well. In view of the convergence behavior of their expectations, it is natural to rescale them in the same way. Therefore, we define the random variables

\begin{align*} Z^k_n\;:\!=\;M^{(k-D)n} V_k(F_n), \qquad n\in{\mathbb N}, \quad k\in\{0,1,\ldots,d\}.\end{align*}

The following statement is our main result. It establishes for each $k\in\{0,\ldots,d\}$ the almost sure convergence of the sequence $(Z^k_n)$ as $n\to\infty$ for all parameter combinations (p, M) for which a nontrivial limit set F exists, i.e. whenever $p>M^{-d}$ .

Theorem 2.2. Let F be a fractal percolation on $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(M^{-d},1]$ . Then, for each $k\in\{0,\ldots,d\}$ , the random variables $Z^k_n$ converge almost surely, as $n\to\infty$ , to some random variable $Z^k_\infty$ . Moreover, $Z^k_\infty$ is given by

(2.5) \begin{align} Z^k_\infty=\overline{\mathcal V}_k(F)\cdot W_\infty; \end{align}

i.e., $Z^k_\infty$ factorizes into a deterministic part $\overline{\mathcal V}_k(F)$ , given by Theorem 2.1, and a random part $W_\infty$ , which is the martingale limit given by (2.2).

The factorization of the limit variable $Z^k_\infty$ in (2.5) separates the probabilistic effects from the geometric information. The random variations in the limit set F depend only on the underlying Galton–Watson process, i.e., on the number of cubes that survive, but not on their positions or mutual intersections. Such geometric information is captured by the purely deterministic factors $\overline{\mathcal V}_k(F)$ , which depend on k and describe some almost sure geometric property of the limit set F.

Fortunately, the expectation and variance of $W_\infty$ are well known; see (2.3). This allows us to derive from the above statement the complete covariance structure of the random variables $Z^0_\infty, Z^1_\infty,\ldots,Z^d_\infty$ .

Corollary 2.1. For each $k\in\{0,\ldots,d\}$ , the limit variable $Z^k_\infty$ has mean $\overline{\mathcal V}_k(F)$ and positive and finite variance given by

(2.6) \begin{align} \mathrm{Var}(Z^k_\infty)=(\overline{\mathcal V}_k(F))^2\cdot \mathrm{Var}(W_\infty)= (\overline{\mathcal V}_k(F))^2\cdot\frac{1-p}{M^d p-1}. \end{align}

Moreover, for any $k,\ell\in\{0,\ldots,d\}$ ,

(2.7) \begin{align} \mathrm{Cov}(Z^k_\infty,Z^\ell_\infty)=\overline{\mathcal V}_k(F)\cdot \overline{\mathcal V}_{\ell}(F)\cdot\mathrm{Var}(W_\infty)= \overline{\mathcal V}_k(F)\cdot \overline{\mathcal V}_{\ell}(F)\cdot\frac{1-p}{M^d p-1}. \end{align}

Proof. This follows immediately from combining (2.5) with (2.3).

For plots of the variances and covariances of $Z^k_{\infty}$ , see Figure 1. The proof of Theorem 2.2 is given in Section 5 below. We will employ a renewal theorem for branching random walks due to Nerman [Reference Nerman23], which has previously been used in the setting of random self-similar sets by Gatzouras [Reference Gatzouras15] and Zähle [Reference Zähle30]. In these papers the functionals studied depend on a continuous parameter (the radius of a parallel set grown around the limit set F) which is sent to zero, while in our case the approximation of F is by a discrete sequence, for which we derive below a variant of the renewal theorem; cf. Proposition 4.1. While a factorization similar to that of Theorem 2.2 is observed in these previous papers, our somewhat simpler setting allows for explicit computation of the limits. Note also that we do not need any additional integrability or regularity conditions, such as the ones imposed e.g. in Zähle [Reference Zähle30].

Figure 1. Semi-logarithmic plots of the variances and the absolute values of the covariances of $Z^k_{\infty}$ in ${\mathbb R}^2$ as functions of p for different values of $M\leq 20$ (indicated by the color scale); see (2.6) and (2.7). For $M\to\infty$ , all variances and covariances converge to zero.

Remark 2.1. From the factorization in Theorem 2.2 it is clear that further progress concerning the distributional properties of the limit variables $Z^k_\infty$ depends solely on understanding the distribution of $W_\infty$ , as the expectations $\overline{\mathcal V}_k(F)$ are known already; cf. Theorem 2.1 and [Reference Klatt and Winter18] for more explicit expressions in ${\mathbb R}^2$ and ${\mathbb R}^3$ . It is clear that ${\mathbb P}(W_\infty=0)={\mathbb P}(F=\emptyset)$ is strictly positive for all $p\neq 1$ , meaning that the distribution of $W_\infty$ (and thus of the $Z^k_\infty$ ) has an atom at 0. From the more advanced theory of branching processes, it follows that the distribution of $W_\infty$ is absolutely continuous on the open interval $(0,\infty)$ ; see [Reference Athreya and Ney2, Corollary I.12.1].

Remark 2.2. In [Reference Klatt and Winter18] we addressed the question of whether percolation thresholds can be approximated by the roots or critical points of mean intrinsic volumes $\overline{\mathcal V}_k(F_p)$ (as functions of p). The question was motivated by observations in [Reference Klatt, Schröder-Turk and Mecke17, Reference Neher, Mecke and Wagner22] for many classical percolation models. Similar questions have been asked for second moments. Indeed, Last and Ochsenreither [Reference Last and Ochsenreither19] observed that the variance of the expected Euler characteristic of Poisson–Voronoi percolation in the plane has a maximum at $\frac 12$ , which equals the percolation threshold in this model. Simulations in [Reference Klatt, Schröder-Turk and Mecke17] suggest that for various variants of continuum percolation in ${\mathbb R}^2$ (Boolean models of rectangles), some minima of the variance of the Euler characteristic and of the covariance of the Euler characteristic and perimeter are good approximations for the percolation thresholds in these models. For fractal percolation in the plane we observe that (for any M) the local maximum of the variance of the expected Euler characteristic (as a function of p) may provide a rather close lower bound for the (unknown) percolation threshold $p_c(M)$ , but we have not tried to prove this. Recall from [Reference Chayes and Chayes9] that, as $M\to\infty$ , $p_c(M)\searrow p_{c,NN}$ , the threshold of nearest-neighbor site percolation. A comparison to the value $p_{c,NN}\approx 0.59274621(13)$ provided by simulations (see e.g. [Reference Newman and Ziff24]) indicates that for large M this local maximum could be a tight lower bound for $p_c(M)$ ; cf. Figure 2. 
However, there are at least two arguments contrary to such a close relation: (1) the observed factorization suggests that there is no additional geometric information in the second moments; (2) as discussed at the end of Section 2 in [Reference Klatt and Winter18] (based on a result in [Reference Broman, Camia, Joosten and Meester7]), the dust (i.e., the points not contained in large clusters) may have a dominant influence on the functionals $\overline{\mathcal V}_k(F_p)$ . That is, it is possible that in the supercritical regime these functionals do not see the backbone but only the dust.

Figure 2. Semi-logarithmic plots of the variance of $Z^0_{\infty}$ rescaled by $M^2$ as functions of p for $M=2$ (left), $M=3$ (center), and $M=1000$ (right). The shaded areas for $M=2$ and $M=3$ indicate rigorously known bounds on the percolation threshold [Reference Don13], as in [Reference Klatt and Winter18, Figure 4]. The vertical line for $M=1000$ indicates an empirical estimate of $p_{c,NN}$ provided by simulations [Reference Newman and Ziff24].

3. Additional results: analysis of the finite approximations

A priori it is not clear whether the almost sure convergence $Z^k_n\to Z^k_\infty$ obtained in Theorem 2.2 implies the convergence of the expectations ${\mathbb E} Z^k_n$ to the expectation ${\mathbb E} Z^k_\infty$ of the limit variable, as $n\to\infty$ , or whether any higher moments converge. However, for the expectations, this convergence can easily be deduced from the above results. Indeed, by Theorem 2.1, ${\mathbb E} Z^k_n\to \overline{\mathcal V}_k(F)$ , and by Corollary 6.1, the latter number equals ${\mathbb E} Z^k_\infty$ . In [Reference Klatt and Winter18] we observed that the convergence, as $n\to\infty$ , of the rescaled expected intrinsic volumes $M^{(k-D)n} {\mathbb E} V_k(F_n)=Z^k_n$ to the limit functionals $\overline{\mathcal V}_k(F)$ is very fast; more precisely, their difference decays exponentially. In [Reference Klatt and Winter18, Remark 5.2] we explicitly determined the speed of convergence in some cases, which can now be interpreted as the speed of the convergence ${\mathbb E} Z^k_n\to {\mathbb E} Z^k_\infty$ . For $d=2$ and $k=0$ , for instance, one has ${\mathbb E} Z^0_n-{\mathbb E} Z^0_\infty\sim c (p/M)^n$ as $n\to\infty$ (where c is explicitly known). But what about higher moments? In Corollary 6.1 we have obtained the variances of the limit variables $Z^k_{\infty}$ . But do the variances $\mathrm{Var}(Z^k_n)$ converge to $\mathrm{Var}(Z^k_\infty)$ ? We provide here some explicit expressions for the variances of the nth approximations for some of the functionals, namely for volume and surface area. They allow us to deduce that the variances of the rescaled functionals $Z^k_n=M^{(k-D)n} V_k(F_n)$ also converge exponentially fast to the variance of the limit variable, and we provide explicit rates. We start with the volume.

Theorem 3.1. Let F be a fractal percolation on $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Then, for each $n\in{\mathbb N}$ , ${\mathbb E} V_d(F_n)=p^{n}$ and

(3.1) \begin{align} \mathrm{Var}(V_d(F_n))=\begin{cases} \frac{1-p}{M^dp-1} \Big(p^{2n}-\Big(\frac{p}{M^d}\Big)^n\Big),& p\neq M^{-d},\\[5pt] (1-p)\cdot n\cdot p^{2n},& p= M^{-d}. \end{cases} \end{align}

Observe that, for any $n\in{\mathbb N}$ , the variance in (3.1) is continuous in p even at the value $p=M^{-d}$ . Indeed, for any p it can be written as

$$\mathrm{Var}(V_d(F_n))=p^{2n}(1-p) \sum_{i=0}^{n-1} (M^dp)^{i-n}.$$
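The two representations are easily compared numerically. The following sketch (function names are ours) checks that formula (3.1) and the sum representation agree, including at the critical value $p=M^{-d}$:

```python
def var_volume_closed(M, d, p, n):
    m = M**d * p
    if m == 1.0:
        return (1 - p) * n * p**(2 * n)     # case p = M^{-d}
    return (1 - p) / (m - 1) * (p**(2 * n) - (p / M**d)**n)   # formula (3.1)

def var_volume_sum(M, d, p, n):
    # representation valid for every p; exhibits continuity at p = M^{-d}
    m = M**d * p
    return p**(2 * n) * (1 - p) * sum(m**(i - n) for i in range(n))
```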

From Theorem 3.1 we can deduce the following rates of convergence for the expectation and variance of the rescaled volume $Z^d_n=M^{(d-D)n} V_d(F_n)$ to the expectation and variance, respectively, of the limit $Z^d_\infty$ .

Corollary 3.1. If $p>M^{-d}$ , then, as $n\to\infty$ , ${\mathbb E} Z^d_n =1 \to 1={\mathbb E} Z^d_\infty$ and

\begin{align*} \mathrm{Var}(Z^d_\infty) - \mathrm{Var}(Z^d_n) = \frac{1-p}{M^dp-1} (M^dp)^{-n}\to 0. \end{align*}

If $p\leq M^{-d}$ , then $Z^d_n\to 0$ almost surely, as $n\to\infty$ , while ${\mathbb E} Z^d_n=1\to 1\neq 0={\mathbb E} Z^d_n$ and $\mathrm{Var}(Z^d_n)\to\infty$ .

Proof. First, let $p>M^{-d}$ . Since $M^D=M^dp$ , we infer from Theorem 3.1 that for any $n\in{\mathbb N}$ , ${\mathbb E} Z_n^d=p^{-n}{\mathbb E} V_d(F_n)=1$ and

$$ \mathrm{Var}(Z^d_n)=\mathrm{Var}(p^{-n} V_d(F_n))=p^{-2n} \mathrm{Var}(V_d(F_n))=\frac{1-p}{M^dp-1} \Big(1-\Big(M^dp\Big)^{-n}\Big). $$

Moreover, by Theorem 2.1, we have $\overline{\mathcal V}_d(F)=1$ , and thus we conclude from Corollary 6.1 and (2.3) that ${\mathbb E} Z^d_\infty=\overline{\mathcal V}_d(F) {\mathbb E} W_\infty=1$ and $\mathrm{Var}(Z^d_\infty)=\frac{1-p}{M^dp-1}$ . From this the asserted convergence of the expectation and variance are obvious.

If $p\leq M^{-d}$ , then $F=\emptyset$ almost surely. Moreover, it is clear from the definition of F that $F(\omega)=\emptyset$ , for some $\omega\in\Omega$ , if and only if there is some $m=m(\omega)\in{\mathbb N}$ such that $F_m(\omega)=\emptyset$ , and in this case one has $Z^d_n(\omega)\to 0$ as $n\to\infty$ . Hence $Z_n^d\to 0$ almost surely as $n\to \infty$ , as stated. Now the last assertions are obvious from (3.1).

In the terminology of numerical analysis, Corollary 3.1 says that the variances converge linearly with rate $(M^dp)^{-1}<1$ whenever $p>M^{-d}$ . This corresponds to our observations in simulations of the variables $Z_n^d$ for different parameters p and M. In Section 6 we will provide two proofs of Theorem 3.1. The first one uses a standard branching process argument, based on the observation that the volume of $F_n$ in this model is directly related to the number of offspring in the nth generation of the associated Galton–Watson process. The second one uses a recursion argument, which generalizes to other intrinsic volumes and is a warm-up for the more involved discussion of the surface area.
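The observation behind the first proof can be illustrated directly: $V_d(F_n)=N_n M^{-dn}$, and since $M^{d-D}=1/p$, the rescaled volume $Z^d_n$ coincides exactly with the martingale $W_n$ of Section 2. A simulation sketch (the helper name is ours):

```python
import random

def simulate_Nn(M, d, p, n, rng):
    # number of retained cubes after n steps; offspring law is Bin(M^d, p)
    N = 1
    for _ in range(n):
        N = sum(1 for _ in range(N * M**d) if rng.random() < p)
    return N

M, d, p, n = 2, 2, 0.8, 6
N = simulate_Nn(M, d, p, n, random.Random(1))
volume = N * M**(-d * n)     # V_d(F_n): N_n basic cubes of volume M^{-dn}
Z = p**(-n) * volume         # Z^d_n = M^{(d-D)n} V_d(F_n), since M^{d-D} = 1/p
W = N / (M**d * p)**n        # W_n = N_n / E[N_n]
```

In every realization $Z^d_n=W_n$ holds, which is the factorization (2.5) for $k=d$ with $\overline{\mathcal V}_d(F)=1$.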

For the surface area of $F_n$ , we obtain the following explicit expressions. Here we concentrate on the case $p>M^{-d}$ , when the limit set F is nontrivial. Recall that $V_{d-1}$ is, in fact, half the surface area.

Theorem 3.2. Let F be a fractal percolation on $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Then, for each $n\in{\mathbb N}$ ,

(3.2) \begin{align} {\mathbb E} V_{d-1}(F_n)&= \bar c_1 \cdot (Mp)^n \bigg(1+\frac{M-1}{1-p}\bigg(\frac pM\bigg)^{n+1}\bigg), \end{align}

where $\bar c_1\;:\!=\;\frac {dM(1-p)}{M-p}=\overline{\mathcal V}_{d-1}(F)$ . Moreover, if $p>M^{-d}$ , then, as $n\to\infty$ ,

(3.3) \begin{align} \mathrm{Var}(V_{d-1}(F_n))=&\bar c_2\cdot (Mp)^{2n} + \begin{cases} O((Mp^3)^n) & \textit{if } p^2>1/M^{d-1},\\[5pt] O((Mp^3)^n \cdot n) & \textit{if } p^2=1/M^{d-1},\\[5pt] O(\left(\frac{p}{M^{d-2}}\right)^n) & \textit{if } p^2<1/M^{d-1}, \end{cases} \end{align}

where $\bar c_2\;:\!=\;\bar c_1^2\cdot \frac{1-p}{M^dp-1}=(\overline{\mathcal V}_{d-1}(F))^2 \mathrm{Var}(W_\infty)$ .
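As a consistency check (ours, not from the proof), formula (3.2) can be verified exactly for $d=1$, $M=2$, $n=2$ by enumerating all $2^6$ retention patterns; in $d=1$, $V_0(F_n)$ is the number of connected components of the union of retained intervals. The formula is evaluated in the expanded form ${\mathbb E} V_{d-1}(F_n)=\bar c_1 (Mp)^n + \frac{d(M-1)}{M-p}\,p^{2n+1}$, which avoids the removable singularity at $p=1$:

```python
import itertools
import math

def expected_V0_enumeration(p):
    # d = 1, M = 2, n = 2: exact expectation of the number of connected
    # components of F_2, by enumerating all 2^6 retention patterns
    total = 0.0
    for bits in itertools.product((0, 1), repeat=6):
        b1, b2, c11, c12, c21, c22 = bits
        w = math.prod(p if b else 1 - p for b in bits)
        # level-2 intervals [i/4, (i+1)/4], i = 0..3; kept iff parent and child
        kept = [b1 and c11, b1 and c12, b2 and c21, b2 and c22]
        # adjacent kept intervals share an endpoint, so components = runs
        runs = sum(1 for i, k in enumerate(kept)
                   if k and (i == 0 or not kept[i - 1]))
        total += w * runs
    return total

def expected_V0_formula(p, M=2, n=2, d=1):
    # formula (3.2), expanded to avoid the removable singularity at p = 1
    return (d * M * (1 - p) / (M - p) * (M * p)**n
            + d * (M - 1) / (M - p) * p**(2 * n + 1))
```

Both functions reduce to $4p^2-2p^3-p^4$, as one can also check by inclusion–exclusion over adjacent pairs.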

Corollary 3.2. Let $p>M^{-d}$ . Then, for any $n\in{\mathbb N}$ ,

\begin{align*} {\mathbb E} Z^{d-1}_n - {\mathbb E} Z^{d-1}_\infty = \frac {dp(M-1)}{M-p} \bigg(\frac pM\bigg)^{n},\end{align*}

and as $n\to\infty$ ,

\begin{align*} \mathrm{Var}(Z^{d-1}_\infty) - \mathrm{Var}(Z^{d-1}_n) = \begin{cases} O\left(\left(\frac pM\right)^n\right) & \textit{if } p^2>1/M^{d-1},\\[5pt] O\left(\left(\frac pM\right)^n \cdot n\right) & \textit{if } p^2=1/M^{d-1},\\[5pt] O\left(\left(M^{d}p\right)^{-n}\right) & \textit{if } p^2<1/M^{d-1}. \end{cases} \end{align*}

In particular, $\mathrm{Var}(Z^{d-1}_n) \to \mathrm{Var}(Z^{d-1}_\infty)$ as $n\to\infty$ .

Proof. Since $M^{d-1-D}=(Mp)^{-1}$ , we infer from Theorem 3.2 that

$$ {\mathbb E} Z^{d-1}_n=(Mp)^{-n}{\mathbb E} V_{d-1}(F_n)=\bar c_1 \bigg(1+\frac{M-1}{1-p}\bigg(\frac pM\bigg)^{n+1}\bigg) $$

for any $n\in{\mathbb N}$ . As noted above, Theorem 2.1 and Corollary 6.1 imply that ${\mathbb E} Z^{d-1}_n \to {\mathbb E} Z^{d-1}_\infty$ , as $n\to\infty$ , for any $p>M^{-d}$ , and therefore we conclude that ${\mathbb E} Z^{d-1}_\infty=\bar c_1$ (which can also be deduced directly from Theorem 2.1). The stated formula for the difference ${\mathbb E} Z^{d-1}_n - {\mathbb E} Z^{d-1}_\infty$ is now obvious. Moreover, from (3.3) we infer that

\begin{align*} \mathrm{Var}(Z^{d-1}_n) &=(Mp)^{-2n} \mathrm{Var}(V_{d-1}(F_n)) =\bar c_2 + \begin{cases} O\left(\left(\frac pM\right)^n\right) & \text{if } p^2>1/M^{d-1},\\[5pt] O\left(\left(\frac pM\right)^n \cdot n\right) & \text{if } p^2=1/M^{d-1},\\[5pt] O\left(\left(M^{d}p\right)^{-n}\right) & \text{if } p^2<1/M^{d-1}, \end{cases} \end{align*}

which implies in particular that $\mathrm{Var}(Z^{d-1}_n)\to \bar c_2$ as $n\to\infty$ . From Corollary 6.1 we see that $\mathrm{Var}(Z^{d-1}_\infty)=(\overline{\mathcal V}_{d-1}(F))^2\cdot \mathrm{Var}(W_\infty)=\bar c_2$ . Hence $\mathrm{Var}(Z^{d-1}_n)\to \mathrm{Var}(Z^{d-1}_\infty)$ as $n\to\infty$ , and the stated order of convergence for the difference follows at once.

4. Random iterated function systems and a renewal theorem for branching random walks

Fractal percolation in ${\mathbb R}^d$ can be viewed as a random self-similar set, i.e., as a random compact set generated by a random iterated function system (RIFS) $\mathcal{S}$ .

The general definition of such RIFSs is as follows; cf. [Reference Falconer14, Reference Graf16, Reference Mauldin and Williams21]. Let $J\subset{\mathbb R}^d$ be a compact set such that $J=\overline{\textsf{int}(J)}$ , and let $\textsf{Sim}$ be some family of contracting similarity mappings on J. Then an RIFS $\mathcal{S}$ on J is defined to be a random subset of $\textsf{Sim}$ which is almost surely finite and satisfies the uniform open set condition with respect to $\textsf{int}(J)$ . That is, there exist a random variable $\nu$ with values in ${\mathbb N}_0=\{0\}\cup{\mathbb N}$ , and random elements $\Phi_i\in\textsf{Sim}$ , $i=1,\ldots,\nu$ , such that $\mathcal{S}=\{\Phi_1,\Phi_2,\ldots, \Phi_\nu\}$ if $\nu>0$ , and $\mathcal{S}\;:\!=\;\emptyset$ if $\nu=0$ . Moreover, $\mathcal{S}$ is said to satisfy the uniform open set condition (UOSC) with respect to $\textsf{int}(J)$ if

(4.1) \begin{align} \bigcup_{i=1}^\nu \Phi_i(\textsf{int}(J))\subset \textsf{int}(J) \quad \text{ and } \quad \Phi_i(\textsf{int}(J))\cap \Phi_j(\textsf{int}(J))=\emptyset, \quad i\neq j,\end{align}

holds with probability 1. Additionally, we assume throughout that $\nu$ satisfies

\begin{align*} 0<{\mathbb E}\nu<\infty.\end{align*}

Given an RIFS $\mathcal{S}$ , a random fractal set F can be associated to it by constructing a Galton–Watson tree on the code space ${\mathbb N}^*\;:\!=\;\bigcup_{n=0}^\infty {\mathbb N}^n$ of all finite words with letters in ${\mathbb N}$ . Here ${\mathbb N}^0\;:\!=\;\{\varepsilon\}$ , where $\varepsilon$ is the empty word. For any word $\sigma\in{\mathbb N}^n$ , $|\sigma|\;:\!=\;n$ will denote its length. Moreover, for $\sigma,\tau\in{\mathbb N}^*$ , we write $\sigma\tau$ for the concatenation of $\sigma$ and $\tau$ . For each $\sigma\in{\mathbb N}^*$ , let $\mathcal{S}_\sigma$ be a copy of the RIFS $\mathcal{S}$ generated in its own probability space $(\Omega_\sigma,\mathcal{A}_\sigma,{\mathbb P}_\sigma)$ . Let $(\Omega,\mathcal{A},{\mathbb P})\;:\!=\;\bigotimes_{\sigma\in{\mathbb N}^*} (\Omega_\sigma,\mathcal{A}_\sigma,{\mathbb P}_\sigma)$ be the common probability space in which all these RIFSs are independent. Recall that $\mathcal{S}_\sigma$ contains a random number $\nu_\sigma$ of maps. To distinguish them, let $I_\sigma\subseteq {\mathbb N}$ be a set of indices with cardinality $|I_\sigma|=\nu_\sigma$ . It is convenient to denote the maps in $\mathcal{S}_\sigma$ by $\Phi_{\sigma i}$ , $i\in I_\sigma$ . Note that $\nu_\sigma$ may be 0, in which case $I_\sigma=\emptyset$ . (In general, one could choose $I_\sigma=\{1,\ldots,\nu_\sigma\}$ here without loss of generality, but later in the case of fractal percolation it will be much more convenient to use different index sets.) We build a random tree $\mathcal{T}$ in ${\mathbb N}^*$ as follows: set $\mathcal{T}_0\;:\!=\;\{\varepsilon\}$ , and for $n\in{\mathbb N}_0$ , define $\mathcal{T}_{n+1}\;:\!=\;\emptyset$ if $\mathcal{T}_n=\emptyset$ , and

$$\mathcal{T}_{n+1}\;:\!=\;\{\sigma i\;:\; \sigma\in\mathcal{T}_n, i\in I_\sigma\}$$

if $\mathcal{T}_n\neq\emptyset$ . Finally, let

$$\mathcal{T}_*\;:\!=\;\bigcup_{n=0}^\infty \mathcal{T}_n$$

be the vertex set of the tree $\mathcal{T}$ , and define the edge set by

$$E(\mathcal{T})\;:\!=\;\{(\sigma,\sigma i)\;:\;\sigma\in\mathcal{T}_*, i\in I_\sigma\}.$$

The tree $\mathcal{T}$ can be interpreted as the population tree of a Galton–Watson process in which $\mathcal{T}_n$ represents the nth generation and $\sigma i\in\mathcal{T}_{n+1}$ , $i\in I_\sigma$ , are the descendants of an individual $\sigma\in\mathcal{T}_n$ . The distribution of $\nu$ is called the offspring distribution of the process. For any finite word $\sigma\in{\mathbb N}^n$ and any $k\in{\mathbb N}$ , $k\leq n$ , write $\sigma|k$ for the word consisting of the first k letters of $\sigma$ . With this notation, the self-similar random set F associated with the RIFS $\mathcal{S}$ is defined by

(4.2) \begin{align} F\;:\!=\;\bigcap_{n=1}^\infty \bigcup_{\sigma\in\mathcal{T}_n} J_\sigma,\end{align}

where

(4.3) \begin{align} J_\sigma\;:\!=\;\overline{\Phi}_\sigma(J)\;:\!=\;\Phi_{\sigma|1}\circ \Phi_{\sigma|2}\circ\ldots\circ\Phi_{\sigma|n-1}\circ \Phi_\sigma(J)\end{align}

are the cylinder sets of the construction.

Fractal percolation as a random self-similar set. Before we continue the general discussion of RIFS, let us briefly indicate how fractal percolation F on $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ fits into this setting. Choose $J\;:\!=\;[0,1]^d$ as the basic set. Recall the basic cubes $J_1,\ldots,J_{M^d}$ of side length $1/M$ into which J is subdivided in the first step of the construction of F. Let $\textsf{Sim}'\;:\!=\;\{\varphi_1,\ldots, \varphi_{M^d}\}$ , where, for $j=1,\ldots, M^d$ , $\varphi_j$ is the similarity which maps J to $J_j$ (rotation- and reflection-free, for simplicity and uniqueness). Let $\mathcal{S}$ be the random subset of $\textsf{Sim}'$ such that each of the maps $\varphi_j$ is contained in $\mathcal{S}$ with probability p independently of all the other maps in $\textsf{Sim}'$ . It is obvious that $\mathcal{S}$ satisfies the UOSC with respect to the interior of J. Indeed, $\mathcal{S}$ is a random subset of $\textsf{Sim}'$ , and any subset of $\textsf{Sim}'$ satisfies the condition (4.1) with respect to $\textsf{int}(J)$ . Now fractal percolation with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ is the random self-similar set F (defined by (4.2)) generated by this particular RIFS $\mathcal{S}$ . Note that $\mathcal{S}$ contains at most $M^d$ maps. One can therefore reduce the code space to $\Sigma^*\;:\!=\;\bigcup_{n\in{\mathbb N}_0} \Sigma^n$ , where $\Sigma\;:\!=\;\{1,2,\ldots,M^d\}$ . If we choose the index set $I_\sigma$ to contain exactly those indices $i\in\Sigma$ for which $\varphi_i\in \mathcal{S}_\sigma$ , and if we set $\Phi_{\sigma i}\;:\!=\;\varphi_i$ , then the cylinder sets $J_\sigma$ , $\sigma=\sigma_1\ldots\sigma_n\in \Sigma^n$ , defined in (4.3) are more conveniently given by

(4.4) \begin{align} J_\sigma\;:\!=\;\varphi_{\sigma_1}\circ \ldots\circ\varphi_{\sigma_n}(J).\end{align}

They correspond to the basic cubes of level n used in the previous sections. Note also that for $n=1$ this is consistent with the above notation $J_j$ , $j\in\Sigma$ , for the first-level cubes. In the language of the tree and the associated sets considered above, the construction steps $F_{n}$ , $n\in{\mathbb N}$ , of the fractal percolation process are given by

$$F_{n}=\bigcup_{\sigma\in\mathcal{T}_n} J_\sigma.$$

For each $j\in\Sigma$ and each $n\in{\mathbb N}$ , the random set $F_n^j$ introduced just before Theorem 2.1 is then given by

(4.5) \begin{align} F_n^j=\bigcup_{{\sigma\in\mathcal{T}_n},{\sigma|1=j}} J_\sigma.\end{align}
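As an aside for readers who wish to experiment, the encoding of $F_n$ by the surviving words $\sigma\in\mathcal{T}_n$ translates directly into a short simulation. The following Python sketch is purely illustrative (the function name is ours, not from any library): starting from the empty word, each surviving cube independently retains each of its $M^d$ subcubes with probability p.

```python
import random

def percolation_words(M, d, p, n, rng=None):
    """Return the words sigma in T_n encoding the level-n cubes of F_n.

    Each word is a tuple of length n with entries in {0, ..., M^d - 1},
    encoding the cube J_sigma from (4.4). Starting from the empty word
    (the unit cube J), every surviving cube independently keeps each of
    its M^d subcubes with probability p.
    """
    rng = rng or random.Random(0)
    Sigma = range(M ** d)
    words = [()]  # level 0: the unit cube J itself
    for _ in range(n):
        words = [w + (i,) for w in words for i in Sigma if rng.random() < p]
    return words

# With p = 1 nothing is ever discarded, so |T_n| = M^{dn}:
assert len(percolation_words(M=2, d=2, p=1.0, n=3)) == 2 ** (2 * 3)
```

For $p<1$ the number of returned words is one realization of the Galton–Watson variable $N_n$ from Section 2.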

A branching random walk associated with F . Let us go back to general RIFSs. To any random self-similar set F, a branching random walk $\{S_\sigma\;:\; \sigma\in{\mathbb N}^*\}$ can naturally be associated. It controls the size of the cylinder sets of F (i.e., of the basic cubes in the case of fractal percolation), by keeping track of the contraction ratios applied during the construction. In general, it is defined recursively by setting $S_\varepsilon\;:\!=\;0$ and, for $n\in{\mathbb N}$ and any $\sigma=\sigma_1\ldots\sigma_n\in\mathcal{T}_n$ ,

$$S_{\sigma}\;:\!=\;S_{\sigma|n-1}+\log r^{-1}_{\sigma},$$

where $r_\sigma$ is the contraction ratio of the similarity $\Phi_\sigma$ . It is convenient to set $S_{\sigma}\;:\!=\;\infty$ for $\sigma\in {\mathbb N}^*\setminus\mathcal{T}$ . Then $\{S_\sigma\;:\; \sigma\in{\mathbb N}^*\}$ is a branching random walk with positive step sizes. In the case of fractal percolation F in ${\mathbb R}^d$ with subdivision parameter $M\geq 2$ , all contraction ratios (of all $\Phi_\sigma$ , $\sigma\in\mathcal{T})$ equal $1/M$ . Therefore, we have in this case, for any $n\in{\mathbb N}$ ,

$$S_\sigma=\begin{cases} n\cdot \log M, & \sigma\in\mathcal{T}_n,\\[5pt] +\infty & \text{otherwise. }\end{cases}$$

Let $\xi$ be the random measure on ${\mathbb R}$ defined by

(4.6) \begin{align} \xi(B)\;:\!=\;\sum_{\sigma\in\mathcal{T}_1} \textbf{1}_B(S_\sigma)\end{align}

for any Borel set $B\subset{\mathbb R}$ , and let $\mu\;:\!=\;{\mathbb E}[\xi(\!\cdot\!)]$ be the intensity measure of $\xi$ . The random measure $\xi$ is called lattice if $\mu$ is concentrated on $\lambda {\mathbb Z}$ for some $\lambda>0$ (we also say $\xi$ is lattice with lattice constant $\lambda$ in this case), and $\xi$ is called nonlattice otherwise. Observe that in the case of fractal percolation with subdivision parameter M, the intensity measure $\mu$ is concentrated on the value $\log M$ , and so $\xi$ is clearly lattice with lattice constant $\lambda=\log M$ .

For a general RIFS $\mathcal{S}$ , recall that $\nu=\#\mathcal{S}=\#\mathcal{T}_1$ . Assume from now on that

$$1<{\mathbb E}\nu<\infty.$$

Denote by $D\in{\mathbb R}$ the number determined uniquely by the equation

$${\mathbb E}\Bigg(\sum_{i=1}^\nu r_i^D\Bigg)=1.$$

Here $r_i$ denotes the contraction ratio of the mapping $\Phi_i$ in $\mathcal{S}$ . It is well known that conditioned on $F\neq\emptyset$ the Hausdorff and Minkowski dimensions of F equal D almost surely; cf. [Reference Falconer14, Reference Graf16, Reference Mauldin and Williams21, Reference Patzschke25]. Note that in the case of fractal percolation in ${\mathbb R}^d$ with parameters M and p, the assumption ${\mathbb E}\nu>1$ is equivalent to the condition $p>M^{-d}$ , and the above D equals the number given in (1.2). Furthermore, we set

$$m(D)\;:\!=\;{\mathbb E}\Bigg(\sum_{i\in\mathcal{T}_1} |\log r_i| r_i^D\Bigg).$$
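For fractal percolation, where all contraction ratios equal $1/M$ and $\nu\sim\text{Bin}(M^d,p)$, the defining equation reduces to $M^dp\,M^{-D}=1$, i.e. $D=d+\log p/\log M$, and $m(D)=\log M$. A quick numerical sanity check (illustrative only; the function name is ours):

```python
import math

def percolation_dimension(M, d, p):
    """Solve E[sum_i r_i^D] = M^d * p * M^(-D) = 1 for D."""
    return d + math.log(p) / math.log(M)

D = percolation_dimension(M=3, d=2, p=0.5)
# D satisfies the defining equation of the almost sure dimension:
assert abs(3 ** 2 * 0.5 * 3 ** (-D) - 1.0) < 1e-12
```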

To the branching random walk $\{S_\sigma\;:\; \sigma\in{\mathbb N}^*\}$ we can associate the following nonnegative martingale $(W_n)_{n\in{\mathbb N}_0}$ , which in the case of fractal percolation specializes to the one defined after Equation (2.1):

(4.7) \begin{align} W_n\;:\!=\; \sum_{\tau\in\mathcal{T}_n} e^{-D S_\tau}, \quad n\in{\mathbb N}_0.\end{align}

Indeed, for fractal percolation with parameters M and p, we have $S_\sigma=n \log M$ for $\sigma\in\mathcal{T}_n$ , and $|\mathcal{T}_n|=N_n$ , so that $W_n = \sum_{\tau\in\mathcal{T}_n} M^{-D n} = (M^dp)^{-n}N_n = N_n/{\mathbb E} N_n$ .
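The martingale property ${\mathbb E} W_n=1$ is easy to probe by simulation. The sketch below (illustrative only; function names are ours) grows the Galton–Watson process $(N_n)$ with $\text{Bin}(M^d,p)$ offspring and returns $W_n=N_n/(M^dp)^n$.

```python
import random

def simulate_W(M, d, p, n, rng):
    """One realization of W_n = N_n / (M^d p)^n for fractal percolation;
    (N_n) is a Galton-Watson process with Bin(M^d, p) offspring."""
    N = 1
    for _ in range(n):
        # each of the N surviving cubes keeps each of its M^d subcubes w.p. p
        N = sum(1 for _ in range(N * M ** d) if rng.random() < p)
    return N / (M ** d * p) ** n

rng = random.Random(42)
samples = [simulate_W(M=2, d=2, p=0.7, n=5, rng=rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
assert abs(mean - 1.0) < 0.05  # E[W_n] = 1 for every n
```

For $p=1$ the process is deterministic and $W_n=1$ exactly; for $p<1$ the sample mean fluctuates around 1.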

Also, in general, $(W_n)_n$ is a nonnegative martingale (with respect to the filtration $(\hat{\mathcal{F}}_n)_n$ , where $\hat{\mathcal{F}}_n$ is the product $\sigma$ -algebra generated by the $\mathcal{F}_\sigma$ with $|\sigma|\leq n$ ) with ${\mathbb E} W_n=1$ for each $n\in{\mathbb N}$ , and by the martingale convergence theorem, the almost sure limit

(4.8) \begin{align} W_\infty\;:\!=\;\lim_{n\to\infty} W_n\end{align}

is well defined. Moreover, by Biggins’s theorem [Reference Biggins4] (see also [Reference Gatzouras15, Theorem 3.3]), $W_\infty$ is nontrivial, i.e., ${\mathbb P}(W_\infty=0)<1$ , if and only if ${\mathbb E}[W_1 \log^+W_1]<\infty$ . (Note that this condition is satisfied for fractal percolation for all parameters such that $p>M^{-d}$ .)

In [Reference Gatzouras15], a renewal theorem for branching processes $Z_t$ , $t\in{\mathbb R}$ , associated to the random walk $S=\{S_\sigma\;:\; \sigma\in{\mathbb N}^*\}$ is formulated, which is based on and extends a result of Nerman [Reference Nerman23]. First we recall the theorem from [Reference Gatzouras15]; see also [Reference Zähle30]. Then we reformulate this statement in order to apply it to some discrete processes.

Recall that the underlying probability space is $(\Omega,{\mathcal{F}},{\mathbb P}) =\prod_{\sigma\in{\mathbb N}^*} (\Omega_\sigma,{\mathcal{F}}_\sigma,{\mathbb P}_\sigma)$ , where the components $(\Omega_\sigma,{\mathcal{F}}_\sigma,{\mathbb P}_\sigma)$ are identical and each of them generates an RIFS which, loosely speaking, determines the offspring of $F_\sigma$ , i.e. the sets $F_{\sigma i}$ , $i=1,\ldots,\nu_\sigma$ . The tree $\mathcal{T}_*$ induces a tree structure on the elements of $\Omega$ , and it is easy to see that any subtree rooted at some $\tau\in\mathcal{T}_*$ and containing all words starting with $\tau$ is equivalent in distribution to the full tree $\mathcal{T}_*$ . Using only the components of $\omega\in\Omega$ determined by this subtree, we can generate a random self-similar set $F^{(\tau)}$ which is equal in distribution to F. More formally, define for each $\tau\in{\mathbb N}^*$ the shift operator $\theta_\tau\;:\;\Omega\to\Omega$ by

\begin{align*} (\theta_\tau\omega)_\sigma\;:\!=\; \omega_{\tau\sigma}.\end{align*}

Since the images $\theta_\tau(\omega)$ are again in $\Omega$ , we can define for any random variable X on $\Omega$ (taking values in some arbitrary space E) a whole family of random variables $\{X^{(\tau)}\;:\; \tau\in{\mathbb N}^*\}$ by $X^{(\tau)}(\omega)\;:\!=\;X(\theta_\tau\omega)$ , $\omega\in\Omega$ . They have the property that $X^{(\tau)}\overset{d}{=} X$ for any $\tau\in{\mathbb N}^*$ . In particular, we define the random set $F^{(\tau)}$ by

(4.9) \begin{align} F^{(\tau)}(\omega)\;:\!=\;F(\theta_\tau\omega), \qquad \omega\in\Omega.\end{align}

It satisfies $F^{(\tau)}\overset{d}{=} F$ for any $\tau\in{\mathbb N}^*$ . Similarly, we define for any stochastic process $Y\;:\!=\;\{Y_t\;:\; t\in{\mathbb R}\}$ on $(\Omega,{\mathcal{F}},{\mathbb P})$ a family of independent and identically distributed (i.i.d.) copies of Y by $Y_t^{(\tau)}(\omega)\;:\!=\; Y_t(\theta_\tau\omega)$ , $\omega\in\Omega, \tau\in{\mathbb N}^*$ . Then the branching process associated with S and Y is defined by

(4.10) \begin{align} Z_t\;:\!=\;\sum_{\sigma\in\mathcal{T}_*} Y^{(\sigma)}_{t-S_\sigma}.\end{align}

Now we are ready to recall the relevant part of the Nerman–Gatzouras renewal theorem.

Theorem 4.1. ([Reference Gatzouras15, Theorem 3.4, lattice case].)

Let $\{Y_t\;:\; t\in{\mathbb R}\}$ be a stochastic process on $(\Omega,{\mathcal{F}},{\mathbb P})(=\prod_{\sigma\in{\mathbb N}^*} (\Omega_\sigma,{\mathcal{F}}_\sigma,{\mathbb P}_\sigma))$ , which is continuous almost everywhere with probability 1 and takes values in the space of functions which vanish on $(\!-\!\infty,0)$ . Assume there exists a non-increasing and integrable function $h\;:\;[0,\infty)\to(0,\infty)$ such that

(4.11) \begin{align} {\mathbb E}\Bigg[\sup_{t\geq 0} \frac{e^{-Dt}|Y_t|}{h(t)}\Bigg]<\infty.\end{align}

Assume further that the random measure $\xi$ (defined in (4.6)) is lattice with lattice constant $\lambda>0$ , and let $s\in[0,\lambda)$ . Then, as $n\to\infty$ , almost surely

\begin{align*} e^{-Dn\lambda} Z_{s+\lambda n}= e^{-Dn\lambda} \sum_{\sigma\in\mathcal{T}_*} Y^{(\sigma)}_{s+n\lambda-S_\sigma}\longrightarrow \frac{\lambda W_\infty}{m(D)} \sum_{n=0}^\infty e^{-Dn\lambda}{\mathbb E}[Y_{n\lambda+s}].\end{align*}

There is also a corresponding statement for the nonlattice case, which we omit here as we will not use it. The following discrete version of the previous theorem will be the main tool in the proof of our main result Theorem 2.2. We only formulate it for the special case that the random measure $\xi$ is concentrated on a single value $\lambda>0$ , which is the only case we need here. The statement can easily be generalized to any lattice random self-similar set (but the formulas are not as neat). Note that the assumption means that all contraction ratios are the same and equal $1/\Lambda=e^{-\lambda}$ almost surely. It implies in particular that $D= \log {\mathbb E}[\nu]/\log \Lambda$ and $m(D)=\lambda$ .

Proposition 4.1. Let $\{Y_n\;:\; n\in{\mathbb N}\}$ be a (discrete-time) stochastic process on $(\Omega,{\mathcal{F}},{\mathbb P})$ . Assume the random measure $\xi$ is almost surely concentrated on some $\lambda>0$ , and set $\Lambda\;:\!=\;e^\lambda$ . Suppose there exists a non-increasing and summable sequence $(h_n)_{n\in{\mathbb N}}$ such that

(4.12) \begin{align} {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{\Lambda^{-Dn}|Y_n|}{h_n}\Bigg]<\infty.\end{align}

Then almost surely, as $n\to\infty$ ,

(4.13) \begin{align} \Lambda^{-Dn} \sum_{\sigma\in\mathcal{T}_*} Y^{(\sigma)}_{n-|\sigma|}\to {W_\infty} \sum_{n=0}^\infty \Lambda^{-Dn}{\mathbb E}[Y_{n}].\end{align}

Proof. With the intention of applying Theorem 4.1, we consider the process $\tilde{Y}$ defined by $\tilde{Y}_t\;:\!=\;Y_n$ for $t\in I_n\;:\!=\;[n\log \Lambda, (n+1)\log \Lambda)$ , $n\in{\mathbb N}_0$ , and $\tilde{Y}_t\;:\!=\;0$ for $t< 0$ . Note that for any $t\in I_n$ ,

$$ e^{-Dt} |\tilde{Y}_t|\leq e^{-Dn\log \Lambda} |Y_n|=\Lambda^{-Dn} |Y_n|. $$

Define the function $h\;:\;[0,\infty)\to(0,\infty)$ by $h(t)\;:\!=\;h_n$ for $t\in I_n$ , $n\in{\mathbb N}$ . The assumed properties of the sequence $(h_n)$ imply that h is non-increasing and integrable. Moreover, for any $t\geq 0$ we have

\begin{align*} \frac{e^{-Dt}|\tilde{Y}_t|}{h(t)}\leq \frac{\Lambda^{-Dn}|{Y}_n|}{h_n},\end{align*}

and thus (for any realization $\omega\in\Omega$ )

\begin{align*} \sup_{t\geq 0}\frac{e^{-Dt}|\tilde{Y}_t|}{h(t)}=\sup_{n\in{\mathbb N}_0}\sup_{t\in I_n}\frac{e^{-Dt}|\tilde{Y}_t|}{h(t)}\leq \sup_{n\in{\mathbb N}_0}\frac{\Lambda^{-Dn}|{Y}_n|}{h_n}.\end{align*}

Since, by assumption, the random variable on the right-hand side has a finite expectation, so has the one on the left-hand side. Thus, the process $\tilde{Y}$ satisfies the condition (4.11) in Theorem 4.1, and we can apply this theorem to $\tilde{Y}$ . (Note also that $\tilde Y$ vanishes on $(\!-\!\infty,0)$ and is continuous almost everywhere with probability 1.) Recalling that $\lambda=\log \Lambda$ , we conclude in particular that (for $s=0$ ) the rescaled sum

$$e^{-Dn\lambda} \sum_{\sigma\in\mathcal{T}_*} \tilde{Y}^{(\sigma)}_{n\lambda-S_\sigma}=\Lambda^{-Dn} \sum_{\sigma\in\mathcal{T}_*} \tilde{Y}^{(\sigma)}_{(n-|\sigma|)\log \Lambda}=\Lambda^{-Dn} \sum_{\sigma\in\mathcal{T}_*} Y^{(\sigma)}_{n-|\sigma|}$$

converges almost surely, as $n\to \infty$ , to the random variable

\begin{align*}\frac{\lambda W_\infty}{m(D)} \sum_{n=0}^\infty e^{-Dn\lambda}{\mathbb E}[\tilde{Y}_{n\lambda}]= W_\infty \sum_{n=0}^\infty \Lambda^{-Dn}{\mathbb E}[Y_{n}]. \end{align*}

5. Proof of Theorem 2.2

We introduce some further notation. For $\tau\in \mathcal{T}_*$ , recall from (4.9) the definition of the random set $F^{(\tau)}$ and the fact that it satisfies $F^{(\tau)}\overset{d}{=} F$ . Furthermore, we will write $F^{[\tau]}\;:\!=\;\overline{\Phi}_\tau(F^{(\tau)})$ for the corresponding scaled copy of $F^{(\tau)}$ in F. We use $F^{(\tau)}_n$ for the nth approximation of $F^{(\tau)}$ , in particular $F^{(\tau)}_0=J$ , and similarly, for any $\tau\in\mathcal{T}_*$ and $n\geq |\tau|$ , we let $F^{[\tau]}_n\;:\!=\;\overline{\Phi}_\tau(F^{(\tau)}_{n-|\tau|})$ . To see the relationship to the notation $F^j_n$ introduced in (4.5), let $F^\tau_n\;:\!=\;\bigcup_{\sigma\in\mathcal{T}_n, \sigma||\tau|=\tau} J_\sigma$ for $n\geq|\tau|$ . Then $F^{\tau}_n=F^{[\tau]}_n\cap \tilde J_\tau$ , where $\tilde J_\tau$ is the random set which equals the cube $J_\tau$ provided $\tau\in\mathcal{T}_*$ and is empty otherwise. Note that in particular

(5.1) \begin{align} F_n=\bigcup_{j=1}^{M^d} F_n^j=\bigcup_{j\in\mathcal{T}_1} F_n^{[j]}.\end{align}

Now we are ready to provide a proof of Theorem 2.2.

Proof of Theorem 2.2. Fix $k\in\{0,\ldots,d\}$ . In order to express the functionals $Z^k_n$ in the form of the left-hand side of (4.13), we set $\widehat{Z}_n\;:\!=\;M^{nk} V_k(F_n)$ . We will also write $\widehat{Z}^{(\sigma)}_n\;:\!=\;M^{nk} V_k(F^{(\sigma)}_n)$ for the corresponding functionals of the shifted random set $F^{(\sigma)}$ , $\sigma\in\mathcal{T}_*$ . (To simplify the notation, we suppress the dependence on k here.) Observe that, by the inclusion–exclusion principle, we can decompose the random variables $\widehat{Z}_n$ for any $n\in{\mathbb N}$ as follows:

\begin{align*} \widehat{Z}_n&=M^{nk} V_k(F_n)=M^{nk} V_k\Bigg(\bigcup_{j\in\mathcal{T}_1} {F}_n^{[j]}\Bigg)\\[5pt] &= M^{nk}\sum_{j\in\mathcal{T}_1} V_k(F_n^{[j]})+M^{nk}\sum_{T\subset{\mathcal{T}}_1, |T|\geq 2} (\!-\!1)^{|T|-1} V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg).\end{align*}

For any $n\in{\mathbb N}$ , we define $Y_n$ to be the second of the two summands above, and we set $Y_0\;:\!=\;0$ (which is consistent). Using the relation $V_k(F_n^{[j]})=M^{-k}V_k(F^{(j)}_{n-1})$ (recall that $F^{[j]}_n=\varphi_j(F^{(j)}_{n-1})$ , where $\varphi_j$ has contraction ratio $1/M$ , and that $V_k$ is homogeneous of order k), we infer that

\begin{align*} \widehat{Z}_n&=\sum_{j\in\mathcal{T}_1} M^{(n-1)k} V_k(F^{(j)}_{n-1})+ Y_n=\sum_{j\in\mathcal{T}_1} \widehat{Z}^{(j)}_{n-1}+ Y_n.\end{align*}

Now each of the variables $\widehat{Z}^{(j)}_{n-1}$ , $j\in\mathcal{T}_1$ , can be decomposed in the very same manner, which yields $\widehat{Z}^{(j)}_{n-1}=\sum_{\sigma\in\mathcal{T}_2, \sigma|1=j} \widehat{Z}^{(\sigma)}_{n-2} + Y_{n-1}^{(j)}$ for $n\in{\mathbb N}_{\geq 2}$ . For $n=1$ we simply get $\widehat{Z}^{(j)}_{n-1}=V_k(F^{(j)}_0)=V_k(J)=q_{d,k}$ for any $j\in\mathcal{T}_1$ . Iterating this procedure, we end up with one term $Y^{(\sigma)}_{n-|\sigma|}$ for each finite word $\sigma\in\mathcal{T}_*$ of length $|\sigma|\leq n-1$ , i.e. we obtain

(5.2) \begin{align} \widehat{Z}_n&=\sum_{\sigma\in\mathcal{T}_n} \widehat{Z}^{(\sigma)}_0+\sum_{\sigma\in\mathcal{T}_*, |\sigma|<n} Y^{(\sigma)}_{n-|\sigma|}.\end{align}

Here the first summand simplifies to $q_{d,k}|\mathcal{T}_n|=q_{d,k} N_n$ , since, for each $\sigma\in\mathcal{T}_n$ , $F^{(\sigma)}_0$ equals the unit cube J and so $\widehat{Z}^{(\sigma)}_0= V_k(J)=q_{d,k}$ .

We wish to study the limit of $Z^k_n=M^{-Dn} \widehat{Z}_n$ as $n\to\infty$ , which we can do by studying separately the limits of the two sequences given by the two summands in (5.2). The limiting behavior of the first resulting sequence $(M^{-Dn} q_{d,k}N_n)$ is obvious. Since $M^{Dn}=(M^{d}p)^{n}={\mathbb E} N_n$ , we conclude that

(5.3) \begin{align} M^{-Dn} q_{d,k}N_n=q_{d,k} W_n \to q_{d,k} W_\infty \quad \text{ almost surely, as } n\to\infty.\end{align}

Thus the first summand shows the desired factorization, with the first factor being deterministic and the second one being $W_\infty$ . The second sequence is of a form which allows us to employ Proposition 4.1. In order to do so we need to verify that our process $(Y_n)_n$ satisfies the assumptions of this proposition. More precisely, we need to verify that there is a non-increasing and summable sequence $(h_n)_{n\in{\mathbb N}}$ such that

(5.4) \begin{align} {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{-Dn}|Y_n|}{h_n}\Bigg]<\infty.\end{align}

This can be derived from the following result, Proposition 5.1, whose proof is postponed to the end of the section.

Proposition 5.1. Let F be a fractal percolation in $[0,1]^d$ with parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Then there exists a non-increasing and summable sequence $(h_n)$ such that, for any subset $T\subset\{1,2,\ldots, M^d\}$ with $|T|\geq 2$ ,

(5.5) \begin{align} {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{(k-D)n}}{h_n}\Bigg|V_k\Bigg(\bigcap_{j\in T} F^{[j]}_n\Bigg)\Bigg|\Bigg]<\infty.\end{align}

Let us now show that this statement implies the condition (5.4). First observe that, since $\mathcal{T}_1$ is a random subset of $\Sigma=\{1,\ldots,M^d\}$ , we have for any $n\in{\mathbb N}$

\begin{align*} |Y_n|&=M^{nk}\Bigg|\sum_{T\subset{\mathcal{T}}_1, |T|\geq 2} (\!-\!1)^{|T|-1} V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\\[5pt] &\leq M^{nk}\sum_{T\subset{\mathcal{T}}_1, |T|\geq 2} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg| \leq\sum_{T\subset \Sigma, |T|\geq 2} M^{nk} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|.\end{align*}

Therefore, Proposition 5.1 implies that there is some non-increasing and summable sequence $(h_n)$ such that

\begin{align*} {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{-Dn}}{h_n}|Y_n|\Bigg] &\leq {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{-Dn}}{h_n} \sum_{T\subset \Sigma, |T|\geq 2} M^{nk} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\Bigg]\\[5pt] &\leq {\mathbb E}\Bigg[\sum_{T\subset \Sigma, |T|\geq 2} \sup_{n\in{\mathbb N}} \frac{M^{(k-D)n}}{h_n} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\Bigg]\\[5pt] &\leq \sum_{T\subset \Sigma, |T|\geq 2} {\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{(k-D)n}}{h_n} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\Bigg]<\infty,\end{align*}

verifying the condition (5.4). Hence we can apply Proposition 4.1 to the sequence of the second summands in (5.2). We infer that, as $n\to\infty$ ,

(5.6) \begin{align} M^{-Dn} \sum_{\sigma\in\mathcal{T}_*} Y^{(\sigma)}_{n-|\sigma|}\to {W_\infty} \cdot \sum_{n=1}^\infty M^{-Dn}{\mathbb E}[Y_{n}],\end{align}

where, for any $n\in{\mathbb N}$ ,

\begin{align*} {\mathbb E}[Y_{n}]&=M^{kn}{\mathbb E}\Bigg[\sum_{T\subset{\mathcal{T}}_1, |T|\geq 2} (\!-\!1)^{|T|-1} V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg]\\[5pt] &=M^{kn}{\mathbb E}\Bigg[\sum_{T\subset\Sigma, |T|\geq 2} (\!-\!1)^{|T|-1} V_k\Bigg(\bigcap_{j\in T} F_n^{j}\Bigg)\Bigg].\end{align*}

Here the last equality is due to (5.1). Using this last representation, we get

\begin{align*} \sum_{n=1}^\infty M^{-Dn}{\mathbb E}[Y_{n}] &=\sum_{n=1}^\infty M^{(k-D)n}\sum_{T\subset\Sigma, |T|\geq 2} (\!-\!1)^{|T|-1} {\mathbb E}\Bigg[V_k\Bigg(\bigcap_{j\in T} F_n^{j}\Bigg)\Bigg]\\[5pt] &= \sum_{T\subset\Sigma, |T|\geq 2} (\!-\!1)^{|T|-1} \sum_{n=1}^\infty M^{(k-D)n} {\mathbb E}\Bigg[V_k\Bigg(\bigcap_{j\in T} F_n^{j}\Bigg)\Bigg],\end{align*}

where the interchange of the summations in the last equality is justified as long as all the (finitely many) series in the last expression converge. But this convergence is shown in [Reference Klatt and Winter18, Proposition 5.1] for any $p\in(0,1]$ . Recall that this is an expression for the limit of the second summands in (5.2). Combining it with the limit of the first summands in (5.3), we see by comparison with (2.4) that

\begin{align*} Z^k_\infty= \lim_{n\to\infty} Z_n^k= \overline{\mathcal V}_k(F)\cdot W_\infty \text{ almost surely},\end{align*}

as asserted in Theorem 2.2. This completes the proof.

Proof of Proposition 5.1. First observe that, for any index set $T\subset\Sigma$ , the set $C\;:\!=\;\bigcap_{j\in T} J_j$ is a cube with side length $1/M$ and some dimension $u\leq d-1$ . If $u=0$ the estimate in (5.5) is obviously satisfied. So assume $u\geq 1$ . For any $n\in{\mathbb N}$ , the set $\bigcap_{j\in T} F_n^{[j]}$ is a subset of C and thus at most u-dimensional. Note that in each of the sets $F_n^{[j]}$ , only those level-n cubes which intersect C are relevant for the intersection $\bigcap_{j\in T} F_n^{[j]}$ . This means we can also model the independent random sets $F^{[j]}$ , $j\in T$ , by independent u-dimensional fractal percolations $K^{\{j\}}$ , $j\in T$ , constructed on the cube C (with the same parameters p and M as F). That is, we have $F^{[j]}\cap C\overset{d}{=} K^{\{j\}}$ for each $j\in T$ , and therefore $F_n^{[j]}\cap C\overset{d}{=} K_n^{\{j\}}$ for each $n\in{\mathbb N}$ . In particular, this implies $\bigcap_{j\in T} F_n^{[j]}\overset{d}{=} \bigcap_{j\in T} K_n^{\{j\}}$ . Choose some index $j_1$ from T and denote by $N^u_n$ the number of level-n cubes contained in $K_n^{\{j_1\}}$ . Recall that the sequence $(N^u_n)$ forms a Galton–Watson process with binomial offspring distribution $\text{Bin}(M^u, p)$ . Now we can apply [Reference Klatt and Winter18, Lemma 7.2], according to which there is some constant $c_{u,k}$ (independent of n) such that

\begin{align*} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\overset{d}{=} \Bigg|V_k\Bigg(\bigcap_{j\in T} K_n^{\{j\}}\Bigg)\Bigg|\leq C_k^{\textrm{var}}\Bigg(\bigcap_{j\in T} K_n^{\{j\}}\Bigg) \leq c_{u,k} M^{-kn} N^u_n. \end{align*}

Note that we are dealing here with unions of level-n subcubes of C and not of the unit cube. However, this only changes the constant of the lemma by a factor of $M^{-k}$ , since we can scale the whole situation by a factor of M to take place in a unit-size cube. Multiplying by $M^{(k-D)n}$ and setting $W^{u}_n\;:\!=\;N^{u}_n/{\mathbb E} N^{u}_n$ , where ${\mathbb E} N^{u}_n=({\mathbb E} N^{u}_1)^n= M^{un}p^n$ , we get

\begin{align*} M^{(k-D)n} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\leq c_{u,k} M^{(u-D)n}p^n W^{u}_n= c_{u,k} M^{(u-d)n} W^{u}_n. \end{align*}

Now, choosing e.g. $h_n\;:\!=\;M^{-n/2}$ , $n\in{\mathbb N}$ (which obviously forms a non-increasing and summable sequence), we conclude from the above inequality that

(5.7) \begin{align}{\mathbb E}\Bigg[\sup_{n\in{\mathbb N}} \frac{M^{(k-D)n}}{h_n} \Bigg|V_k\Bigg(\bigcap_{j\in T} F_n^{[j]}\Bigg)\Bigg|\Bigg]\leq c_{u,k} {\mathbb E}\left[\sup_{n\in{\mathbb N}} M^{-\alpha n} W^{u}_n\right], \end{align}

where $\alpha\;:\!=\;d-u-\frac 12$ . Note that $\alpha\geq\frac 12$ , since $u\leq d-1$ . Now observe that the sequence of random variables $X_m\;:\!=\;\sup_{n\in\{1,\ldots,m\}} M^{-\alpha n} W^{u}_n$ , $m\in{\mathbb N}$ , is non-decreasing and converges as $m\to\infty$ to $\sup_{n\in{\mathbb N}} M^{-\alpha n} W^{u}_n$ . Moreover, for any $m\in{\mathbb N}$ , we have

\begin{align*} {\mathbb E}\left[X_m\right]&\leq {\mathbb E}\Bigg[\sum_{\ell=1}^m M^{-\alpha \ell} W^{u}_\ell \Bigg]=\sum_{\ell=1}^m M^{-\alpha \ell} {\mathbb E}\left[W^{u}_\ell\right]=\frac{1-M^{-\alpha m}}{M^{\alpha}-1}\leq \frac{1}{M^{\alpha}-1},\end{align*}

which, by monotone convergence, implies that the right-hand side of (5.7) is bounded by the constant $c_{u,k}/(M^{\alpha}-1)$ , where $\alpha\geq 1/2$ . Finally, note that the chosen sequence $(h_n)$ is independent of the set T, and hence it can be used for all sets $T\subseteq\Sigma$ , $|T|\geq 2$ . Moreover, there are only finitely many choices for the constant $c_{u,k}$ (which depends on T via the dimension u of the resulting cube C). This shows the finiteness of the expectation in (5.5) and completes the proof.

6. Variance of the volume of $F_n$

Theorem 3.1 provides exact expressions for the expectation and variance of the volume of $F_n$ . We discuss two proofs of this result. A standard argument provides a short and elegant proof of the particular case of the volume, which does not generalize to other intrinsic volumes. The second one uses a recursion argument and avoids branching process techniques. It is ultimately based on the inclusion–exclusion principle and provides the idea of how to prove analogous results for the other intrinsic volumes.

First proof of Theorem 3.1. Observe that the volume $V_d(F_n)$ equals the number $N_n$ of cubes in the union $F_n$ times the volume of a single cube of level n, i.e. $V_d(F_n)=M^{-dn} N_n$ , $n\in{\mathbb N}_0$ . Recalling the expectation and variance of $N_n$ from (2.1), we obtain ${\mathbb E}(V_d(F_n))= M^{-dn} {\mathbb E}(N_n) = p^n$ and

\begin{align*} \mathrm{Var}(V_d(F_n))&=M^{-2dn} \mathrm{Var}(N_n)=\begin{cases} \frac{1-p}{M^dp-1} (M^{-d}p)^n\left((M^dp)^n-1\right) & \text{ if } p\neq 1/M^d,\\[5pt] M^{-2dn}\cdot n\cdot (1-p) & \text { if } p=1/M^d, \end{cases}\end{align*}

from which the formula (3.1) follows at once.
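The identity $\mathrm{Var}(V_d(F_n)) = M^{-2dn}\mathrm{Var}(N_n)$ can be cross-checked numerically, assuming the standard branching-process recursion $\mathrm{Var}(N_n)=m^2\,\mathrm{Var}(N_{n-1})+\sigma^2 m^{n-1}$ (law of total variance), with $m=M^dp$ and $\sigma^2=M^dp(1-p)$. The sketch below is illustrative only; the function names are ours.

```python
def var_volume_closed_form(M, d, p, n):
    """Formula (3.1): Var(V_d(F_n)) in the case p != 1/M^d."""
    m = M ** d * p
    return (1 - p) / (m - 1) * (p / M ** d) ** n * (m ** n - 1)

def var_volume_recursive(M, d, p, n):
    """Var(V_d(F_n)) = M^(-2dn) Var(N_n), with Var(N_n) iterated via the
    standard Galton-Watson variance recursion (an assumption from
    branching-process theory, not taken from this paper)."""
    m = M ** d * p
    sigma2 = M ** d * p * (1 - p)
    var_N = 0.0
    for k in range(1, n + 1):
        var_N = m ** 2 * var_N + sigma2 * m ** (k - 1)
    return M ** (-2 * d * n) * var_N

for n in range(1, 6):
    assert abs(var_volume_closed_form(2, 2, 0.6, n)
               - var_volume_recursive(2, 2, 0.6, n)) < 1e-12
```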

For the other intrinsic volumes $V_k(F_n)$ , $k=0,\ldots,d-1$ , a similar argument will not work, since the mutual intersections of the cubes are essential for determining the functionals. Therefore we want to discuss a different proof of Theorem 3.1, which provides an idea of how to proceed in the general case. It is based on simple recursions. We start by introducing some additional notation and some useful observations needed for the proof.

Recall the definition of the code space $\Sigma^*$ and the encoding of the basic cubes $J_\sigma$ from (4.4). Observe that $J_{\sigma \omega}\subset J_\sigma$ for all $\sigma,\omega\in\Sigma^*$ . It will be convenient to consider all basic cubes and not only those whose ancestors survived previous steps of the construction. The construction steps $F_n$ can also be characterized as follows.

For $n\in{\mathbb N}$ and $\sigma\in\Sigma^n$ , let $Y_\sigma$ be the 0–1 random variable which models the decision of whether the subcube $J_\sigma$ is kept in the nth step of the construction of F. Observe that $\{Y_\sigma\;:\;\sigma\in\Sigma^*\}$ is a family of i.i.d. Bernoulli variables with parameter p. For $n\in{\mathbb N}$ , we define the nth layer $L_n$ to be the random set given by

\begin{align*} L_n\;:\!=\;\bigcup_{\sigma\in\Sigma^n, Y_\sigma=1} J_\sigma, \end{align*}

and the nth construction step $F_n$ is then given by

\begin{align*} F_n=\bigcap_{m=1}^n L_m. \end{align*}

For convenience set $F_0\;:\!=\;L_0\;:\!=\;J$ , which is a deterministic set. The sets $F^j_{n}$ (defined in (4.5)) can also be characterized by $F^j_n=\bigcap_{m=1}^n L_m^j$ , where

$$L_m^j\;:\!=\;\bigcup_{\sigma\in\Sigma^m, \sigma|1=j, Y_\sigma=1} J_\sigma.$$

It will also be convenient to have notation for the resulting set in the nth step if we ignore the first step of the construction. To this end, we define for any pair of indices $\ell,n\in{\mathbb N}$ , with $\ell\leq n$ , the sets

\begin{align*} F_{\ell,n}\;:\!=\;\bigcap_{m=\ell}^n L_m \qquad \text{ and } \qquad F_{\ell,n}^j\;:\!=\;\bigcap_{m=\ell}^n L_m^j, \quad j=1,\ldots, M^d. \end{align*}

Note that $F_{1,n}=F_n$ and $F_{n,n}=L_n$ (and similarly, $F_{1,n}^j=F_n^j$ and $F_{n,n}^j=L_n^j$ ). The most important observation is that $F_{2,n}^j$ is a scaled version of $F_{n-1}$ ; that is, for any $j\in\Sigma$ , we have

(6.1) \begin{align} F_{2,n}^j\overset{d}{=}\varphi_j(F_{n-1}). \end{align}

When we study the intrinsic volumes of the $F_n$ , it will be convenient to separate the effect of the first step from the effect of the later steps.

Lemma 6.1. For any $n\in{\mathbb N}$ , $k\in\{0,\ldots,d\}$ , and $j\in\Sigma$ ,

\begin{align*} V_k(F_n^j)\overset{d}{=} M^{-k} V_k(F_{n-1}) \cdot Y, \end{align*}

where Y is a Bernoulli variable with parameter p independent of $F_{n-1}$ . In particular, this implies that

(6.2) \begin{align} {\mathbb E} V_k(F_n^j)& =\frac p {M^{k}} {\mathbb E} V_k(F_{n-1}) \quad \text{ and } \end{align}

(6.3) \begin{align} \mathrm{Var} \big(V_k\big(F_n^j\big)\big)&=\frac{p}{M^{2k}}\left(\mathrm{Var}( V_k(F_{n-1}))+ (1-p)({\mathbb E} V_k(F_{n-1}))^2\right). \end{align}

Proof. On the one hand, by (6.1) and the invariance properties of the intrinsic volumes, we have for any $n\in{\mathbb N}$

(6.4) \begin{align} V_k\big(F_{2,n}^j\big)\overset{d}{=} V_k(\varphi_j(F_{n-1}))=M^{-k} V_k(F_{n-1}). \end{align}

On the other hand,

(6.5) \begin{align} V_k\big(F_{n}^j\big)=V_k\big(L_1^j\cap F_{2,n}^j\big)=V_k\big(F_{2,n}^j\big)\cdot Y_j, \end{align}

where $Y_j$ is the Bernoulli variable which determines whether $J_j$ is kept in the first step. Observe that $Y_j$ is independent of $F_{2,n}^j$ . Combining both equations proves the first assertion. The expressions for the expectation and variance can be deduced from the first equation, taking into account the independence of Y and $F_{2,n}^j$ and recalling that ${\mathbb E} Y={\mathbb E} Y^2=p$ and $\mathrm{Var}(Y)=p(1-p)$ .

Now we are ready to provide an alternative proof of Theorem 3.1 based on a direct recursion, as previously stated.

Second proof of Theorem 3.1. Recall that $F_n=\bigcup_{j=1}^{M^d} F_n^j$ and that from the perspective of volume this union is disjoint. Hence $V_d(F_n)=\sum_j V_d(F_n^j)$ . Observe that this decomposes $V_d(F_n)$ into a sum of independent random variables. According to Lemma 6.1, the sets $F_n^j$ satisfy ${\mathbb E} V_d(F_n^j)= \frac p{M^{d}} {\mathbb E} V_d(F_{n-1})$ , which implies that

$${\mathbb E} V_d(F_n)=\sum_j p M^{-d} {\mathbb E} V_d(F_{n-1})= p \cdot{\mathbb E} V_d(F_{n-1}) $$

for any $n\in{\mathbb N}$ , where ${\mathbb E} V_d(F_0)=1$ . This is a recursion relation which implies immediately that ${\mathbb E} V_d(F_n)=p^n$ . Furthermore, Lemma 6.1 also yields a recursion equation for the variance of $V_d(F_n)$ ,

\begin{align*} \mathrm{Var}(V_d(F_n))&=\mathrm{Var}\Bigg(\sum_{j=1}^{M^d} V_d(F_n^j)\Bigg)=\sum_{j=1}^{M^d}\mathrm{Var}\left(V_d(F_n^j)\right)\\[5pt] &=\sum_{j=1}^{M^d} \frac{p}{M^{2d}}\left(\mathrm{Var}( V_d(F_{n-1}))+ (1-p)({\mathbb E} V_d(F_{n-1}))^2\right)\\[5pt] &= \frac{p}{M^{d}}\cdot \mathrm{Var}( V_d(F_{n-1})) + \frac{1-p}{M^{d}p}\cdot p^{2n}, \end{align*}

where we have used the independence of the $V_d(F_n^j)$ for the second equality. By induction, this leads to

\begin{align*} \mathrm{Var}(V_d(F_n))&=\bigg(\frac{p}{M^{d}}\bigg)^{n} \mathrm{Var}(V_d(F_0))+\frac{1-p}{M^{d}p}\cdot \sum_{\ell=0}^{n-1} \bigg(\frac{p}{M^{d}}\bigg)^{\ell} p^{2(n-\ell)}, \end{align*}

where the first summand vanishes, since $F_0=[0,1]^d$ is deterministic, and the second summand simplifies to the expression stated in Theorem 3.1.
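As a sanity check, the recursion derived in this proof can be iterated numerically and compared with the closed form of Theorem 3.1. This is an illustrative sketch only; the function names are ours.

```python
def var_Vd_recursion(M, d, p, n):
    """Iterate the recursion from the second proof:
    Var(V_d(F_n)) = (p/M^d) Var(V_d(F_{n-1})) + (1-p)/(M^d p) * p^(2n)."""
    var = 0.0  # Var(V_d(F_0)) = 0, since F_0 = J is deterministic
    for k in range(1, n + 1):
        var = (p / M ** d) * var + (1 - p) / (M ** d * p) * p ** (2 * k)
    return var

def var_Vd_theorem(M, d, p, n):
    """Closed form from Theorem 3.1 (case p != 1/M^d)."""
    m = M ** d * p
    return (1 - p) / (m - 1) * (p ** (2 * n) - (p / M ** d) ** n)

# the geometric sum collapses to the stated closed form:
for n in range(1, 8):
    assert abs(var_Vd_recursion(3, 2, 0.4, n) - var_Vd_theorem(3, 2, 0.4, n)) < 1e-12
```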

Remark 6.1. For the variances of the other intrinsic volumes, similar (but more involved) recursions can be formulated. The decomposition $F_n=\bigcup_j F_n^j$ is still useful, but because of the inclusion–exclusion formula, it will lead to additional terms taking care of the intersection structure. Moreover, the functionals $V_k(F_n^j)$ are not independent anymore, and their covariances have to be taken into account. In the last section we work this out for the surface area $V_{d-1}(F_n)$ .

The following statement is a generalization of Theorem 3.1 to the volume of the intersection of several fractal percolations generated independently on the same basic cube J. It will be used in the computations for the surface area, where some of the intersection terms can be interpreted as lower-dimensional volumes.

Corollary 6.1. Let $\ell\in{\mathbb N}$ and let $F^{\{1\}},\ldots, F^{\{\ell\}}$ be independent fractal percolations on $[0,1]^d$ with the same parameters $M\in{\mathbb N}_{\geq 2}$ and $p\in(0,1]$ . Then, for each $n\in{\mathbb N}$ ,

$$ V_d(F_n^{\{1\}}\cap\ldots\cap F_n^{\{\ell\}})\overset{d}{=} V_d(F'_n), $$

where $F'$ is a fractal percolation on $[0,1]^d$ with parameters M and $p^\ell$ and $F'_{\!\!n}$ is its nth construction step. In particular,

\begin{align*} {\mathbb E} V_d(F_n^{\{1\}}\cap\ldots\cap F_n^{\{\ell\}})&=p^{\ell n} \qquad \text{ and }\\[5pt] \mathrm{Var}(V_d(F_n^{\{1\}}\cap\ldots\cap F_n^{\{\ell\}}))&=\begin{cases} \frac{1-p^\ell}{M^dp^\ell-1} \left(p^{2\ell n}-\left(\frac{p^\ell}{M^d}\right)^n\right),& p^\ell\neq M^{-d},\\[5pt] (1-p^\ell)\cdot n\cdot p^{2\ell n},& p^\ell= M^{-d}. \end{cases} \end{align*}

Proof. Let $N'_{\!\!n}$ denote the number of basic cubes of level n contained in the intersection $I_n\;:\!=\;F_n^{\{1\}}\cap\ldots\cap F_n^{\{\ell\}}$. Note that a basic cube $J_\sigma$, $\sigma=\sigma_1\ldots\sigma_n\in\Sigma^n$, of level n is contained in $I_n$ if and only if $J_{\sigma_1\ldots\sigma_{n-1}}$ is contained in $I_{n-1}$ and if, for each $i=1,\ldots,\ell$, $Y_\sigma^{\{i\}}=1$, i.e. $J_\sigma$ survives in $F_n^{\{i\}}$. This implies that for each basic cube of level $n-1$ contained in $I_{n-1}$, the random number of descendants (i.e. of subcubes of level n contained in $I_n$) is $\text{Bin}(M^d, p^\ell)$-distributed, and hence $(N'_{\!\!n})$ forms a Galton–Watson process that is equivalent to the one associated to a fractal percolation $F'$ on $[0,1]^d$ with parameters M and $p^\ell$. Now the assertion follows from the fact that $V_d(I_n)$ equals $N'_{\!\!n}$ times $M^{-dn}$, the volume of a single basic cube of level n, which is also the volume $V_d(F'_n)$. The formulas for the expectation and variance then follow directly from Theorem 3.1.
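
The Galton–Watson argument can be cross-checked numerically: computing the mean and variance of $V_d(F'_n)$ from the standard Galton–Watson moment formulas must reproduce the closed forms of Corollary 6.1. The following sketch (ours, not part of the article; function names are hypothetical) does this in exact rational arithmetic:

```python
from fractions import Fraction as Fr

def gw_moments(M, d, q, n):
    """Mean and variance of V_d(F'_n) for a fractal percolation with
    retention probability q: V_d(F'_n) = N'_n * M**(-d*n), where (N'_n)
    is a Galton-Watson process with Bin(M**d, q) offspring."""
    m = M**d * q                  # offspring mean (assumed != 1)
    s2 = M**d * q * (1 - q)       # offspring variance
    mean_N = m**n
    var_N = s2 * m**(n - 1) * (m**n - 1) / (m - 1)
    scale = Fr(1, M**(d * n))     # volume of one basic cube of level n
    return mean_N * scale, var_N * scale**2

def corollary_moments(M, d, p, ell, n):
    """Closed-form mean and variance from Corollary 6.1 (case p**ell != M**(-d))."""
    q = p**ell
    mean = q**n
    var = (1 - q) / (M**d * q - 1) * (q**(2 * n) - (q / M**d)**n)
    return mean, var
```

For instance, with $M=2$, $d=2$, $p=3/4$, and $\ell=2$, both computations agree exactly for every n.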

7. Expectation and variance of the surface area of $F_n$

For computing the expectation and variance of the surface area of the nth step $F_n$ of fractal percolation, one has to study those intersections $F_n^j\cap F_n^\ell$ for which $J_j$ and $J_\ell$ are direct neighbors. We call two basic cubes of level n direct neighbors if they are distinct and share a common facet. For the computations the following statement will be helpful. It will be combined below with Corollary 6.1.

Lemma 7.1. For any $n\in{\mathbb N}$ , $k\in\{0,\ldots,d\}$ , and $j,\ell\in\Sigma$ such that $J_j$ and $J_\ell$ are direct neighbors,

\begin{align*} V_k\big(F_n^j\cap F_n^\ell\big)\overset{d}{=} M^{-k} V_k\Big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\Big) \cdot Y, \end{align*}

where $K^{\{1\}}$ and $K^{\{2\}}$ are independent fractal percolations in $[0,1]^{d-1}$ (with the same parameters M and p as F), $K_{n}^{\{1\}}$ and $K_{n}^{\{2\}}$ are their nth construction steps, and Y is a Bernoulli variable with parameter $p^2$ independent of $K^{\{1\}}$ , $K^{\{2\}}$ .

In particular, one obtains that

\begin{align*} {\mathbb E} V_k\big(F_n^j\cap F_n^\ell\big)& =\frac{p^2}{M^{k}} {\mathbb E} V_k\big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\big) \quad \text{ and } \\[5pt] \mathrm{Var} \big(V_k\big(F_n^j\cap F_n^\ell\big)\big)&=\frac{p^2}{M^{2k}}\left(\mathrm{Var}\big( V_k\big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\big)\big)\right.\\[5pt] &\left.\qquad\qquad + (1-p^2)\big({\mathbb E} V_k\big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\big)\big)^2\right). \end{align*}

Proof. First observe that, similarly as in the proof of Lemma 6.1,

(7.1) \begin{align} V_k\big(F_{n}^j\cap F_n^\ell\big)=V_k\big(L_1^j\cap F_{2,n}^j\cap L_1^\ell\cap F_{2,n}^\ell\big)=V_k\big(F_{2,n}^j\cap F_{2,n}^\ell\big)\cdot Y_j\cdot Y_\ell, \end{align}

where $Y_j$ and $Y_\ell$ are the Bernoulli variables which determine whether $J_j$ and $J_\ell$ , respectively, are kept in the first step. Note that $Y_j, Y_\ell, F_{2,n}^j, F_{2,n}^\ell$ are independent and that the product $Y_j\cdot Y_\ell$ is a Bernoulli variable with parameter $p^2$ . Now observe that, by (6.1), and with $K^{\{i\}}$ as in the statement, for any $n\in{\mathbb N}$ , $F_{2,n}^j\cap F_{2,n}^\ell\overset{d}{=} \varphi(K^{\{1\}}_{n-1}\cap K^{\{2\}}_{n-1})$ , where $\varphi\;:\;{\mathbb R}^{d-1}\to {\mathbb R}^d$ is the similarity with contraction ratio $1/M$ mapping $[0,1]^{d-1}$ to $J_j\cap J_\ell$ . This implies in particular that

(7.2) \begin{align} V_k\big(F_{2,n}^j\cap F_{2,n}^\ell\big)\overset{d}{=} V_k\big(\varphi\big(K^{\{1\}}_{n-1}\cap K^{\{2\}}_{n-1}\big)\big)=M^{-k} V_k\big(K^{\{1\}}_{n-1}\cap K^{\{2\}}_{n-1}\big). \end{align}

Combining both equations proves the first assertion. The expressions for the expectation and variance follow easily.

Now we have all the ingredients needed to start the proof of Theorem 3.2.

Proof of Theorem 3.2. Fix $n\in{\mathbb N}$ . For $j,\ell\in\Sigma$ , let $X_n\;:\!=\;V_{d-1}(F_n)$ , $X_n^{j}\;:\!=\;V_{d-1}(F_n^j)$ , and

\begin{align*} X_n^{j,\ell}\;:\!=\;V_{d-1}(F_n^j\cap F_n^\ell). \end{align*}

Note that all these random variables are nonnegative. The variables $X_n^j$, $j\in\Sigma$, are i.i.d. Note also that $X_n^{j,\ell}=0$ whenever the corresponding cubes $J_j$ and $J_\ell$ are not direct neighbors. By the inclusion–exclusion principle, we have

(7.3) \begin{align} X_n&=\sum_{j\in\Sigma} X_n^j - \sum_{\{j,\ell\}\subset\Sigma} X_n^{j,\ell}. \end{align}

For the expectation this implies

\begin{align*} {\mathbb E}(X_n)&=\sum_{j\in\Sigma} {\mathbb E}\big(X_n^j\big) - \sum_{\{i,\ell\}\subset\Sigma} {\mathbb E}\big(X_n^{i,\ell}\big). \end{align*}

By Lemma 6.1, the first sum simplifies to $Mp\, {\mathbb E}(X_{n-1})$. In the second sum it is enough to consider those pairs $\{i,\ell\}$ for which the corresponding cubes $J_i$ and $J_\ell$ are direct neighbors; all other summands are zero. There are $d (M-1) M^{d-1}$ such pairs in dimension d. For direct neighbors, Lemma 7.1 applies, which in combination with Corollary 6.1 yields

(7.4) \begin{align} {\mathbb E}\big(X_n^{i,\ell}\big)=\frac{p^2}{M^{d-1}} {\mathbb E} V_{d-1}\Big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\Big)=\frac{p^2}{M^{d-1}} p^{2(n-1)}=\frac{p^{2n}}{M^{d-1}}. \end{align}

Therefore we conclude that for any $n\in{\mathbb N}$ ,

\begin{align*} {\mathbb E}(X_n)&=Mp {\mathbb E}(X_{n-1})-d (M-1) M^{d-1}\cdot M^{1-d} p^{2n}=Mp {\mathbb E}(X_{n-1})-d (M-1)p^{2n}, \end{align*}

where ${\mathbb E}(X_0)=V_{d-1}(J)=d$ . This is a recursion relation for the expectation and leads to

\begin{align*} {\mathbb E}(X_n)&=(Mp)^n {\mathbb E}(X_{0})-d (M-1) \sum_{i=1}^n (Mp)^{n-i} p^{2i}\\[5pt] &=d (Mp)^n \bigg(1-\frac{(M-1)p}{M-p}\bigg[1-\bigg(\frac pM\bigg)^n\bigg]\bigg), \end{align*}

which is equivalent to the expression stated in Equation (3.2), completing the proof of this equation.
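
The recursion and its solution can be checked against each other in exact arithmetic; the sketch below (ours, with hypothetical function names) iterates ${\mathbb E}(X_n)=Mp\,{\mathbb E}(X_{n-1})-d(M-1)p^{2n}$ and compares with the closed form just derived:

```python
from fractions import Fraction as Fr

def mean_surface_recursive(M, d, p, n):
    """Iterate E(X_n) = M p E(X_{n-1}) - d (M-1) p**(2n), with E(X_0) = d."""
    e = Fr(d)
    for k in range(1, n + 1):
        e = M * p * e - d * (M - 1) * p**(2 * k)
    return e

def mean_surface_closed(M, d, p, n):
    """Closed form d (Mp)**n (1 - (M-1)p/(M-p) * (1 - (p/M)**n))."""
    return d * (M * p)**n * (1 - (M - 1) * p / (M - p) * (1 - (p / M)**n))
```

Both functions agree exactly for all tested parameter combinations, including $n=0$, where both return d.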

For the computation of the variance of $X_n$ , the following statement is useful.

Lemma 7.2. For $i=1,2$ , let $Z_i,Z'_{\!\!i}$ be stochastically independent random variables. Then

\begin{align*} \mathrm{Cov}(Z_1\cdot Z'_{\!\!1}, Z_2\cdot Z'_{\!\!2})=\mathrm{Cov}(Z_1,Z_2)\cdot {\mathbb E}(Z'_{\!\!1}\cdot Z'_{\!\!2})+{\mathbb E} Z_1\cdot {\mathbb E} Z_2\cdot \mathrm{Cov}(Z'_{\!\!1}, Z'_{\!\!2}). \end{align*}

In particular, $\mathrm{Var}(Z_1\cdot Z'_{\!\!1})=\mathrm{Var}(Z_1)\cdot {\mathbb E}(Z'_{\!\!1}\cdot Z'_{\!\!1})+({\mathbb E} Z_1)^2\cdot \mathrm{Var}(Z'_{\!\!1}).$

Proof. A straightforward computation that uses the assumed independence yields the first formula, and the second one is an easy consequence obtained by setting $Z_1=Z_2$ and $Z'_{\!\!1}=Z'_{\!\!2}$ .
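
Since the proof of Lemma 7.2 uses only the independence of the two pairs, the identity can be confirmed by exhaustive enumeration over arbitrary small discrete distributions (a sketch of ours; the particular values below are arbitrary):

```python
from fractions import Fraction as Fr

def E(dist, f):
    """Expectation of f under a finite distribution [(value, prob), ...]."""
    return sum(pr * f(x) for x, pr in dist)

def cov(dist, f, g):
    return E(dist, lambda x: f(x) * g(x)) - E(dist, f) * E(dist, g)

# (Z1, Z2) is a dependent pair; (Z1', Z2') is another pair, independent of the first
Z = [((0, 1), Fr(1, 4)), ((1, 1), Fr(1, 2)), ((2, 0), Fr(1, 4))]
Zp = [((1, 3), Fr(1, 3)), ((2, 1), Fr(2, 3))]

# joint law of (Z1, Z2, Z1', Z2') under independence of the two pairs
joint = [((z, w), pz * pw) for z, pz in Z for w, pw in Zp]

# left-hand side: Cov(Z1 * Z1', Z2 * Z2') computed directly from the joint law
lhs = cov(joint, lambda v: v[0][0] * v[1][0], lambda v: v[0][1] * v[1][1])
# right-hand side: the formula of Lemma 7.2
rhs = (cov(Z, lambda z: z[0], lambda z: z[1]) * E(Zp, lambda w: w[0] * w[1])
       + E(Z, lambda z: z[0]) * E(Z, lambda z: z[1])
       * cov(Zp, lambda w: w[0], lambda w: w[1]))
```

Here both sides evaluate to the same rational number, as the lemma asserts.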

Now we are ready to compute the variance of $X_n$ . Our starting point is again the central equation (7.3), which allows us to split the variance $\mathrm{Var}(X_n)$ into several terms which will then be treated separately. Equation (7.3) implies that

(7.5) \begin{align} \notag \mathrm{Var}(X_n)&=\mathrm{Var}\left(\sum_{j\in\Sigma} X_n^j\right) + \mathrm{Var}\left( \sum_{\{i,\ell\}\subset\Sigma} X_n^{i,\ell}\right)-2\cdot\mathrm{Cov}\left(\sum_{j\in\Sigma} X_n^j,\sum_{\{i,\ell\}\subset\Sigma} X_n^{i,\ell}\right)\\[5pt] &= \sum_{j\in\Sigma}\mathrm{Var}\big(X_n^j\big) + \sum_{\{i,\ell\}\subset\Sigma} \mathrm{Var}\big(X_n^{i,\ell}\big)+ \sum_{ \{i,\ell\}\neq\{i',\ell'\}} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i',\ell'}\Big)\\[5pt] \notag &\qquad -2\sum_{j\in\Sigma}\sum_{\{i,\ell\}\subset\Sigma}\mathrm{Cov}\big(X_n^j,X_n^{i,\ell}\big), \end{align}

where we have used the independence of the $X_n^j$ in the first sum. Note that the sum in the third term extends over all pairs $(\{i,\ell\},\{i',\ell'\})$ of subsets of $\Sigma$ with two elements such that $\{i,\ell\}\neq\{i',\ell'\}$ . Since the $X_n^j$ are identically distributed, we get for the first term above

\begin{align*} \sum_{j\in\Sigma}\mathrm{Var}(X_n^j)&=M^d\cdot \mathrm{Var}\big(X_n^1\big)=M^d \frac{p}{M^{2(d-1)}}\left(\mathrm{Var}( X_{n-1})+ (1-p)({\mathbb E} X_{n-1})^2\right)\\[5pt] &=\frac{p}{M^{d-2}}\mathrm{Var}( X_{n-1})+ \frac{p(1-p)}{M^{d-2}}({\mathbb E} X_{n-1})^2, \end{align*}

where for the second equality we employed Equation (6.2) of Lemma 6.1. By Equation (3.2), we have

\begin{align*} ({\mathbb E} X_{n-1})^2&=(Mp)^{2n} \frac {d^2(1-p)^2}{p^2(M-p)^2} \left(1+\frac{M-1}{1-p}\left(\frac pM\right)^{n}\right)^2 \end{align*}

and thus

(7.6) \begin{align} \sum_{j\in\Sigma}\mathrm{Var}(X_n^j)&=\frac{p}{M^{d-2}}\mathrm{Var}( X_{n-1})+ \frac {d^2(1-p)^3}{M^{d-2}p(M-p)^2}\times\\[5pt] \notag &\quad\times(Mp)^{2n} \left(1+ \frac{M-1}{1-p}\left(\frac pM\right)^n\right)^2. \end{align}

Inserting this expression into Equation (7.5) yields a recursion relation for the variance $\mathrm{Var}(X_n)$. In order for this to be of use, it remains to find explicit expressions for the other terms in (7.5). The discussion of the three remaining terms below will show that all of them are of lower order than $(Mp)^{2n}$ and thus do not contribute to the first term in (3.3). Moreover, it will become clear that for certain parameter combinations the last term in (7.5) contributes to the second term in (3.3).

Analysis of the second term in (7.5). First recall that $X_n^{j,\ell}=V_{d-1}(F_n^j\cap F_n^\ell)=0$ whenever the corresponding cubes $J_j$ and $J_\ell$ are not direct neighbors, which implies $\mathrm{Var}(X_n^{j,\ell})=0$ in this case. Recall also that there are $d(M-1)M^{d-1}$ pairs $\{j,\ell\}$ such that $J_j$ and $J_\ell$ are direct neighbors. By Lemma 7.1, we get for all such pairs $\{j,\ell\}$ that $\mathrm{Var}(X_n^{j,\ell})$ equals

\begin{align*} &\frac{p^2}{M^{2(d-1)}}\Big(\mathrm{Var}\Big( V_{d-1}\Big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\Big)\Big)+ (1-p^2)\Big({\mathbb E} V_{d-1}\Big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\Big)\Big)^2\Big), \end{align*}

where $K^{\{1\}}, K^{\{2\}}$ are independent $(d-1)$ -dimensional fractal percolations. Hence, by Corollary 6.1, we have $({\mathbb E} V_{d-1}(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}))^2=p^{4(n-1)}$ and, for $p^2\neq M^{1-d}$ ,

\begin{align*} \mathrm{Var}\Big( V_{d-1}\Big(K_{n-1}^{\{1\}}\cap K_{n-1}^{\{2\}}\Big)\Big)&=p^{4(n-1)} \frac{1-p^2}{M^{d-1}p^2-1}\Bigg(1-\bigg(\frac 1{M^{d-1}p^2}\bigg)^{n-1}\Bigg).\end{align*}

For $p^2= M^{1-d}$ , a different expression for the variance is provided by Corollary 6.1. Inserting these expressions into the above formula yields

(7.7) \begin{align} \mathrm{Var}(X_n^{j,\ell})&=\frac{1-p^2}{M^{d-1}}\cdot p^{4n} \begin{cases} \frac{1}{M^{d-1}p^2-1} \left(1 - \left(M^{d-1}p^{2}\right)^{-n}\right), &M^{d-1}p^2\neq 1,\\[5pt] n, &M^{d-1}p^2= 1. \end{cases} \end{align}

This provides explicit expressions for the second term in (7.5).
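
Equation (7.7) can be cross-checked by assembling the variance directly from Lemma 7.1 and Corollary 6.1 in exact arithmetic (a sketch of ours, with hypothetical function names, for the case $M^{d-1}p^2\neq 1$):

```python
from fractions import Fraction as Fr

def cor61_var(M, dim, q, n):
    """Variance from Corollary 6.1 in dimension dim, retention probability q != M**(-dim)."""
    return (1 - q) / (M**dim * q - 1) * (q**(2 * n) - (q / M**dim)**n)

def var_pair_assembled(M, d, p, n):
    """Var(X_n^{j,l}) via Lemma 7.1, with the moments of the intersection of the
    two (d-1)-dimensional percolations supplied by Corollary 6.1 with q = p**2."""
    q = p**2
    mean = q**(n - 1)
    var = cor61_var(M, d - 1, q, n - 1)
    return q / M**(2 * (d - 1)) * (var + (1 - q) * mean**2)

def var_pair_77(M, d, p, n):
    """Equation (7.7), case M**(d-1) p**2 != 1."""
    q = p**2
    return (1 - q) * p**(4 * n) * (1 - (M**(d - 1) * q)**(-n)) \
        / (M**(d - 1) * (M**(d - 1) * q - 1))
```

The two computations agree exactly, which confirms the simplification leading to (7.7).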

Analysis of the third term in (7.5). First note that the covariances of the form $\mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i',\ell'}\Big)$ appearing in the third term vanish whenever $\{i,\ell\}\cap\{i',\ell'\}=\emptyset$, simply because in this case the variables $X_n^{i,\ell}$ and $X_n^{i',\ell'}$ are determined by disjoint subfamilies of the basic random variables $Y_\sigma$, $\sigma\in\Sigma^*$, and are therefore independent. Since also $\{i,\ell\}\neq\{i',\ell'\}$, all relevant terms in the third sum refer to pairs of the form $(\{i,\ell\},\{i,\ell'\})$ with $\ell\neq\ell'$, where $J_\ell$ and $J_{\ell'}$ are both direct neighbors of $J_i$. Now observe that the covariance $\mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)$ depends on the relative position of $J_\ell$ and $J_{\ell'}$, since this determines which of the basic variables $Y_\sigma$ determining $F_n^i$ are relevant for both quantities $X_n^{i,\ell}$ and $X_n^{i,\ell'}$. Since $J_i\cap J_\ell$ and $J_i\cap J_{\ell'}$ are facets of $J_i$, and any two facets of a cube either are disjoint (if they lie on opposite sides, i.e. in parallel hyperplanes) or intersect in a $(d-2)$-face (i.e. in a cube of dimension $d-2$), we are left with exactly these two cases. The remaining task is now to determine how many covariance terms there are for each of these two types, and to provide an explicit expression for each of them. We start with the easier one, for which the cubes $J_\ell$, $J_i$, and $J_{\ell'}$ have to be arranged in a row (in this order) in one of the 2d directions. (Note that because of the independent summation over all pairs $\{i,\ell\}$ and all pairs $\{i,\ell'\}$, it is a different contribution if $\ell$ and $\ell'$ are interchanged.) There are $2d(M-2)M^{d-1}$ such terms. (For $M=2$ this situation is not possible, and accordingly the number is zero.) The only basic variable which affects both quantities $X_n^{i,\ell}$ and $X_n^{i,\ell'}$ is $Y_i$; i.e.
only the first construction step affects this covariance term. To compute it, we separate the influence of the first construction step from the rest (just as in Lemma 6.1) and use the independence. We have, for any $n\in{\mathbb N}$ ,

\begin{align*} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)&= \mathrm{Cov}\left(V_{d-1}\Big(F_{2,n}^i\cap L_1^i\cap F_n^\ell\Big),V_{d-1}\Big(F_{2,n}^i\cap L_1^{i}\cap F_n^{\ell'}\Big)\right)\\[5pt] &=\mathrm{Cov}\left(V_{d-1}\Big(F_{2,n}^i\cap F_n^\ell\Big)\cdot Y_i,V_{d-1}\Big(F_{2,n}^i\cap F_n^{\ell'}\Big)\cdot Y_i\right). \end{align*}

Setting $\widetilde{X}_n\;:\!=\;V_{d-1}(F_{2,n}^i\cap F_n^\ell)$ and $\widetilde{X}_n'\;:\!=\;V_{d-1}\Big(F_{2,n}^i\cap F_n^{\ell'}\Big)$ and noting that these two variables are independent, identically distributed, and independent of $Y_i$ , with mean given by

(7.8) \begin{align} {\mathbb E} X_n^{i,\ell}= {\mathbb E}\Big(V_{d-1}\Big(F_{2,n}^i\cap F_n^\ell\Big)\cdot Y_i\Big)={\mathbb E} \widetilde{X}_n\cdot {\mathbb E} Y_i={\mathbb E} \widetilde{X}_n\cdot p, \end{align}

we obtain, using the first formula in Lemma 7.2, that

\begin{align*} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)&= \mathrm{Cov}\left(\widetilde{X}_n,\widetilde{X}_n'\right) {\mathbb E}(Y_i\cdot Y_i)+ {\mathbb E} \widetilde{X}_n\cdot {\mathbb E} \widetilde{X}_n'\cdot \mathrm{Cov}(Y_i,Y_i)\\[5pt] &=\left({\mathbb E} \widetilde{X}_n\right)^2 \mathrm{Var}(Y_i) =\frac {1-p}{p} \big({\mathbb E} X_n^{i,\ell}\big)^2. \end{align*}

By Equation (7.4), this yields for the $2d(M-2)M^{d-1}$ terms for which the corresponding cubes $J_\ell$ , $J_i$ , and $J_{\ell'}$ are arranged in a row, and for any $n\in{\mathbb N}$ ,

(7.9) \begin{align} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)&= \frac {1-p}{p} \left(\frac{p^{2n}}{M^{d-1}}\right)^2=\frac {1-p}{M^{2(d-1)}}p^{4n-1}. \end{align}

Now let us look at the second type of covariance term occurring in the third sum in (7.5), for which the cubes involved, namely $J_i$ , $J_\ell$ , and $J_{\ell'}$ , intersect in a $(d-2)$ -face of $J_i$ . First let us determine the number of such terms. It is convenient to determine first the number of $(d-2)$ -dimensional cubes at which this intersection can happen. There are $\binom{d}{2}(M-1)^2M^{d-2}$ such cubes. (Indeed, choose $d-2$ out of d directions to span the direction space L of (the affine hull of) the cube (or equivalently, choose 2 out of d to span the orthogonal complement of L). For each direction space there are $(M-1)^2M^{d-2}$ such cubes: there are $(M-1)^2$ in each 2-dimensional ‘layer’, and there are $M^{d-2}$ such layers.) Each such $(d-2)$ -cube C is surrounded by four d-dimensional cubes; hence there are four choices for $J_i$ . Then the two cubes which are direct neighbors of $J_i$ must be $J_\ell$ and $J_{\ell'}$ , and the only choice we can make is which one is $J_\ell$ . Hence for each $(d-2)$ -cube there are eight possible choices, and so the total number of terms of the second type in the third sum of (7.5) is $8\binom{d}{2}(M-1)^2M^{d-2}$ . It remains to compute the covariance for this type.

Setting (in analogy with the first type)

$$ \widehat{X}_n\;:\!=\;V_{d-1}\big(F_{2,n}^i\cap F_{2,n}^\ell\big) \text{ and } \widehat{X}'_{\!\!n}\;:\!=\;V_{d-1}\Big(F_{2,n}^i\cap F_{2,n}^{\ell'}\Big) $$

and noting that these two variables are independent, identically distributed, and independent of $Y_i$ , $Y_\ell$ , and $Y_{\ell'}$ , we obtain for the mean

(7.10) \begin{align} {\mathbb E} X_n^{i,\ell}= {\mathbb E}(V_{d-1}(F_{2,n}^i\cap F_{2,n}^\ell)\cdot Y_i\cdot Y_\ell)={\mathbb E} \widehat{X}_n\cdot {\mathbb E} Y_i\cdot {\mathbb E} Y_\ell={\mathbb E} \widehat{X}_n\cdot p^2, \end{align}

and for the covariance, using the first formula in Lemma 7.2 and (7.4),

(7.11) \begin{align} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)&= \mathrm{Cov}\left(\widehat{X}_n\cdot Y_i Y_\ell,\widehat{X}'_{\!\!n}\cdot Y_iY_{\ell'}\right) \notag\\[5pt] &=\mathrm{Cov}\left(\widehat{X}_n,\widehat{X}'_{\!\!n}\right) {\mathbb E}(Y_i^2 Y_\ell Y_{\ell'})+ {\mathbb E} \widehat{X}_n\cdot {\mathbb E} \widehat{X}'_{\!\!n}\cdot \mathrm{Cov}(Y_i Y_\ell,Y_i Y_{\ell'})\notag\\[5pt] &=\mathrm{Cov}\left(\widehat{X}_n,\widehat{X}'_{\!\!n}\right) \cdot p^3+\left({\mathbb E} \widehat{X}_n\right)^2 p^3(1-p) \notag\\[5pt] &=\mathrm{Cov}\left(\widehat{X}_n,\widehat{X}'_{\!\!n}\right) \cdot p^3+\frac{1-p}{M^{2(d-1)}}\cdot p^{4n-1}. \end{align}

But this time $\widehat{X}_n$ and $\widehat{X}'_{\!\!n}$ are not independent, so the covariance term does not vanish. In order to derive a recursion for the left-hand side, notice that the intersection $J_i\cap J_\ell$ is $(d-1)$ -dimensional and intersects $M^{d-1}$ of the second-level cubes $J_{ij}$ contained in $J_i$ . The intersection $J_i\cap J_\ell\cap J_{\ell'}$ is $(d-2)$ -dimensional and intersects $M^{d-2}$ of the second-level cubes $J_{ij}$ contained in $J_i$ . Denote the latter cubes by $J_{i1}, J_{i2},\ldots, J_{iM^{d-2}}$ , and let R be the union of all the remaining second-level cubes contained in $J_i$ ; i.e., set $R\;:\!=\;\cup\{J_{ij}\;:\;j\in\Sigma\setminus\{1,\ldots,M^{d-2}\}\}$ . Observe that

\begin{align*} \widehat{X}_n= V_{d-1}(F_{2,n}^i\cap F_{2,n}^\ell)=\underbrace{V_{d-1}(F_{2,n}^i\cap R\cap F_{2,n}^\ell)}_{\;=:\;\widetilde{R}}+\sum_{k=1}^{M^{d-2}} \underbrace{V_{d-1}(F_{2,n}^i\cap J_{ik}\cap F_{2,n}^\ell)}_{\;=:\;\widetilde{Q}_k}, \end{align*}

where all the terms on the right are independent. The same holds with $\ell$ replaced by $\ell'$ , for which we denote the corresponding terms by $\widetilde{R}'$ and $\widetilde{Q}'_{\!\!k}$ , respectively. What happens in $J_{ik}$ is now a scaled version of what happens in $J_i$ . More precisely, we have, for any $k\in\{1,\ldots, M^{d-2}\}$ ,

(7.12) \begin{align} \widetilde{Q}_k & \overset{d}{=} M^{-d+1}V_{d-1}\big(F_{n-1}^i\cap F_{n-1}^\ell\big)=M^{-d+1} X_{n-1}^{i,\ell}. \end{align}

(Note that the corresponding sets are not equal in distribution, owing to intersections with neighboring cubes, which are fortunately of lower dimension and thus do not contribute to the intrinsic volume of order $d-1$.) Taking into account the dependence structure of $\widetilde{R},\widetilde{R}',\widetilde{Q}_k,\widetilde{Q}'_{\!\!k}$ and (7.12), we conclude that

\begin{align*} \mathrm{Cov}\left(\widehat{X}_n,\widehat{X}'_{\!\!n} \right)&=\mathrm{Cov}\left(\widetilde{R}+\sum_k \widetilde{Q}_k,\widetilde{R}'+\sum_k \widetilde{Q}'_{\!\!k}\right)\\[5pt] &=\sum_k \mathrm{Cov}\left(\widetilde{Q}_k,\widetilde{Q}'_{\!\!k}\right)\\[5pt] &= M^{d-2} \cdot \mathrm{Cov}\left(M^{1-d}\cdot X_{n-1}^{i,\ell},M^{1-d}\cdot X_{n-1}^{i,\ell'}\right)\\[5pt] &= M^{-d} \mathrm{Cov}\left(X_{n-1}^{i,\ell},X_{n-1}^{i,\ell'}\right). \end{align*}

Plugging this into (7.11) yields the desired recursion for the covariance of the second type:

(7.13) \begin{align} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big)&= \frac {p^3}{M^d}\cdot \mathrm{Cov}\Big(X_{n-1}^{i,\ell},X_{n-1}^{i,\ell'}\Big) + \frac{1-p}{M^{2(d-1)}}\cdot p^{4n-1},\qquad n\in{\mathbb N}, \end{align}

where $\mathrm{Cov}\Big(X_0^{i,\ell},X_0^{i,\ell'}\Big)\;:\!=\;0$ . We conclude that, for $n\in{\mathbb N}$ ,

(7.14) \begin{align} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i,\ell'}\Big) &= \frac{1-p}{M^{2(d-1)}}\cdot p^{4n-1} \sum_{k=0}^{n-1} \left(\frac 1{M^dp}\right)^k \notag\\[5pt] &=\begin{cases} \frac{1-p}{M^{d-2}(M^d p-1)}\cdot p^{4n}\left(1-(M^{d}p)^{-n}\right), & M^dp\neq 1, \\[5pt] \frac{1-p}{M^{2(d-1)}}\cdot p^{4n-1} \cdot n, & M^dp= 1. \end{cases} \end{align}
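
A quick exact-arithmetic check (ours; function names are hypothetical) confirms that the closed form (7.14) solves the recursion (7.13):

```python
from fractions import Fraction as Fr

def cov_second_type_recursive(M, d, p, n):
    """Iterate (7.13): c_n = p**3/M**d * c_{n-1} + (1-p) p**(4n-1)/M**(2(d-1)), c_0 = 0."""
    c = Fr(0)
    for k in range(1, n + 1):
        c = p**3 / M**d * c + (1 - p) * p**(4 * k - 1) / M**(2 * (d - 1))
    return c

def cov_second_type_closed(M, d, p, n):
    """Closed form (7.14), case M**d p != 1."""
    return (1 - p) * p**(4 * n) * (1 - (M**d * p)**(-n)) \
        / (M**(d - 2) * (M**d * p - 1))
```

The two agree exactly for every tested n whenever $M^dp\neq 1$.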

Analysis of the last term in (7.5). First note that the covariances $\mathrm{Cov}(X_n^{j},X_n^{i,\ell})$ in the last term vanish whenever $j\notin\{i,\ell\}$, because the variables $X_n^{j}$ and $X_n^{i,\ell}$ are determined by disjoint subfamilies of the basic random variables $Y_\sigma$, $\sigma\in\Sigma^*$, in this case and are therefore independent. Hence, all relevant terms in the last sum refer to pairs $(j, \{i,\ell\})$ such that $J_i$ and $J_\ell$ are direct neighbors and $j\in\{i,\ell\}$. Let us first determine the number of such terms. For each pair $\{i,\ell\}$ there are two choices for j (namely, $j=i$ and $j=\ell$), and there are $d(M-1)M^{d-1}$ pairs such that $J_i$ and $J_\ell$ are direct neighbors. Hence there are $2d(M-1)M^{d-1}$ such terms. To find an expression for this covariance, we choose $j=i$ without loss of generality. Again we start by separating the influence of the first construction step from the rest. Setting $\widehat{X}_n\;:\!=\;V_{d-1}(F_{2,n}^i\cap F_{2,n}^\ell)$ and $\check{X}_n\;:\!=\;V_{d-1}(F_{2,n}^i)$ we infer, using the first formula in Lemma 7.2,

\begin{align*} \mathrm{Cov}(X_n^{i},X_n^{i,\ell}) &=\mathrm{Cov}\left(V_{d-1}(F_{2,n}^i)\cdot Y_i,V_{d-1}(F_{2,n}^i\cap F_{2,n}^{\ell})\cdot Y_iY_\ell\right)\\[5pt] &=\mathrm{Cov}\left(\check{X}_n,\widehat{X}_n\right) {\mathbb E}\left(Y_i^2Y_\ell\right)+ {\mathbb E} V_{d-1}(F_{2,n}^i)\cdot {\mathbb E} \widehat{X}_n\cdot \mathrm{Cov}(Y_i, Y_i Y_\ell)\\[5pt] &=\mathrm{Cov}\left(\check{X}_n,\widehat{X}_n\right)\cdot p^2 + {\mathbb E} V_{d-1}(F_{2,n}^i) \cdot {\mathbb E} \widehat{X}_n\cdot p^2(1-p). \end{align*}

Now recall from (7.10) that ${\mathbb E} \widehat{X}_n\cdot p^2={\mathbb E} X_n^{i,\ell}$ , for which an explicit expression is provided in (7.4). Moreover, by (6.4), we have ${\mathbb E} \check{X}_n=M^{1-d} {\mathbb E} V_{d-1}(F_{n-1})$ , and for the latter an expression is given by (3.2). Hence, setting $\gamma_n\;:\!=\;\mathrm{Cov} (\check{X}_{n},\widehat{X}_{n})$ , $n\in{\mathbb N}$ (and noting that $\gamma_1=0$ ), we infer that for any $n\in{\mathbb N}$ ,

(7.15) \begin{align} \mathrm{Cov}\big(X_n^{i},X_n^{i,\ell}\big) &= \gamma_n \cdot p^2 + \big(Mp^3\big)^{n} \frac {d(1-p)^2}{M^{2(d-1)}p(M-p)} \left(1+\frac{M-1}{1-p}\left(\frac pM\right)^{n}\right). \end{align}

This time again, the covariance term $\gamma_n$ on the right does not vanish. But we can derive a recursion for $\gamma_n$ using an approach similar to the one used for the second type in the third term. The intersection $J_i\cap J_\ell$ is $(d-1)$ -dimensional and intersects $M^{d-1}$ of the second-level cubes $J_{ij}$ contained in $J_i$ . Denote the latter cubes by $Q_1, Q_2,\ldots, Q_{M^{d-1}}$ and let R be the union of all the remaining second-level cubes contained in $J_i$ , i.e. $R=\cup\{J_{ij}\;:\;j\in\Sigma, J_{ij}\neq Q_m $ for all $m\}$ . Observe that $V_{d-1}(F_{2,n}^i\cap R\cap F_{2,n}^\ell)=0$ and therefore

\begin{align*} \widehat{X}_n= V_{d-1}(F_{2,n}^i\cap F_{2,n}^\ell)=\sum_{k=1}^{M^{d-1}} \underbrace{V_{d-1}(F_{2,n}^i\cap Q_k\cap F_{2,n}^\ell)}_{\;=:\;\widehat{Q}_k}, \end{align*}

where the terms on the right are independent. What happens in $Q_k$ is now again a scaled version of what happens in $J_i$ . More precisely, we have, for $k\in\{1,\ldots, M^{d-1}\}$ ,

\begin{align*} \widehat{Q}_k & \overset{d}{=} M^{1-d}V_{d-1}(F_{n-1}^i\cap F_{n-1}^\ell)=M^{1-d} X_{n-1}^{i,\ell}. \end{align*}

Observe that for any $n\in{\mathbb N}$ ,

(7.16) \begin{align} \gamma_n&= \mathrm{Cov}\left(\check{X}_{n},\sum_{k=1}^{M^{d-1}} \widehat{Q}_k\right) =\sum_{k=1}^{M^{d-1}} \mathrm{Cov}\left(\check{X}_{n},\widehat{Q}_k\right). \end{align}

Now we split $\check{X}_{n}=V_{d-1}(F_{2,n}^i)$ in a similar way using the decomposition $J_i=R\cup\bigcup_k Q_k$ . It is clear that the main contribution of the kth term above will come from $\mathrm{Cov}(V_{d-1}(F_{2,n}^i\cap Q_k),\widehat{Q}_k)$ , but there are further contributions this time which are due to the fact that the sets $F_{2,n}^i\cap Q_j$ are full-dimensional, so that their intersections cannot be neglected. In fact, in order to separate the contributions of the cubes $Q_k$ we need more refined notation. Without loss of generality, we can assume that the basic cubes $J_{ij}$ contained in $J_i$ are enumerated in such a way that $J_{ik}=Q_k$ for $k=1,\ldots,M^{d-1}$ . Let $F_{2,n}^{ij}$ be the union of those cubes $J_\sigma$ of level n (i.e. $\sigma=\sigma_1\ldots\sigma_n\in\Sigma^n$ ) contained in $F_{2,n}$ such that $\sigma_1=i$ and $\sigma_2=j$ , $j\in\Sigma$ , and let $F_{2,n}^{i,R}\;:\!=\;\bigcup_{j>M^{d-1}} F_{2,n}^{ij}$ . By the inclusion–exclusion principle, and ignoring lower-dimensional intersections, we have

\begin{align*} \check{X}_{n}=V_{d-1}\Big(F_{2,n}^{i,R}\Big)+ \sum_{j=1}^{M^{d-1}} \bigg[ &V_{d-1}\Big(F_{2,n}^{ij}\Big)- V_{d-1}\Big(F_{2,n}^{ij}\cap F_{2,n}^{i,R}\Big)\bigg]\\[5pt] &-\sum_{\{j,m\}:Q_j\sim Q_m} V_{d-1}\Big(F_{2,n}^{ij}\cap F_{2,n}^{im}\Big), \end{align*}

where $Q_j\sim Q_m$ in the last sum means that $Q_j$ and $Q_m$ are direct neighbors. Now we insert this representation into the kth covariance term in (7.16) and sort out which of the terms still have some of the basic random variables in common with $\widehat{Q}_k$ and which are independent. First observe that for any k, $\mathrm{Cov}(V_{d-1}(F_{2,n}^{i,R}),\widehat{Q}_k)=0$ , since the variables are determined by disjoint families of the basic random variables of the process. For a similar reason, the covariances $\mathrm{Cov}(V_{d-1}(F_{2,n}^{ij}),\widehat{Q}_k)$ and $\mathrm{Cov}(V_{d-1}(F_{2,n}^{ij}\cap F_{2,n}^{i,R}),\widehat{Q}_k)$ vanish whenever $j\neq k$ . Finally, $\mathrm{Cov}(V_{d-1}(F_{2,n}^{ij}\cap F_{2,n}^{im}),\widehat{Q}_k)=0$ whenever $k\notin\{j,m\}$ . Hence we get

(7.17) \begin{align} \mathrm{Cov}\left(\check{X}_{n},\widehat{Q}_k\right) &=\mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\Big),\widehat{Q}_k\Big)-\mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{i,R}\Big),\widehat{Q}_k\Big)\notag\\[5pt] &\qquad - \sum_{m:Q_m\sim Q_k} \mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{im}\Big),\widehat{Q}_k\Big). \end{align}

In order to derive a recursion, note that analogously to Lemma 6.1, we have

\begin{align*} &\left(V_{d-1}\Big(F_{3,n}^{ik}\Big),V_{d-1}\Big(F_{3,n}^i\cap Q_k\cap F_{2,n}^\ell\Big)\right)\\[5pt] &\qquad\overset{d}{=} M^{1-d} \left(V_{d-1}\Big(F_{2,n-1}^i\Big), V_{d-1}\Big(F_{2,n-1}^i\cap F_{2,n-1}^\ell\Big)\right)=M^{1-d} \left(\check{X}_{n-1},\widehat{X}_{n-1}\right),\end{align*}

which implies for the first term in the right-hand side of (7.17)

\begin{align*} \mathrm{Cov}&\Big(V_{d-1}\Big(F_{2,n}^{ik}\Big),\widehat{Q}_k\Big) = \mathrm{Cov}\Big(V_{d-1}\Big(F_{3,n}^{ik}\Big)\cdot Y_{ik},V_{d-1}\Big(F_{3,n}^{i}\cap Q_k\cap F_{2,n}^\ell\Big)\cdot Y_{ik}\Big)\\[5pt] &=\mathrm{Cov}\Big(V_{d-1}\Big(F_{3,n}^{ik}\Big),V_{d-1}\Big(F_{3,n}^{i}\cap Q_k\cap F_{2,n}^\ell\Big)\Big)\cdot {\mathbb E}\big(Y_{ik}^2\big)\\[5pt] &\qquad +{\mathbb E} V_{d-1}(F_{3,n}^{ik}) {\mathbb E} V_{d-1}(F_{3,n}^{i}\cap Q_k\cap F_{2,n}^\ell)\mathrm{Var}(Y_{ik})\\[5pt] &= M^{2(1-d)} \mathrm{Cov}(\check{X}_{n-1},\widehat{X}_{n-1})\cdot p +M^{2(1-d)} {\mathbb E}\check{X}_{n-1}\cdot {\mathbb E} \widehat{X}_{n-1}\cdot p(1-p)\\[5pt] &=\frac p{M^{2(d-1)}} \cdot \gamma_{n-1} +\frac{1-p}{M^{3(d-1)}} p^{2n-3} \cdot {\mathbb E}\check{X}_{n-1}.\end{align*}

Here we have used (7.10) and (7.4) to replace ${\mathbb E} \widehat{X}_{n-1}$ in the last equality. Similarly, employing (6.2) and (3.2) of Theorem 3.2, we can replace ${\mathbb E}\check{X}_{n-1}$ with some explicit expression. We conclude that

(7.18) \begin{align} \mathrm{Cov}(V_{d-1}&(F_{2,n}^{ik}),\widehat{Q}_k) =\frac p{M^{2(d-1)}} \cdot \gamma_{n-1} +\frac{1-p}{M^{3(d-1)}} p^{2n-2} \cdot {\mathbb E} V_{d-1}(F_{n-2}) \notag \\[5pt] &=\frac p{M^{2(d-1)}} \gamma_{n-1} + (Mp^3)^{n} \frac {d(1-p)^2}{M^{4d-3}p^4(M-p)} \left(1+\frac{M-1}{1-p}\left(\frac pM\right)^{n-1}\right). \end{align}

Inserting this into (7.17) and combining it with (7.16) yields the desired recursion relation for $\gamma_n$, provided we can compute the two missing terms in (7.17). For the second term in (7.17), $\mathrm{Cov}(V_{d-1}(F_{2,n}^{ik}\cap F_{2,n}^{i,R}),\widehat{Q}_k)$, observe that we can replace $F_{2,n}^{i,R}$ by $F_{2,n}^{ik'}$ without changing the random variable $V_{d-1}(F_{2,n}^{ik}\cap F_{2,n}^{i,R})$, where $k'$ is the unique index such that $J_{ik'}$ is the direct neighbor of $J_{ik}$ contained in R. Indeed, all the other cubes in R have lower-dimensional intersections with $F_{2,n}^{ik}$. Similarly, in $\widehat{Q}_k=V_{d-1}(F_{2,n}^i\cap Q_k\cap F_{2,n}^\ell)$ we can replace the set $F_{2,n}^i\cap Q_k$ by $F_{2,n}^{ik}$ and the set $F_{2,n}^\ell$ by $F_{2,n}^{\ell \ell'}$, where $\ell'$ is the unique index such that the cubes $J_{ik}$ and $J_{\ell \ell'}$ are direct neighbors. Hence,

\begin{align*} \mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{i,R}\Big),\widehat{Q}_k\Big)&=\mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{ik'}\Big),V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{\ell \ell'}\Big)\Big).\end{align*}

Note that the cubes $J_{ik'}$ , $J_{ik}$ , and $J_{\ell \ell'}$ lie in a row, and therefore we are in the situation of the first type of covariance studied for the third term (cf. (7.9) and the paragraph before this equation), but one level further down. More precisely, we have

\begin{align*} \left(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{ik'}\Big),\right.&\left.V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{\ell \ell'}\Big)\right)\\[5pt] &\overset{d}{=} M^{1-d} \left(V_{d-1}\Big(F_{n-1}^{i}\cap F_{n-1}^{i'}\Big),V_{d-1}\Big(F_{n-1}^{i}\cap F_{n-1}^{\ell}\Big)\right),\end{align*}

where $i'$ is the (unique) index such that $J_{i'}$, $J_i$, and $J_\ell$ lie in a row. (Note that $i'$ might not exist if $J_i$ touches the boundary of J, but then one can shift the situation or think of $F_{n-1}^{i'}$ as being part of a second fractal percolation in the corresponding unit-size cube neighboring J. This will not change the stated distributional relation.) Employing (7.9), we conclude that for any $n\in{\mathbb N}$, $n\geq 2$,

(7.19) \begin{align} \mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{i,R}\Big),\widehat{Q}_k\Big) &=M^{2(1-d)}\mathrm{Cov}\Big(X_{n-1}^{i,i'},X_{n-1}^{i,\ell}\Big)=\frac {1-p}{M^{4(d-1)}}p^{4n-5}, \end{align}

while this covariance vanishes for $n\leq 1$ .

The last sum in (7.17) consists of $2(d-1)$ terms of the form $\mathrm{Cov}(V_{d-1}(F_{2,n}^{ik}\cap F_{2,n}^{im}),\widehat{Q}_k)$ , for which the corresponding cubes $J_{ik}$ and $J_{im}$ are direct neighbors. As for the previous term, we can use the relation $\widehat{Q}_k=V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{\ell\ell'}\Big)$ , where $\ell'$ was the unique index such that the cubes $J_{ik}$ and $J_{\ell \ell'}$ are direct neighbors. Hence,

\begin{align*} \mathrm{Cov}\big(V_{d-1}\big(F_{2,n}^{ik}\cap F_{2,n}^{im}\big),\widehat{Q}_k\big)&=\mathrm{Cov}\Big(V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{im}\Big),V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{\ell \ell'}\Big)\Big),\end{align*}

where now the cubes $J_{im}$ and $J_{\ell\ell'}$ have a non-empty intersection. Thus we are in the situation of the second type of covariance studied for the third term (cf. (7.12) and the paragraph before this equation), just one level further down. More precisely, we have

\begin{align*} \left(V_{d-1}\big(F_{2,n}^{ik}\cap F_{2,n}^{im}\big),\right.&\left.V_{d-1}\Big(F_{2,n}^{ik}\cap F_{2,n}^{\ell \ell'}\Big)\right)\\[5pt] &\overset{d}{=} M^{1-d} \left(V_{d-1}\Big(F_{n-1}^{i}\cap F_{n-1}^{i'}\Big),V_{d-1}\big(F_{n-1}^{i}\cap F_{n-1}^{\ell}\big)\right),\end{align*}

where now $i'$ is one of the indices such that $J_{i'}$ and $J_i$ are direct neighbors and $J_{i'}$ and $J_\ell$ are distinct and have a non-empty intersection. Employing (7.14), we conclude that for $p>M^{-d}$ and any $n\in{\mathbb N}$, $n\geq 2$,

(7.20) \begin{align} \sum_{m: Q_k\sim Q_m}\mathrm{Cov}(&V_{d-1}\big(F_{2,n}^{ik}\cap F_{2,n}^{im}\big),\widehat{Q}_k) =2(d-1)M^{2(1-d)}\mathrm{Cov}\Big(X_{n-1}^{i,i'},X_{n-1}^{i,\ell}\Big) \notag\\[5pt] &= \frac{2(d-1)(1-p)}{M^{3d-4}(M^d p-1)}\cdot p^{4(n-1)}\left(1-(M^{d}p)^{-n+1}\right), \end{align}

while this covariance vanishes for $n\leq 1$ .

Now, plugging the expressions derived in (7.18), (7.19), and (7.20) into (7.17) and summing over all $k=1,\ldots,M^{d-1}$, we obtain the desired recursion for the covariance $\gamma_n$ in (7.16). For $n\geq 2$ and $M^dp\neq 1$, we have

\begin{align*} \gamma_n &=\frac p{M^{d-1}} \cdot \gamma_{n-1} + (Mp^3)^{n} \frac {d(1-p)^2}{M^{3d-2}(M-p)p^4} \left(1+\frac{M-1}{1-p}\left(\frac pM\right)^{n-1}\right)\\[5pt] &\qquad -\frac {1-p}{M^{3(d-1)}}p^{4n-5}-\frac{2(d-1)(1-p)}{M^{2d-3}(M^d p-1)}\cdot p^{4(n-1)}\left(1-(M^{d}p)^{-n+1}\right)\\[5pt] &=\frac p{M^{d-1}} \cdot \gamma_{n-1} + \tilde c_1\cdot (Mp^3)^{n} +\tilde c_2\cdot p^{4n} + \tilde c_3 \cdot \left(M^{-d}p^3\right)^n, \end{align*}

where $\tilde c_1\;:\!=\;\frac {d(1-p)^2}{M^{3d-2}p^4(M-p)}$, $\tilde c_3\;:\!=\;\frac{2(d-1)(1-p)}{M^{d-3}p^3(M^d p-1)}$, and

\begin{align*} \tilde c_2&\;:\!=\;\frac{d(M-1)(1-p)}{M^{3(d-1)}p^5(M-p)}-\frac {1-p}{M^{3(d-1)}p^5}-\frac{2(d-1)(1-p)}{M^{2d-3}(M^d p-1)p^4}\\[5pt] &=\frac{1-p}{M^{3(d-1)}p^5}\left(d\frac{M-1}{M-p}-1-2(d-1)\frac{M^dp}{M^dp-1}\right). \end{align*}
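As a quick numerical cross-check (not part of the proof), one can verify that the two expressions for $\tilde c_2$ above agree; the values of d, M, and p below are arbitrary sample choices satisfying $p>M^{-d}$ and $M^dp\neq 1$.

```python
# Numerical cross-check that the two expressions for c~_2 coincide.
# d, M, p are arbitrary sample values with p > M^{-d} and M^d p != 1.
d, M, p = 3, 2, 0.6

# c~_2 as the difference of the three original covariance contributions
lhs = (d * (M - 1) * (1 - p) / (M**(3 * (d - 1)) * p**5 * (M - p))
       - (1 - p) / (M**(3 * (d - 1)) * p**5)
       - 2 * (d - 1) * (1 - p) / (M**(2 * d - 3) * (M**d * p - 1) * p**4))

# c~_2 after factoring out (1-p)/(M^{3(d-1)} p^5)
rhs = ((1 - p) / (M**(3 * (d - 1)) * p**5)
       * (d * (M - 1) / (M - p) - 1 - 2 * (d - 1) * M**d * p / (M**d * p - 1)))

assert abs(lhs - rhs) < 1e-12
```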

By induction, we get from the above recursion that for $n\geq 2$ ,

\begin{align*} \gamma_n =& (Mp^3)^n \tilde c_1\frac{M^dp^2}{M^dp^2-1} +p^{4n} \tilde c_2 \frac{M^{d-1}p^3}{M^{d-1}p^3-1} +\left(\frac{p^{3}}{M^d}\right)^{n} \tilde c_3 \frac{p^2}{p^2-M}\\[5pt] &-\left(\frac{p}{M^{d-1}}\right)^{n}\underbrace{\left(\tilde c_1 \frac{(M^{d}p^2)^2}{M^dp^2-1}+ \tilde c_2 \frac{(M^{d-1}p^3)^2}{M^{d-1}p^3-1}+ \tilde c_3\frac{(M^{-1}p^2)^2}{M^{-1}p^2-1}\right)}_{\;=:\;\tilde c_4}, \end{align*}

where we have used that $\gamma_1=0$, and where we have excluded the case $p^2=M^{-d}$, in which the constant in the first term has to be replaced by $\tilde c_1(n-1)$ and the first summand in $\tilde c_4$ by 0. In the sequel, we will only need an asymptotic expression for $\gamma_n$ as $n\to\infty$. Which of the terms is the leading one turns out to depend on the parameter combination. For $p>M^{-d}$ we obtain, as $n\to\infty$,

\begin{align*} \gamma_n &=\begin{cases} \tilde c_1\cdot \frac{M^{d}p^2}{M^{d}p^2-1} (Mp^3)^{n}+ O(\max\{p^{4}, p/M^{d-1}\}^n) & \text{ if } p>M^{-d/2},\\[5pt] \tilde c_1 \cdot (Mp^3)^{n} \cdot (n-1)+ O((Mp^3)^{n}) & \text{ if } p^2= M^{-d},\\[5pt] \tilde c_4 \cdot (p/M^{d-1})^n + O((Mp^3)^n) & \text{ if } p< M^{-d/2}. \end{cases} \end{align*}
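The closed-form expression for $\gamma_n$ obtained above is the standard solution of a linear recursion $x_n = a x_{n-1} + \sum_j c_j r_j^{\,n}$: each inhomogeneity $c_j r_j^{\,n}$ contributes a particular solution $c_j\frac{r_j}{r_j-a}r_j^{\,n}$, and the initial condition fixes the coefficient of $a^n$ (the analogue of $\tilde c_4$). The following sketch, not part of the proof, checks this solution scheme numerically; the coefficients $c_j$ are arbitrary placeholders, not the $\tilde c_j$ from the text.

```python
# Sanity check of the solution scheme for x_n = a*x_{n-1} + sum_j c_j*r_j^n
# with x_1 = 0 (as for gamma_n above). The c_j are arbitrary placeholders.
def closed_form(n, a, terms):
    # particular solutions c*r/(r-a)*r^n plus homogeneous part fixed by x_1 = 0
    particular = sum(c * r / (r - a) * r**n for c, r in terms)
    c4 = sum(c * r**2 / (a * (r - a)) for c, r in terms)  # analogue of c~_4
    return particular - c4 * a**n

def by_recursion(n, a, terms):
    x = 0.0  # x_1 = 0, as gamma_1 = 0 in the text
    for k in range(2, n + 1):
        x = a * x + sum(c * r**k for c, r in terms)
    return x

p, M, d = 0.7, 2, 2                        # sample parameters with p > M^{-d}
a = p / M**(d - 1)                         # homogeneous factor of the recursion
terms = [(1.3, M * p**3), (-0.4, p**4), (0.9, p**3 / M**d)]
for n in (1, 4, 12):
    assert abs(closed_form(n, a, terms) - by_recursion(n, a, terms)) < 1e-12
```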

Now we can complete the analysis of the fourth term. Inserting the last expression into Equation (7.15) and noting that the second term in (7.15) is always of order $O((Mp^{3})^n)$ , we conclude that for any $p>M^{-d}$ , as $n\to\infty$ ,

(7.21) \begin{align} \mathrm{Cov}(X_n^{i},X_n^{i,\ell})&= \begin{cases} \hat c_1\cdot(Mp^3)^{n}+ O(\max\{p^{4}, p/M^{d-1}\}^n) & \text{if } p>M^{-d/2},\\[5pt] \tilde c_1 \cdot (Mp^3)^{n} \cdot (n-1)+ O((Mp^3)^{n})& \text{if } p^2= M^{-d},\\[5pt] \tilde c_4 \cdot (p/M^{d-1})^n + O((Mp^3)^n)& \text{if } p< M^{-d/2}, \end{cases} \end{align}

where $ \hat c_1\;:\!=\;\frac{d(1-p)}{M^{2(d-1)}p(M-p)} (1+\frac{1-p}p\frac{1}{M^dp^2-1})$ .

Computation of the variance of $X_n=V_{d-1}(F_n)$ . Now that we have determined all the terms on the right-hand side of (7.5), we can go back to this equation and put all the pieces together. Setting $\beta_n\;:\!=\;\mathrm{Var}(X_n)$ and combining (7.6) with (7.5), we get

(7.22) \begin{align} \notag \beta_n &= \frac{p}{M^{d-2}}\beta_{n-1} + \frac {d^2(1-p)^3}{M^{d-2}p(M-p)^2} (Mp)^{2n} \left(1+ \frac{M-1}{1-p}\left(\frac pM\right)^n\right)^2\\[5pt] & \quad + \sum_{\{i,\ell\}\subset\Sigma}\hspace{-2mm} \mathrm{Var}(X_n^{i,\ell})+ \sum_{ \{i,\ell\}\neq\{i',\ell'\}}\hspace{-4mm} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i',\ell'}\Big) -2\sum_{j\in\Sigma}\sum_{\{i,\ell\}\subset\Sigma}\mathrm{Cov}\left(X_n^j,X_n^{i,\ell}\right). \end{align}

Observe from (7.7) that the first sum in the second line above satisfies

\begin{align*} \sum_{\{i,\ell\}\subset\Sigma} \mathrm{Var}(X_n^{i,\ell})&= \begin{cases} O(p^{4n}), &M^{d-1}p^2>1,\\[5pt] O(p^{4n}\cdot n), &M^{d-1}p^2= 1,\\[5pt] O((p^2/M^{d-1})^n), &M^{d-1}p^2< 1, \end{cases} \end{align*}

as $n\to\infty$ . So the order of the leading term depends on whether p is below, at, or above the value $M^{-\frac{d-1}2}$ . In any case this term is certainly dominated by $p^{2n}$ as $n\to\infty$ (i.e., $\sum_{\{i,\ell\}\subset\Sigma} \mathrm{Var}(X_n^{i,\ell})=o(p^{2n})$ ) and thus vanishes rapidly for $n\to\infty$ . Equations (7.9) and (7.14) show that the situation for the second sum in (7.22) is easier. For $p>M^{-d}$ and any $n\in{\mathbb N}$ , we have

\begin{align*} \sum_{ \{i,\ell\}\neq\{i',\ell'\}} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i',\ell'}\Big) &=\frac {2d(M-2)(1-p)}{M^{d-1}p}p^{4n} + 8\binom{d}{2}(M-1)^2\times \\[5pt] &\quad \frac{1-p}{M^d p-1}\cdot p^{4n}\left(1-(M^{d}p)^{-n}\right), \end{align*}

and thus, as $n\to\infty$ ,

\begin{align*} \sum_{ \{i,\ell\}\neq\{i',\ell'\}} \mathrm{Cov}\Big(X_n^{i,\ell},X_n^{i',\ell'}\Big) &= O\big(p^{4n}\big). \end{align*}

For the last sum in (7.22), we infer from (7.21) that in all three cases this term is dominated by $(Mp)^{2n}$ as $n\to\infty$. Thus, the second term in (7.22) (of order $(Mp)^{2n}$) is always the leading one. But we also need the correct second-order term in order to say something about the speed of convergence of $\mathrm{Var}(Z_n^{d-1})$. The recursion for $\beta_n$ will produce a second term of order $(p/M^{d-2})^n$, but it turns out that for some parameters there are other terms of higher order, which we therefore have to take into account in the recursion. Note that $p^2\geq1/M^{d-1}$ is equivalent to $Mp^3\geq p/M^{d-2}$, and thus for these parameters we include in the recursion the corresponding term of order $(Mp^3)^n$, which is the term of highest order among the remaining ones. This is not necessary for $p^2<1/M^{d-1}$. More precisely, we have for any $p>M^{-d}$, as $n\to\infty$,

\begin{align*} \beta_n=\frac{p}{M^{d-2}}\cdot \beta_{n-1} + \tilde{c}\cdot(Mp)^{2n}+\begin{cases} \hat c_1\cdot(Mp^3)^{n}+O(q^n) &\text{if } p^2\geq M^{-(d-1)},\\[5pt] O((Mp^3)^n) & \text{if } p^2< M^{-(d-1)}, \end{cases}\end{align*}

where $\tilde{c}\;:\!=\;\frac {d^2(1-p)^3}{M^{d-2}p(M-p)^2}$ , $\hat c_1$ is the same as in (7.21), and q is some number strictly less than $Mp^3$ (depending on the parameter combination). By induction, this yields

\begin{align*} \beta_n=\tilde{c} \frac{M^{d}p}{M^{d}p-1}(Mp)^{2n} + \begin{cases} \hat c_1 \frac{M^{d-1}p^2}{M^{d-1}p^2-1}(Mp^3)^n+ O(s^n) & \text{if } p^2>1/M^{d-1},\\[5pt] \hat c_1 (Mp^3)^n \cdot n+O((Mp^3)^n) & \text{if } p^2=1/M^{d-1},\\[5pt] \bar c \left(\frac{p}{M^{d-2}}\right)^n +O((Mp^3)^n) & \text{if } p^2<1/M^{d-1},\\[5pt] \end{cases} \end{align*}

as $n\to\infty$, where $\tilde c$ and $\hat c_1$ are as above, $s<Mp^3$ is some number, and $\bar c\in{\mathbb R}$ is some constant which we do not give explicitly here, as doing so would require us to determine all the terms in the above recursion exactly. Here we have used that $\beta_0=\mathrm{Var}(V_{d-1}(F_0))=0$ and that $p>M^{-d}$ is equivalent to $M^2p^2>p/M^{d-2}$. Note that $\tilde{c} \frac{M^{d}p}{M^{d}p-1}=\bar c_2$, i.e. the constant in the first-order term of $\mathrm{Var}(V_{d-1}(F_n))$ is as asserted in (3.3), and also the second-order terms given in (3.3) can be read off from the above equation. This completes the proof of the formula for the variance in Theorem 3.2.
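The first-order coefficient $\frac{M^dp}{M^dp-1}$ appearing above arises as the factor $\frac{r}{r-a}$ for the inhomogeneity base $r=(Mp)^2$ and homogeneous factor $a=p/M^{d-2}$. A minimal numerical sketch of this (with sample parameters, not part of the proof):

```python
# For x_n = a*x_{n-1} + c*r^n with x_0 = 0 and r > a, the leading behaviour
# is x_n ~ c*(r/(r-a))*r^n. With a = p/M^{d-2} and r = (Mp)^2 this factor
# equals M^d p/(M^d p - 1), the constant in the first-order term of beta_n.
p, M, d = 0.7, 2, 2          # sample parameters with p > M^{-d}
a, r, c = p / M**(d - 2), (M * p)**2, 1.0
x, N = 0.0, 60
for n in range(1, N + 1):
    x = a * x + c * r**n     # iterate the recursion N times
assert abs(x / (c * r**N) - r / (r - a)) < 1e-9
assert abs(r / (r - a) - M**d * p / (M**d * p - 1)) < 1e-12
```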

Funding information

S. W. was supported by the Deutsche Forschungsgemeinschaft (DFG) grant WI 3264/5-1. M. A. K. was supported by the DFG through the SPP 2265 under grant numbers LO 418/25-1, WI 5527/1-1, and ME 1361/16-1. M. A. K. also acknowledges funding from the Volkswagenstiftung via the Experiment-Projekt Mecke, and from the Helmholtz Association and the DLR via the Helmholtz Young Investigator Group ‘DataMat’.

Competing interests

There were no competing interests to declare that arose during the preparation or publication process of this article.

References

Arhosalo, I. et al. (2012). Visible parts of fractal percolation. Proc. Edinburgh Math. Soc. 55, 311–331.
Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Springer, New York.
Berlinkov, A. and Järvenpää, E. (2019). Porosities of Mandelbrot percolation. J. Theoret. Prob. 32, 608–632.
Biggins, J. D. (1977). Martingale convergence in the branching random walk. J. Appl. Prob. 14, 25–37.
Broman, E. I. and Camia, F. (2008). Large-N limit of crossing probabilities, discontinuity, and asymptotic behavior of threshold values in Mandelbrot’s fractal percolation process. Electron. J. Prob. 13, 980–999.
Broman, E. I. and Camia, F. (2010). Universal behavior of connectivity properties in fractal percolation models. Electron. J. Prob. 15, 1394–1414.
Broman, E. I., Camia, F., Joosten, M. and Meester, R. (2013). Dimension (in)equalities and Hölder continuous curves in fractal percolation. J. Theoret. Prob. 26, 836–854.
Buczolich, Z. et al. (2021). Fractal percolation is unrectifiable. Adv. Math. 390, article no. 107906.
Chayes, J. T. and Chayes, L. (1989). The large-N limit of the threshold values in Mandelbrot’s fractal percolation process. J. Phys. A 22, L501–L506.
Chayes, J. T., Chayes, L. and Durrett, R. (1988). Connectivity properties of Mandelbrot’s percolation process. Prob. Theory Relat. Fields 77, 307–324.
Chayes, L. (1996). On the length of the shortest crossing in the super-critical phase of Mandelbrot’s percolation process. Stoch. Process. Appl. 61, 25–43.
Chen, C., Ojala, T., Rossi, E. and Suomala, V. (2017). Fractal percolation, porosity, and dimension. J. Theoret. Prob. 30, 1471–1498.
Don, H. (2015). New methods to bound the critical probability in fractal percolation. Random Structures Algorithms 47, 710–730.
Falconer, K. J. (1986). Random fractals. Math. Proc. Camb. Phil. Soc. 100, 559–582.
Gatzouras, D. (2000). Lacunarity of self-similar and stochastically self-similar sets. Trans. Amer. Math. Soc. 352, 1953–1983.
Graf, S. (1987). Statistically self-similar fractals. Prob. Theory Relat. Fields 74, 357–392.
Klatt, M. A., Schröder-Turk, G. E. and Mecke, K. (2017). Anisotropy in finite continuum percolation: threshold estimation by Minkowski functionals. J. Statist. Mech. 2017, article no. 023302.
Klatt, M. A. and Winter, S. (2020). Geometric functionals of fractal percolation. Adv. Appl. Prob. 52, 1085–1126.
Last, G. and Ochsenreither, E. (2014). Percolation on stationary tessellations: models, mean values, and second-order structure. J. Appl. Prob. 51, 311–332.
Mandelbrot, B. B. (1974). Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier. J. Fluid Mech. 62, 331–358.
Mauldin, R. D. and Williams, S. C. (1986). Random recursive constructions: asymptotic geometric and topological properties. Trans. Amer. Math. Soc. 295, 325–346.
Neher, R. A., Mecke, K. and Wagner, H. (2008). Topological estimation of percolation thresholds. J. Statist. Mech. 2008, article no. P01011.
Nerman, O. (1981). On the convergence of supercritical general (C-M-J) branching processes. Z. Wahrscheinlichkeitsth. 57, 365–395.
Newman, M. E. J. and Ziff, R. M. (2000). Efficient Monte Carlo algorithm and high-precision results for percolation. Phys. Rev. Lett. 85, 4104–4107.
Patzschke, N. (1997). The strong open set condition in the random case. Proc. Amer. Math. Soc. 125, 2119–2125.
Rams, M. and Simon, K. (2014). The dimension of projections of fractal percolations. J. Statist. Phys. 154, 633–655.
Rams, M. and Simon, K. (2015). Projections of fractal percolations. Ergod. Theory Dynam. Systems 35, 530–545.
Schneider, R. (2014). Convex Bodies: The Brunn–Minkowski Theory. Cambridge University Press.
Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry. Springer, Berlin.
Zähle, M. (2011). Lipschitz–Killing curvatures of self-similar random fractals. Trans. Amer. Math. Soc. 363, 2663–2684.

Figure 1. Semi-logarithmic plots of the variances and the absolute values of the covariances of $Z^k_{\infty}$ in ${\mathbb R}^2$ as functions of p for different values of $M\leq 20$ (indicated by the color scale); see (2.6) and (2.7). For $M\to\infty$, all variances and covariances converge to zero.


Figure 2. Semi-logarithmic plots of the variance of $Z^0_{\infty}$ rescaled by $M^2$ as functions of p for $M=2$ (left), $M=3$ (center), and $M=1000$ (right). The shaded areas for $M=2$ and $M=3$ indicate rigorously known bounds on the percolation threshold [13], as in [18, Figure 4]. The vertical line for $M=1000$ indicates an empirical estimate of $p_{c,NN}$ provided by simulations [24].