
Brownian bridge expansions for Lévy area approximations and particular values of the Riemann zeta function

Published online by Cambridge University Press:  03 November 2022

James Foster
Affiliation:
Department of Mathematical Sciences, University of Bath, Bath, BA2 7AY, UK
Karen Habermann*
Affiliation:
Department of Statistics, University of Warwick, Coventry, CV4 7AL, UK
*
*Corresponding author. Email: [email protected]

Abstract

We study approximations for the Lévy area of Brownian motion which are based on the Fourier series expansion and a polynomial expansion of the associated Brownian bridge. Comparing the asymptotic convergence rates of the Lévy area approximations, we see that the approximation resulting from the polynomial expansion of the Brownian bridge is more accurate than the Kloeden–Platen–Wright approximation, whilst still only using independent normal random vectors. We then link the asymptotic convergence rates of these approximations to the limiting fluctuations for the corresponding series expansions of the Brownian bridge. Moreover, and of interest in its own right, the analysis we use to identify the fluctuation processes for the Karhunen–Loève and Fourier series expansions of the Brownian bridge is extended to give a stand-alone derivation of the values of the Riemann zeta function at even positive integers.

Type
Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

One of the well-known applications for expansions of the Brownian bridge is the strong or $L^2(\mathbb{P})$ approximation of stochastic integrals. Most notably, the second iterated integrals of Brownian motion are required by high order strong numerical methods for general stochastic differential equations (SDEs), as discussed in [Reference Clark and Cameron4, Reference Kloeden and Platen22, Reference Rößler33]. Due to integration by parts, such integrals can be expressed in terms of the increment and Lévy area of Brownian motion. The approximation of multidimensional Lévy area is well studied, see [Reference Davie5, Reference Dickinson8, Reference Foster11, Reference Gaines and Lyons13, Reference Gaines and Lyons14, Reference Kloeden, Platen and Wright23, Reference Kuznetsov25, Reference Mrongowius and Rößler32, Reference Wiktorsson35], with the majority of the algorithms proposed being based on a Fourier series expansion or the standard piecewise linear approximation of Brownian motion. Some alternatives include [Reference Davie5, Reference Foster11, Reference Kuznetsov25] which consider methods associated with a polynomial expansion of the Brownian bridge.

Since the advent of Multilevel Monte Carlo (MLMC), introduced by Giles in [Reference Giles16] and subsequently developed in [Reference Belomestny and Nagapetyan2, Reference Debrabant, Ghasemifard and Mattsson6, Reference Debrabant and Rößler7, Reference Giles15, Reference Giles and Szpruch17], Lévy area approximation has become less prominent in the literature. In particular, the antithetic MLMC method introduced by Giles and Szpruch in [Reference Giles and Szpruch17] achieves the optimal complexity for the weak approximation of multidimensional SDEs without the need to generate Brownian Lévy area. That said, there are concrete applications where the simulation of Lévy area is beneficial, such as for sampling from non-log-concave distributions using Itô diffusions. For these sampling problems, high order strong convergence properties of the SDE solver lead to faster mixing properties of the resulting Markov chain Monte Carlo (MCMC) algorithm, see [Reference Li, Wu, Mackey and Erdogdu26].

In this paper, we compare the approximations of Lévy area based on the Fourier series expansion and on a polynomial expansion of the Brownian bridge. We particularly observe their convergence rates and link those to the fluctuation processes associated with the different expansions of the Brownian bridge. The fluctuation process for the polynomial expansion is studied in [Reference Habermann19], and our study of the fluctuation process for the Fourier series expansion allows us, at the same time, to determine the fluctuation process for the Karhunen–Loève expansion of the Brownian bridge. As an attractive side result, we extend the required analysis to obtain a stand-alone derivation of the values of the Riemann zeta function at even positive integers. Throughout, we denote the positive integers by $\mathbb{N}$ and the nonnegative integers by ${\mathbb{N}}_0$ .

Let us start by considering a Brownian bridge $(B_t)_{t\in [0,1]}$ in $\mathbb{R}$ with $B_0=B_1=0$. This is the unique continuous Gaussian process with mean zero whose covariance function $K_B$ is given, for $s,t\in [0,1]$, by

(1.1) \begin{equation} K_B(s,t)=\min\!(s,t)-st. \end{equation}

We are concerned with the following three expansions of the Brownian bridge. The Karhunen–Loève expansion of the Brownian bridge, see Loève [[Reference Loève27], p. 144], is of the form, for $t\in [0,1]$ ,

(1.2) \begin{equation} B_t=\sum _{k=1}^\infty \frac{2\sin\!(k\pi t)}{k\pi } \int _0^1\cos\!(k\pi r){\mathrm{d}} B_r. \end{equation}
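As an informal numerical illustration (a Python sketch using NumPy; the helper names and tolerances are ours), the truncated kernel $\sum_{k=1}^N 2\sin\!(k\pi s)\sin\!(k\pi t)/(k\pi)^2$ can be compared against $K_B(s,t)=\min\!(s,t)-st$, and truncating (1.2) gives a simple way to sample approximate bridge paths, since the coefficients $\int_0^1\cos\!(k\pi r)\,{\mathrm{d}} B_r$ are independent $\mathcal{N}(0,\frac{1}{2})$ random variables by the bridge Itô isometry of Lemma 2.1 below.

```python
import numpy as np

# Truncated Karhunen-Loeve kernel sum_{k<=N} 2 sin(k pi s) sin(k pi t)/(k pi)^2,
# which should approximate K_B(s, t) = min(s, t) - s t for large N.
def kl_kernel(s, t, N):
    k = np.arange(1, N + 1)
    return np.sum(2.0 * np.sin(k * np.pi * s) * np.sin(k * np.pi * t) / (k * np.pi) ** 2)

s, t, N = 0.3, 0.7, 5000
K_B = min(s, t) - s * t
K_N = kl_kernel(s, t, N)

# Sampling a truncated expansion: the coefficients int_0^1 cos(k pi r) dB_r are
# independent N(0, 1/2), so a path of the truncation is a finite random sum.
rng = np.random.default_rng(0)
k = np.arange(1, N + 1)
xi = rng.normal(0.0, np.sqrt(0.5), size=N)
tt = np.linspace(0.0, 1.0, 201)
B = (2.0 * np.sin(np.pi * np.outer(tt, k)) / (np.pi * k)) @ xi  # shape (201,)
```

Note that the pinned endpoints $B_0=B_1=0$ are exact in every truncation, since all basis functions vanish there.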

The Fourier series expansion of the Brownian bridge, see Kloeden–Platen [[Reference Kloeden and Platen22], p. 198] or Kahane [[Reference Kahane21], Sect. 16.3], yields, for $t\in [0,1]$ ,

(1.3) \begin{equation} B_t=\frac{1}{2}a_0+\sum _{k=1}^\infty \left ( a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t) \right ), \end{equation}

where, for $k\in{\mathbb{N}}_0$ ,

(1.4) \begin{equation} a_k=2\int _0^1 \cos\!(2k\pi r)B_r{\mathrm{d}} r\quad \text{and}\quad b_k=2\int _0^1 \sin\!(2k\pi r)B_r{\mathrm{d}} r. \end{equation}

A polynomial expansion of the Brownian bridge in terms of the shifted Legendre polynomials $Q_k$ on the interval $[0,1]$ of degree $k$ , see [Reference Foster, Lyons and Oberhauser12, Reference Habermann19], is given by, for $t\in [0,1]$ ,

(1.5) \begin{equation} B_t=\sum _{k=1}^\infty (2k+1) c_k \int _0^t Q_k(r){\mathrm{d}} r, \end{equation}

where, for $k\in{\mathbb{N}}$ ,

(1.6) \begin{equation} c_k=\int _0^1 Q_k(r){\mathrm{d}} B_r. \end{equation}

These expansions are summarised in Table A1 in Appendix A and discussed in more detail in Section 2. For an implementation of the corresponding approximations of Brownian motion as Chebfun examples in MATLAB, see Filip, Javeed and Trefethen [Reference Filip, Javeed and Trefethen9] as well as Trefethen [Reference Trefethen34].

We remark that the polynomial expansion (1.5) can be viewed as a Karhunen–Loève expansion of the Brownian bridge with respect to the weight function $w$ on $(0,1)$ given by $w(t) = \frac{1}{t(1-t)}$ . This approach is employed in [Reference Foster, Lyons and Oberhauser12] to derive the expansion along with the standard optimality property of Karhunen–Loève expansions. In this setting, the polynomial approximation of $(B_t)_{t\in [0,1]}$ is optimal among truncated series expansions in a weighted $L^2(\mathbb{P})$ sense corresponding to the nonconstant weight function $w$ . To avoid confusion, we still adopt the convention throughout to reserve the term Karhunen–Loève expansion for (1.2), whereas (1.5) will be referred to as the polynomial expansion.

Before we investigate the approximations of Lévy area based on the different expansions of the Brownian bridge, we first analyse the fluctuations associated with the expansions. The fluctuation process for the polynomial expansion is studied and characterised in [Reference Habermann19], and these results are recalled in Section 2.3. The fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ for the Karhunen–Loève expansion and the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier series expansion are defined as, for $N\in{\mathbb{N}}$ ,

(1.7) \begin{equation} F_t^{N,1}=\sqrt{N}\left (B_t-\sum _{k=1}^N\frac{2\sin\!(k\pi t)}{k\pi }\int _0^1\cos\!(k\pi r){\mathrm{d}} B_r\right ), \end{equation}

and

(1.8) \begin{equation} F_t^{N,2}=\sqrt{2N}\left (B_t-\frac{1}{2}a_0-\sum _{k=1}^N\left (a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t)\right )\right ). \end{equation}

The scaling by $\sqrt{2N}$ in the process $(F_t^{N,2})_{t\in [0,1]}$ is the natural scaling to use because increasing $N$ by one results in the subtraction of two additional Gaussian random variables. We use $\mathbb{E}$ to denote the expectation with respect to Wiener measure $\mathbb{P}$ .

Theorem 1.1. The fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ for the Karhunen–Loève expansion converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^1)_{t\in [0,1]}$ of independent Gaussian random variables with mean zero and variance

\begin{equation*} {\mathbb {E}}\left [\left (F_t^1\right )^2 \right ]= \begin {cases} \dfrac{1}{\pi ^2} & \text {if }\;t\in (0,1)\\[12pt] 0 & \text {if }\; t=0\;\text { or }\;t=1 \end {cases}\!. \end{equation*}

The fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier expansion converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^2)_{t\in [0,1]}$ of zero-mean Gaussian random variables whose covariance structure is given by, for $s,t\in [0,1]$ ,

\begin{equation*} {\mathbb {E}}\left [F_s^2 F_t^2 \right ]= \begin {cases} \dfrac{1}{\pi ^2} & \text {if }\;s=t\text { or } s,t\in \{0,1\}\\[12pt] 0 & \text {otherwise} \end {cases}. \end{equation*}

The difference between the fluctuation result for the Karhunen–Loève expansion and the fluctuation result for the polynomial expansion, see [[Reference Habermann19], Theorem 1.6] or Section 2.3, is that there the variances of the independent Gaussian random variables follow the semicircle $\frac{1}{\pi }\sqrt{t(1-t)}$ whereas here they are constant on $(0,1)$ , see Figure 1. The limit fluctuations for the Fourier series expansion further exhibit endpoints which are correlated.

Figure 1. Table showing basis functions and fluctuations for the Brownian bridge expansions.

As pointed out in [Reference Habermann19], the reason for considering convergence in finite dimensional distributions for the fluctuation processes is that the limit fluctuations neither have a realisation as processes in $C([0,1],{\mathbb{R}})$ , nor are they equivalent to measurable processes.

We prove Theorem 1.1 by studying the covariance functions of the Gaussian processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ given in Lemma 2.2 and Lemma 2.3 in the limit $N\to \infty$ . The key ingredient is the following limit theorem for sine functions, which we see concerns the pointwise convergence for the covariance function of $(F_t^{N,1})_{t\in [0,1]}$ .

Theorem 1.2. For all $s,t\in [0,1]$ , we have

\begin{equation*} \lim _{N\to \infty }N\left (\min\!(s,t)-st-\sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\right )= \begin {cases} \dfrac{1}{\pi ^2} & \text {if } s=t\text { and } t\in (0,1)\\[10pt] 0 & \text {otherwise} \end {cases}. \end{equation*}
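The statement can be checked numerically; the following Python sketch (using NumPy; the evaluation points, truncation level and tolerances are illustrative choices) evaluates the scaled residual at an on-diagonal and an off-diagonal point.

```python
import numpy as np

def scaled_residual(s, t, N):
    # N * (min(s,t) - s*t - sum_{k=1}^N 2 sin(k pi s) sin(k pi t)/(k^2 pi^2))
    k = np.arange(1, N + 1)
    partial = np.sum(2.0 * np.sin(k * np.pi * s) * np.sin(k * np.pi * t)
                     / (k ** 2 * np.pi ** 2))
    return N * (min(s, t) - s * t - partial)

N = 200_000
on_diag = scaled_residual(0.5, 0.5, N)     # limit 1/pi^2 for s = t in (0,1)
off_diag = scaled_residual(0.25, 0.75, N)  # limit 0 off the diagonal
```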

The above result serves as one of four base cases in the analysis performed in [Reference Habermann18] of the asymptotic error arising when approximating the Green’s function of a Sturm–Liouville problem through a truncation of its eigenfunction expansion. The work [Reference Habermann18] offers a unifying view for Theorem 1.2 and [[Reference Habermann19], Theorem 1.5].

The proof of Theorem 1.2 is split into an on-diagonal and an off-diagonal argument. We start by proving the convergence on the diagonal away from its endpoints by establishing locally uniform convergence, which ensures continuity of the limit function, and by using a moment argument to identify the limit. As a consequence of the on-diagonal convergence, we obtain the next corollary which then implies the off-diagonal convergence in Theorem 1.2.

Corollary 1.3. For all $t\in (0,1)$ , we have

\begin{equation*} \lim _{N\to \infty } N\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2}=0. \end{equation*}

Moreover, and of interest in its own right, the moment analysis we use to prove the on-diagonal convergence in Theorem 1.2 leads to a stand-alone derivation of the result that the values of the Riemann zeta function $\zeta \colon{\mathbb{C}}\setminus \{1\}\to{\mathbb{C}}$ at even positive integers can be expressed in terms of the Bernoulli numbers $B_{2n}$ as, for $n\in{\mathbb{N}}$ ,

\begin{equation*} \zeta (2n)=(\!-\!1)^{n+1}\frac {\left (2\pi \right )^{2n}B_{2n}}{2(2n)!}, \end{equation*}

see Borevich and Shafarevich [Reference Borevich and Shafarevich3]. In particular, the identity

(1.9) \begin{equation} \sum _{k=1}^\infty \frac{1}{k^2} =\frac{\pi ^2}{6}, \end{equation}

that is, the resolution of the Basel problem posed by Mengoli [Reference Mengoli28], is a consequence of our analysis and not a prerequisite for it.
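For concreteness, the first two even values produced by this formula can be compared against partial sums of the Dirichlet series; the sketch below (Python, with the standard Bernoulli values $B_2=\frac{1}{6}$ and $B_4=-\frac{1}{30}$ entered by hand) is only an illustration of the identity, not part of the derivation.

```python
import math

def zeta_even(n, B2n):
    # zeta(2n) = (-1)^(n+1) (2 pi)^(2n) B_{2n} / (2 (2n)!)
    return (-1) ** (n + 1) * (2 * math.pi) ** (2 * n) * B2n / (2 * math.factorial(2 * n))

z2 = zeta_even(1, 1 / 6)     # should equal pi^2 / 6
z4 = zeta_even(2, -1 / 30)   # should equal pi^4 / 90

# Partial sums of the Dirichlet series sum_k k^(-2n) for comparison.
s2 = sum(1.0 / k ** 2 for k in range(1, 100_001))
s4 = sum(1.0 / k ** 4 for k in range(1, 100_001))
```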

We turn our attention to studying approximations of second iterated integrals of Brownian motion. For $d\geq 2$ , let $(W_t)_{t\in [0,1]}$ denote a $d$ -dimensional Brownian motion and let $(B_t)_{t\in [0,1]}$ given by $B_t=W_t-tW_1$ be its associated Brownian bridge in ${\mathbb{R}}^d$ . We denote the independent components of $(W_t)_{t\in [0,1]}$ by $(W_t^{(i)})_{t\in [0,1]}$ , for $i\in \{1,\ldots,d\}$ , and the components of $(B_t)_{t\in [0,1]}$ by $(B_t^{(i)})_{t\in [0,1]}$ , which are also independent by construction. We now focus on approximations of Lévy area.

Definition 1.4. The Lévy area of the $d$ -dimensional Brownian motion $W$ over the interval $[s,t]$ is the antisymmetric $d\times d$ matrix $A_{s,t}$ with the following entries, for $i,j\in \{1,\ldots,d\}$ ,

\begin{equation*} A_{s,t}^{(i,j)} \;:\!=\; \frac {1}{2}\left (\int _s^t \left (W_r^{(i)} - W_s^{(i)}\right ){\mathrm {d}} W_r^{(j)} - \int _s^t \left (W_r^{(j)} - W_s^{(j)}\right ){\mathrm {d}} W_r^{(i)}\right ). \end{equation*}

For an illustration of Lévy area for a two-dimensional Brownian motion, see Figure 2.

Figure 2. Lévy area is the chordal area between independent Brownian motions.

Remark 1.5. Given the increment $W_t - W_s$ and the Lévy area $A_{s,t}$ , we can recover the second iterated integrals of Brownian motion using integration by parts as, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,

\begin{equation*} \int _s^t \left (W_r^{(i)} - W_s^{(i)}\right ){\mathrm {d}} W_r^{(j)} = \frac {1}{2}\left (W_t^{(i)} - W_s^{(i)}\right )\left (W_t^{(j)} - W_s^{(j)}\right ) + A_{s,t}^{(i,j)}. \end{equation*}
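On a discretised path, this identity can be verified exactly up to the discrete cross-variation term, which vanishes as the mesh goes to zero. The following Python sketch (NumPy; grid size and random seed are arbitrary) forms the left-point Riemann sums over $[0,1]$ with $s=0$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, 2))        # increments of (W^(1), W^(2))
W = np.vstack([np.zeros((1, 2)), np.cumsum(dW, axis=0)])   # path on the grid k/n, W_0 = 0

# Left-point (Ito) Riemann sums I[i, j] ~ int_0^1 W^(i) dW^(j).
I = np.array([[np.sum(W[:-1, i] * dW[:, j]) for j in range(2)] for i in range(2)])

A = 0.5 * (I[0, 1] - I[1, 0])          # discrete Levy area A_{0,1}^{(1,2)}
inc = W[-1]                            # the increments W_1 - W_0
qv = np.sum(dW[:, 0] * dW[:, 1])       # discrete cross-variation, -> 0 as n -> infinity

# Discrete product rule: I[0,1] + I[1,0] + qv = inc[0] * inc[1] exactly, so the
# identity of Remark 1.5 holds up to the vanishing cross-variation term qv.
lhs = I[0, 1]
rhs = 0.5 * inc[0] * inc[1] + A
```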

We consider the sequences $\{a_k\}_{k\in{\mathbb{N}}_0}$ , $\{b_k\}_{k\in{\mathbb{N}}}$ and $\{c_k\}_{k\in{\mathbb{N}}}$ of Gaussian random vectors, where the coordinate random variables $a_k^{(i)}$ , $b_k^{(i)}$ and $c_k^{(i)}$ are defined for $i\in \{1,\ldots,d\}$ by (1.4) and (1.6), respectively, in terms of the Brownian bridge $(B_t^{(i)})_{t\in [0,1]}$ . Using the random coefficients arising from the Fourier series expansion (1.3), we obtain the approximation of Brownian Lévy area proposed by Kloeden and Platen [Reference Kloeden and Platen22] and Milstein [Reference Milstein31]. Further approximating terms so that only independent random coefficients are used yields the Kloeden–Platen–Wright approximation in [Reference Kloeden, Platen and Wright23, Reference Milstein30, Reference Wiktorsson35]. Similarly, using the random coefficients from the polynomial expansion (1.5), we obtain the Lévy area approximation first proposed by Kuznetsov in [Reference Kuznetsov24]. These Lévy area approximations are summarised in Table A2 in Appendix A and have the following asymptotic convergence rates.

Theorem 1.6 (Asymptotic convergence rates of Lévy area approximations). For $n\in{\mathbb{N}}$ , we set $N=2n$ and define approximations $\widehat{A}_{n}$ , $\widetilde{A}_{n}$ and $\overline{A}_{2n}$ of the Lévy area $A_{0,1}$ by, for $i,j\in \{1,\ldots,d\}$ ,

(1.10) \begin{align} \widehat{A}_{n}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (a_0^{(i)}W_1^{(j)} - W_1^{(i)}a_0^{(j)}\right ) + \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}b_k^{(j)} - b_k^{(i)}a_{k}^{(j)}\right ), \end{align}
(1.11) \begin{align} \widetilde{A}_{n}^{ (i,j)} \;:\!=\; \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}\left (b_k^{(j)} - \frac{1}{k\pi }W_1^{(j)}\right ) - \left (b_k^{(i)} - \frac{1}{k\pi }W_1^{(i)}\right )a_{k}^{(j)}\right ), \end{align}
(1.12) \begin{align} \overline{A}_{2n}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac{1}{2}\sum _{k=1}^{2n-1}\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ). \end{align}

Then $\widehat{A}_{n}$ , $\widetilde{A}_{n}$ and $\overline{A}_{2n}$ are antisymmetric $d \times d$ matrices and, for $i\neq j$ and as $N\to \infty$ , we have

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{3}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{2n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{8}\bigg (\frac{1}{N}\bigg ). \end{align*}

The asymptotic convergence rates in Theorem 1.6 are phrased in terms of $N$ since the number of Gaussian random vectors required to define the above Lévy area approximations is $N$ or $N-1$ , respectively. Of course, it is straightforward to define the polynomial approximation $\overline{A}_{n}$ for $n\in{\mathbb{N}}$ , see Theorem 5.4.
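As a sanity check on the construction (1.12), the coefficients can be generated directly: by Lemma 2.1 and the orthogonality of the shifted Legendre polynomials, $c_k\sim \mathcal{N}\big(0,\frac{1}{2k+1}\big)$ with the $c_k$ independent of each other and of $W_1$, and a short computation with the Itô isometry gives ${\mathbb{E}}\big[(A_{0,1}^{(i,j)})^2\big]=\frac{1}{4}$ for $i\neq j$. The Python sketch below (NumPy; the sample sizes and tolerance are illustrative, and this is not the tail analysis of Theorem 5.4) checks that the variance of $\overline{A}_{2n}^{ (1,2)}$ is close to $\frac{1}{4}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 100_000, 25          # Monte Carlo samples; truncation parameter n, so K = 2n
K = 2 * n

# Independent ingredients (d = 2): W_1 ~ N(0, I_2) and c_k ~ N(0, 1/(2k+1))
# entrywise, using Var(c_k) = int_0^1 Q_k(r)^2 dr = 1/(2k+1).
W1 = rng.normal(size=(M, 2))
std = np.sqrt(1.0 / (2.0 * np.arange(1, K + 1) + 1.0))
c = rng.normal(size=(M, K, 2)) * std[None, :, None]

# The (1,2) entry of the polynomial Levy area approximation (1.12).
A12 = 0.5 * (W1[:, 0] * c[:, 0, 1] - c[:, 0, 0] * W1[:, 1]) \
    + 0.5 * np.sum(c[:, :-1, 0] * c[:, 1:, 1] - c[:, 1:, 0] * c[:, :-1, 1], axis=1)

sample_var = A12.var()      # should be close to E[(A_{0,1}^{(1,2)})^2] = 1/4
```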

Intriguingly, the convergence rates for the approximations resulting from the Fourier series and the polynomial expansion correspond exactly with the areas under the limit variance function for each fluctuation process, which are

\begin{equation*} \int _0^1\frac {1}{\pi ^2}{\mathrm {d}} t=\frac {1}{\pi ^2} \quad \text {and}\quad \int _0^1 \frac {1}{\pi }\sqrt {t(1-t)}{\mathrm {d}} t=\frac {1}{8}. \end{equation*}

We provide heuristics demonstrating how this correspondence arises at the end of Section 5.

By adding an additional Gaussian random matrix that matches the covariance of the tail sum, it is possible to derive high order Lévy area approximations with $O(N^{-1})$ convergence in $L^2(\mathbb{P})$ . Wiktorsson [Reference Wiktorsson35] proposed this approach using the Kloeden–Platen–Wright approximation (1.11) and this was recently improved by Mrongowius and Rößler in [Reference Mrongowius and Rößler32] who use the approximation (1.10) obtained from the Fourier series expansion (1.3).

We expect that an $O(N^{-1})$ polynomial-based approximation is possible using the same techniques. While this approximation should be slightly less accurate than the Fourier approach, we expect it to be easier to implement due to both the independence of the coefficients $\{c_k\}_{k\in{\mathbb{N}}}$ and the covariance of the tail sum having a closed-form expression, see Theorem 5.4. Moreover, this type of method has already been studied in [Reference Davie5, Reference Flint and Lyons10, Reference Foster11] with Brownian Lévy area being approximated by

(1.13) \begin{equation} \widehat{A}_{0,1}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \lambda _{ 0,1}^{(i,j)}, \end{equation}

where the antisymmetric $d\times d$ matrix $\lambda _{ 0,1}$ is normally distributed and designed so that $\widehat{A}_{ 0,1}$ has the same covariance structure as the Brownian Lévy area $A_{ 0,1}$ . Davie [Reference Davie5] as well as Flint and Lyons [Reference Flint and Lyons10] generate each $(i,j)$ -entry of $\lambda _{0,1}$ independently as $\lambda _{ 0,1}^{(i,j)} \sim \mathcal{N}\big (0, \frac{1}{12}\big )$ for $i \lt j$ . In [Reference Foster11], it is shown that the covariance structure of $A_{0,1}$ can be explicitly computed conditional on both $W_1$ and $c_1$ . By matching the conditional covariance structure of $A_{ 0,1}$ , the work [Reference Foster11] obtains the approximation

\begin{equation*} \lambda _{ 0,1}^{(i,j)} \sim \mathcal {N}\bigg (0, \frac {1}{20} + \frac {1}{20}\Big (\big (c_1^{(i)}\big )^2 + \big (c_1^{(j)}\big )^2\Big )\bigg ), \end{equation*}

where the entries $ \{\lambda _{ 0,1}^{(i,j)} \}_{i \lt j}$ are still generated independently, but only after $c_1$ has been generated.

By rescaling (1.13) to approximate Lévy area on $\big [\frac{k}{N}, \frac{k+1}{N}\big ]$ and summing over $k\in \{0,\ldots, N-1\}$ , we obtain a fine discretisation of $A_{0,1}$ involving $2N$ Gaussian random vectors and $N$ random matrices. In [Reference Davie5, Reference Flint and Lyons10, Reference Foster11], the Lévy area of Brownian motion and this approximation are probabilistically coupled in such a way that $L^{2}(\mathbb{P})$ convergence rates of $O(N^{-1})$ can be established. Furthermore, the efficient Lévy area approximation (1.13) can be used directly in numerical methods for SDEs, which then achieve $L^{2}(\mathbb{P})$ convergence of $O(N^{-1})$ under certain conditions on the SDE vector fields, see [Reference Davie5, Reference Flint and Lyons10]. We leave such high order polynomial-based approximations of Lévy area as a topic for future work.

The paper is organised as follows.

In Section 2, we provide an overview of the three expansions we consider for the Brownian bridge, and we characterise the associated fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ . Before discussing their behaviour in the limit $N\to \infty$ , we initiate the moment analysis used to prove the on-diagonal part of Theorem 1.2 and we extend the analysis to determine the values of the Riemann zeta function at even positive integers in Section 3. The proof of Theorem 1.2 follows in Section 4, where we complete the moment analysis and establish a locally uniform convergence to identify the limit on the diagonal, before we deduce Corollary 1.3, which then allows us to obtain the off-diagonal convergence in Theorem 1.2. We close Section 4 by proving Theorem 1.1. In Section 5, we compare the asymptotic convergence rates of the different approximations of Lévy area, which results in a proof of Theorem 1.6.

2. Series expansions for the Brownian bridge

We discuss the Karhunen–Loève expansion as well as the Fourier expansion of the Brownian bridge more closely, and we derive expressions for the covariance functions of their Gaussian fluctuation processes.

In our analysis, we frequently use a type of Itô isometry for Itô integrals with respect to a Brownian bridge, and we include its statement and proof for completeness.

Lemma 2.1. Let $(B_t)_{t\in [0,1]}$ be a Brownian bridge in $\mathbb{R}$ with $B_0=B_1=0$ , and let $f,g{\kern-0.5pt}\colon [0,1]\to{\mathbb{R}}$ be integrable functions. Setting $F(1)=\int _0^1 f(t){\mathrm{d}} t$ and $G(1)=\int _0^1 g(t){\mathrm{d}} t$ , we have

\begin{equation*} {\mathbb {E}}\left [\left (\int _0^1 f(t){\mathrm {d}} B_t\right )\left (\int _0^1 g(t){\mathrm {d}} B_t\right )\right ] =\int _0^1 f(t)g(t) {\mathrm {d}} t - F(1)G(1). \end{equation*}

Proof. For a standard one-dimensional Brownian motion $(W_t)_{t\in [0,1]}$ , the process $(W_t-t W_1)_{t\in [0,1]}$ has the same law as the Brownian bridge $(B_t)_{t\in [0,1]}$ . In particular, the random variable $\int _0^1 f(t){\mathrm{d}} B_t$ is equal in law to the random variable

\begin{equation*} \int _0^1 f(t){\mathrm {d}} W_t -W_1\int _0^1f(t){\mathrm {d}} t =\int _0^1 f(t){\mathrm {d}} W_t - W_1 F(1). \end{equation*}

Using a similar expression for $\int _0^1 g(t){\mathrm{d}} B_t$ and applying the usual Itô isometry, we deduce that

\begin{align*} &{\mathbb{E}}\left [\left (\int _0^1 f(t){\mathrm{d}} B_t\right )\left (\int _0^1 g(t){\mathrm{d}} B_t\right )\right ]\\[5pt] &\qquad =\int _0^1 f(t)g(t){\mathrm{d}} t-F(1)\int _0^1 g(t){\mathrm{d}} t -G(1)\int _0^1 f(t){\mathrm{d}} t+F(1)G(1)\\[5pt] &\qquad =\int _0^1 f(t)g(t){\mathrm{d}} t - F(1)G(1), \end{align*}

as claimed.
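The isometry can also be checked by simulation; the sketch below (Python with NumPy; the test functions $f(t)=\cos\!(2\pi t)$ and $g(t)=t$, the grid size and the tolerances are our choices) discretises the bridge increments as ${\mathrm{d}} B={\mathrm{d}} W - W_1\,{\mathrm{d}} t$, as in the proof above.

```python
import numpy as np

rng = np.random.default_rng(7)
n, M = 256, 20_000
t = (np.arange(n) + 0.5) / n                            # midpoints of the grid
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(M, n))     # Brownian increments
W1 = dW.sum(axis=1)
dB = dW - W1[:, None] / n                               # bridge increments dB = dW - W_1 dt

f = np.cos(2.0 * np.pi * t)   # F(1) = 0
g = t                         # G(1) = 1/2
If = dB @ f                   # Riemann-Ito sums for int f dB and int g dB
Ig = dB @ g

# Lemma 2.1 predicts E[If^2] = 1/2, E[Ig^2] = 1/3 - 1/4 = 1/12 and E[If Ig] = 0.
m_ff = np.mean(If * If)
m_gg = np.mean(Ig * Ig)
m_fg = np.mean(If * Ig)
```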

2.1 The Karhunen–Loève expansion

Mercer’s theorem, see [Reference Mercer29], states that for a continuous symmetric nonnegative definite kernel $K\colon [0,1]\times [0,1]\to{\mathbb{R}}$ there exists an orthonormal basis $\{e_k\}_{k\in{\mathbb{N}}}$ of $L^2([0,1])$ which consists of eigenfunctions of the Hilbert–Schmidt integral operator associated with $K$ and whose eigenvalues $\{\lambda _k\}_{k\in{\mathbb{N}}}$ are nonnegative and such that, for $s,t\in [0,1]$ , we have the representation

\begin{equation*} K(s,t)=\sum _{k=1}^\infty \lambda _k e_k(s) e_k(t), \end{equation*}

which converges absolutely and uniformly on $[0,1]\times [0,1]$ . For the covariance function $K_B$ defined by (1.1) of the Brownian bridge $(B_t)_{t\in [0,1]}$ , we obtain, for $k\in{\mathbb{N}}$ and $t\in [0,1]$ ,

\begin{equation*} e_k(t)=\sqrt {2}\sin\!(k\pi t) \quad \text {and}\quad \lambda _k=\frac {1}{k^2\pi ^2}. \end{equation*}
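These eigenpairs can be verified by quadrature: for fixed $t$, the midpoint rule applied to $\int_0^1 K_B(s,t)e_k(s)\,{\mathrm{d}} s$ should return $\lambda_k e_k(t)$. A minimal Python sketch (NumPy; the node count, index $k$ and test point $t$ are arbitrary):

```python
import numpy as np

# Midpoint-rule check that e_k(s) = sqrt(2) sin(k pi s) is an eigenfunction of the
# integral operator with kernel K_B(s, t) = min(s, t) - s t, eigenvalue 1/(k pi)^2.
n = 20_000
s = (np.arange(n) + 0.5) / n
k, t = 3, 0.37

K = np.minimum(s, t) - s * t
e_k = np.sqrt(2.0) * np.sin(k * np.pi * s)
lhs = np.sum(K * e_k) / n                               # int_0^1 K_B(s, t) e_k(s) ds
rhs = np.sqrt(2.0) * np.sin(k * np.pi * t) / (k * np.pi) ** 2
```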

The Karhunen–Loève expansion of the Brownian bridge is then given by

\begin{equation*} B_t=\sum _{k=1}^\infty \sqrt {2}\sin\!(k\pi t) Z_k \quad \text {where}\quad Z_k=\int _0^1 \sqrt {2}\sin\!(k\pi r) B_r {\mathrm {d}} r, \end{equation*}

which after integration by parts yields the expression (1.2). Applying Lemma 2.1, we can compute the covariance functions of the associated fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ .

Lemma 2.2. The fluctuation process $(F_t^{N,1})_{t\in [0,1]}$ for $N\in{\mathbb{N}}$ is a zero-mean Gaussian process with covariance function $NC_1^N$ where $C_1^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ is given by

\begin{equation*} C_1^N(s,t)=\min\!(s,t)-st- \sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}. \end{equation*}

Proof. From the definition (1.7), we see that $(F_t^{N,1})_{t\in [0,1]}$ is a zero-mean Gaussian process. Hence, it suffices to determine its covariance function. By Lemma 2.1, we have, for $k,l\in{\mathbb{N}}$ ,

\begin{equation*} {\mathbb {E}}\left [\left (\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right )\left (\int _0^1\cos\!(l\pi r){\mathrm {d}} B_r\right )\right ] =\int _0^1\cos\!(k\pi r)\cos\!(l\pi r){\mathrm {d}} r= \begin {cases} \frac {1}{2} & \text {if }k=l\\[5pt] 0 & \text {otherwise} \end {cases} \end{equation*}

and, for $t\in [0,1]$ ,

\begin{equation*} {\mathbb {E}}\left [B_t\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right ] =\int _0^t\cos\!(k\pi r){\mathrm {d}} r=\frac {\sin\!(k\pi t)}{k\pi }. \end{equation*}

Therefore, from (1.1) and (1.7), we obtain that, for all $s,t\in [0,1]$ ,

\begin{equation*} {\mathbb {E}}\left [F_s^{N,1}F_t^{N,1}\right ] =N\left (\min\!(s,t)-st -\sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\right ), \end{equation*}

as claimed.

Consequently, Theorem 1.2 is a statement about the pointwise convergence of the function $NC_1^N$ in the limit $N\to \infty$ .

For our stand-alone derivation of the values of the Riemann zeta function at even positive integers in Section 3, it is further important to note that since, by Mercer’s theorem, the representation

(2.1) \begin{equation} K_B(s,t)=\min\!(s,t)-st=\sum _{k=1}^\infty \frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2} \end{equation}

converges uniformly for $s,t\in [0,1]$ , the sequence $\{C_1^N\}_{N\in{\mathbb{N}}}$ converges uniformly on $[0,1]\times [0,1]$ to the zero function. It follows that, for all $n\in{\mathbb{N}}_0$ ,

(2.2) \begin{equation} \lim _{N\to \infty }\int _0^1 C_1^N(t,t) t^n{\mathrm{d}} t = 0. \end{equation}

2.2 The Fourier expansion

Whereas for the Karhunen–Loève expansion the sequence

\begin{equation*} \left \{\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right \}_{k\in {\mathbb {N}}} \end{equation*}

of random coefficients is formed by independent Gaussian random variables, it is crucial to observe that the random coefficients appearing in the Fourier expansion are not independent. Integrating by parts, we can rewrite the coefficients defined in (1.4) as

(2.3) \begin{equation} a_0=2\int _0^1B_r{\mathrm{d}} r=-2\int _0^1r{\mathrm{d}} B_r \quad \text{and}\quad b_0=0 \end{equation}

as well as, for $k\in{\mathbb{N}}$ ,

(2.4) \begin{equation} a_k=-\int _0^1\frac{\sin\!(2k\pi r)}{k\pi }{\mathrm{d}} B_r \quad \text{and}\quad b_k=\int _0^1\frac{\cos\!(2k\pi r)}{k\pi }{\mathrm{d}} B_r. \end{equation}

Applying Lemma 2.1, we see that

(2.5) \begin{equation} {\mathbb{E}}\left [a_0^2\right ]=4\left (\int _0^1 r^2{\mathrm{d}} r-\frac{1}{4}\right )=\frac{1}{3} \end{equation}

and, for $k,l\in{\mathbb{N}}$ ,

(2.6) \begin{equation} {\mathbb{E}}\left [a_k a_l\right ]={\mathbb{E}}\left [b_k b_l\right ]= \begin{cases} \dfrac{1}{2k^2\pi ^2} & \text{if }k=l\\[10pt] 0 & \text{otherwise} \end{cases}. \end{equation}

Since the random coefficients are Gaussian random variables with mean zero, by (2.3) and (2.4), this implies that, for $k\in{\mathbb{N}}$ ,

\begin{equation*} a_0\sim \mathcal {N}\left (0,\frac {1}{3}\right ) \quad \text {and}\quad a_k,b_k\sim \mathcal {N}\left (0,\frac {1}{2k^2\pi ^2}\right ). \end{equation*}

For the remaining covariances of these random coefficients, we obtain that, for $k,l\in{\mathbb{N}}$ ,

(2.7) \begin{equation} {\mathbb{E}}\left [a_k b_l\right ]=0,\quad{\mathbb{E}}\left [a_0a_k\right ]=2\int _0^1\frac{\sin\!(2k\pi r)}{k\pi }r{\mathrm{d}} r=-\frac{1}{k^2\pi ^2} \quad \text{and}\quad{\mathbb{E}}\left [a_0b_k\right ]=0. \end{equation}
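The nonzero covariance here can be confirmed by direct quadrature; the following sketch (Python with SciPy's `quad`; the chosen values of $k$ are arbitrary) evaluates $2\int_0^1 r\sin\!(2k\pi r)/(k\pi)\,{\mathrm{d}} r$ and compares it with $-1/(k^2\pi^2)$.

```python
import numpy as np
from scipy.integrate import quad

# Evaluate E[a_0 a_k] = 2 int_0^1 r sin(2 k pi r)/(k pi) dr for a few values of k.
vals, targets = [], []
for k in (1, 2, 5):
    val, _ = quad(lambda r: 2.0 * r * np.sin(2.0 * k * np.pi * r) / (k * np.pi), 0.0, 1.0)
    vals.append(val)
    targets.append(-1.0 / (k ** 2 * np.pi ** 2))
```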

Using the covariance structure of the random coefficients, we determine the covariance functions of the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ defined in (1.8) for the Fourier series expansion.

Lemma 2.3. The fluctuation process $(F_t^{N,2})_{t\in [0,1]}$ for $N\in{\mathbb{N}}$ is a Gaussian process with mean zero whose covariance function is $2NC_2^N$ where $C_2^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ is given by

\begin{equation*} C_2^N(s,t)=\min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12}- \sum _{k=1}^N\frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}

Proof. Repeatedly applying Lemma 2.1, we compute that, for $t\in [0,1]$ ,

(2.8) \begin{equation} {\mathbb{E}}\left [B_ta_0\right ]=-2\int _0^t r{\mathrm{d}} r+\int _0^t{\mathrm{d}} r=t-t^2 \end{equation}

as well as, for $k\in{\mathbb{N}}$ ,

(2.9) \begin{equation} {\mathbb{E}}\left [B_ta_k\right ]=-\int _0^t\frac{\sin\!(2k\pi r)}{k\pi }{\mathrm{d}} r =\frac{\cos\!(2k\pi t)-1}{2k^2\pi ^2} \quad \text{and}\quad{\mathbb{E}}\left [B_tb_k\right ]=\frac{\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{equation}

From (2.5) and (2.8), it follows that, for $s,t\in [0,1]$ ,

\begin{equation*} {\mathbb {E}}\left [\left (B_s-\frac {1}{2}a_0\right )\left (B_t-\frac {1}{2}a_0\right )\right ] =\min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12}, \end{equation*}

whereas (2.7) and (2.9) imply that

\begin{equation*} {\mathbb {E}}\left [\frac {1}{2}a_0\sum _{k=1}^N a_k \cos\!(2k\pi t) -B_s\sum _{k=1}^N a_k \cos\!(2k\pi t)\right ] =-\sum _{k=1}^N\frac {\cos\!(2k\pi s)\cos\!(2k\pi t)}{2k^2\pi ^2} \end{equation*}

as well as

\begin{equation*} {\mathbb {E}}\left [B_s\sum _{k=1}^N b_k \sin\!(2k\pi t)\right ] =\sum _{k=1}^N\frac {\sin\!(2k\pi s)\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{equation*}

It remains to observe that, by (2.6) and (2.7),

\begin{align*} &{\mathbb{E}}\left [\left (\sum _{k=1}^N\left (a_k\cos\!(2k\pi s)+b_k\sin\!(2k\pi s)\right )\right ) \left (\sum _{k=1}^N\left (a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t)\right )\right )\right ]\\[5pt] &\qquad = \sum _{k=1}^N\frac{\cos\!(2k\pi s)\cos\!(2k\pi t)+\sin\!(2k\pi s)\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{align*}

Using the identity

(2.10) \begin{equation} \cos\!(2k\pi (t-s))=\cos\!(2k\pi s)\cos\!(2k\pi t)+\sin\!(2k\pi s)\sin\!(2k\pi t) \end{equation}

and recalling the definition (1.8) of the fluctuation process $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier expansion, we obtain the desired result.

By combining Corollary 1.3, the resolution (1.9) to the Basel problem and the representation (2.1), we can determine the pointwise limit of $2N C_2^N$ as $N\to \infty$ . We defer further considerations to Section 4.2, which demonstrates that the identity (1.9) is in fact a consequence of our analysis.

2.3 The polynomial expansion

As pointed out in the introduction and as discussed in detail in [Reference Foster, Lyons and Oberhauser12], the polynomial expansion of the Brownian bridge is a type of Karhunen–Loève expansion in the weighted $L^2({\mathbb{P}})$ space with weight function $w$ on $(0,1)$ defined by $w(t)=\frac{1}{t(1-t)}$ .

An alternative derivation of the polynomial expansion is given in [Reference Habermann19] by considering iterated Kolmogorov diffusions. The iterated Kolmogorov diffusion of step $N\in{\mathbb{N}}$ pairs a one-dimensional Brownian motion $(W_t)_{t\in [0,1]}$ with its first $N-1$ iterated time integrals, that is, it is the stochastic process in ${\mathbb{R}}^N$ of the form

\begin{equation*} \left (W_t,\int _0^t W_{s_1}{\mathrm {d}} s_1,\ldots, \int _0^t\int _0^{s_{N-1}}\ldots \int _0^{s_2} W_{s_1}{\mathrm {d}} s_1\ldots {\mathrm {d}} s_{N-1}\right )_{t\in [0,1]}. \end{equation*}

The shifted Legendre polynomial $Q_k$ of degree $k\in{\mathbb{N}}$ on the interval $[0,1]$ is defined in terms of the standard Legendre polynomial $P_k$ of degree $k$ on $[-1,1]$ by, for $t\in [0,1]$ ,

\begin{equation*} Q_k(t)=P_k(2t-1). \end{equation*}

It is then shown that the first component of an iterated Kolmogorov diffusion of step $N\in{\mathbb{N}}$ conditioned to return to $0\in{\mathbb{R}}^N$ in time $1$ has the same law as the stochastic process

\begin{equation*} \left (B_t-\sum _{k=1}^{N-1}(2k+1)\int _0^tQ_k(r){\mathrm {d}} r \int _0^1Q_k(r){\mathrm {d}} B_r\right )_{t\in [0,1]}. \end{equation*}

The polynomial expansion (1.5) is an immediate consequence of the result [[Reference Habermann19], Theorem 1.4] which states that these first components of the conditioned iterated Kolmogorov diffusions converge weakly as $N\to \infty$ to the zero process.

As for the Karhunen–Loève expansion discussed above, the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ of random coefficients defined by (1.6) is again formed by independent Gaussian random variables. To see this, we first recall the following identities for Legendre polynomials [[Reference Arfken and Weber1], (12.23), (12.31), (12.32)] which in terms of the shifted Legendre polynomials read as, for $k\in{\mathbb{N}}$ ,

(2.11) \begin{equation} Q_k = \frac{1}{2(2k+1)}\left (Q_{k+1}^{\prime } - Q_{k-1}^{\prime }\right ),\qquad Q_k(0) = (\!-\!1)^k,\qquad Q_k(1) = 1. \end{equation}

In particular, it follows that, for all $k\in{\mathbb{N}}$ ,

\begin{equation*} \int _0^1 Q_k(r){\mathrm {d}} r=0, \end{equation*}

which, by Lemma 2.1, implies that, for $k,l\in{\mathbb{N}}$ ,

\begin{equation*} {\mathbb {E}}\left [c_k c_l\right ] ={\mathbb {E}}\left [\left (\int _0^1 Q_k(r){\mathrm {d}} B_r\right )\left (\int _0^1 Q_l(r){\mathrm {d}} B_r\right )\right ] =\int _0^1 Q_k(r) Q_l(r){\mathrm {d}} r= \begin {cases} \dfrac {1}{2k+1} & \text {if } k=l\\[6pt] 0 & \text {otherwise} \end {cases}. \end{equation*}

Since the random coefficients are Gaussian with mean zero, this establishes their independence.
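For illustration, the identities (2.11), the vanishing integrals and the resulting covariance structure of the coefficients $c_k$ can be verified numerically; the following sketch uses NumPy's Legendre class with the domain shifted to $[0,1]$ (the helper name `shifted_legendre` is ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def shifted_legendre(k):
    # Q_k(t) = P_k(2t - 1): standard Legendre polynomial mapped from [-1, 1] to [0, 1]
    return Legendre.basis(k, domain=[0, 1])

for k in range(1, 6):
    Q = shifted_legendre(k)
    assert np.isclose(float(Q(0.0)), (-1.0) ** k)   # Q_k(0) = (-1)^k
    assert np.isclose(float(Q(1.0)), 1.0)           # Q_k(1) = 1
    antideriv = Q.integ()
    # int_0^1 Q_k(r) dr = 0 for k >= 1
    assert np.isclose(float(antideriv(1.0) - antideriv(0.0)), 0.0, atol=1e-10)

# E[c_k c_l] = int_0^1 Q_k(r) Q_l(r) dr = delta_{kl} / (2k + 1)
for k in range(1, 5):
    for l in range(1, 5):
        P = (shifted_legendre(k) * shifted_legendre(l)).integ()
        inner = float(P(1.0) - P(0.0))
        expected = 1.0 / (2 * k + 1) if k == l else 0.0
        assert np.isclose(inner, expected, atol=1e-10)
```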

The fluctuation processes $(F_t^{N,3})_{t\in [0,1]}$ for the polynomial expansion defined by

(2.12) \begin{equation} F_t^{N,3}=\sqrt{N}\left (B_t-\sum _{k=1}^{N-1}(2k+1) \int _0^t Q_k(r){\mathrm{d}} r \int _0^1 Q_k(r){\mathrm{d}} B_r\right ) \end{equation}

are studied in [Reference Habermann19]. According to [[Reference Habermann19], Theorem 1.6], they converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^3)_{t\in [0,1]}$ of independent Gaussian random variables with mean zero and variance

\begin{equation*} {\mathbb {E}}\left [\left (F_t^3\right )^2\right ]=\frac {1}{\pi }\sqrt {t(1-t)}, \end{equation*}

that is, the variance function of the limit fluctuations is given by a scaled semicircle.

3. Particular values of the Riemann zeta function

We demonstrate how to use the Karhunen–Loève expansion of the Brownian bridge or, more precisely, the series representation arising from Mercer’s theorem for the covariance function of the Brownian bridge to determine the values of the Riemann zeta function at even positive integers. The analysis further feeds directly into Section 4.1 where we characterise the limit fluctuations for the Karhunen–Loève expansion.

The crucial ingredient is the observation (2.2) from Section 2, which implies that, for all $n\in{\mathbb{N}}_0$ ,

(3.1) \begin{equation} \sum _{k=1}^\infty \int _0^1 \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}t^{n}{\mathrm{d}} t =\int _0^1\left (t-t^2\right )t^{n}{\mathrm{d}} t =\frac{1}{(n+2)(n+3)}. \end{equation}

For completeness, we recall that the Riemann zeta function $\zeta \colon{\mathbb{C}}\setminus \{1\}\to{\mathbb{C}}$ analytically continues the sum of the Dirichlet series

\begin{equation*} \zeta (s)=\sum _{k=1}^\infty \frac {1}{k^s}. \end{equation*}

When discussing its values at even positive integers, we encounter the Bernoulli numbers. The Bernoulli numbers $B_n$ , for $n\in{\mathbb{N}}$ , are signed rational numbers defined by an exponential generating function via, for $t\in (\!-\!2\pi,2\pi )$ ,

\begin{equation*} \frac {t}{\operatorname {e}^t-1}=1+\sum _{n=1}^\infty \frac {B_n t^n}{n!}, \end{equation*}

see Borevich and Shafarevich [[Reference Borevich and Shafarevich3], Chapter 5.8]. These numbers play an important role in number theory and analysis. For instance, they feature in the series expansion of the (hyperbolic) tangent and the (hyperbolic) cotangent, and they appear in formulae by Bernoulli and by Faulhaber for the sum of positive integer powers of the first $k$ positive integers. The characterisation of the Bernoulli numbers which is essential to our analysis is that, according to [[Reference Borevich and Shafarevich3], Theorem 5.8.1], they satisfy and are uniquely given by the recurrence relations

(3.2) \begin{equation} 1+\sum _{n=1}^{m}\binom{m+1}{n}B_n=0\quad \text{for }m\in{\mathbb{N}}. \end{equation}

In particular, choosing $m=1$ yields $1+2B_1=0$ , which shows that

\begin{equation*} B_{1}=-\frac {1}{2}. \end{equation*}

Moreover, since the function defined by, for $t\in (\!-\!2\pi,2\pi )$ ,

\begin{equation*} \frac {t}{\operatorname {e}^t-1}+\frac {t}{2}=1+\sum _{n=2}^\infty \frac {B_n t^n}{n!} \end{equation*}

is an even function, we obtain $B_{2n+1}=0$ for all $n\in{\mathbb{N}}$ , see [[Reference Borevich and Shafarevich3], Theorem 5.8.2]. It follows from (3.2) that the Bernoulli numbers $B_{2n}$ indexed by even positive integers are uniquely characterised by the recurrence relations

(3.3) \begin{equation} \sum _{n=1}^m \binom{2m+1}{2n}B_{2n}=\frac{2m-1}{2}\quad \text{for }m\in{\mathbb{N}}. \end{equation}

These recurrence relations are our tool for identifying the Bernoulli numbers when determining the values of the Riemann zeta function at even positive integers.
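The recurrence relations (3.2) and (3.3) translate directly into exact rational arithmetic; as a quick illustrative sketch (the helper name `bernoulli` is ours):

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    # Solve the recurrence (3.2): 1 + sum_{n=1}^{m} C(m+1, n) B_n = 0 for m = 1, ..., N,
    # isolating B_m at each step; B_0 = 1 corresponds to the leading "1 +" term.
    B = {0: Fraction(1)}
    for m in range(1, N + 1):
        s = Fraction(1) + sum(Fraction(comb(m + 1, n)) * B[n] for n in range(1, m))
        B[m] = -s / comb(m + 1, m)
    return B

B = bernoulli(8)
assert B[1] == Fraction(-1, 2)
assert B[2] == Fraction(1, 6)
assert B[3] == 0 and B[5] == 0 and B[7] == 0   # odd-index Bernoulli numbers vanish
assert B[4] == Fraction(-1, 30)
assert B[6] == Fraction(1, 42)
assert B[8] == Fraction(-1, 30)

# the even-index recurrence (3.3): sum_{n=1}^m C(2m+1, 2n) B_{2n} = (2m - 1)/2
for m in range(1, 4):
    lhs = sum(Fraction(comb(2 * m + 1, 2 * n)) * B[2 * n] for n in range(1, m + 1))
    assert lhs == Fraction(2 * m - 1, 2)
```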

The starting point for our analysis is (3.1), and we first illustrate how it allows us to compute $\zeta (2)$ . Taking $n=0$ in (3.1), multiplying through by $\pi ^2$ , and using that $\int _0^1\left (\sin\!(k\pi t)\right )^2{\mathrm{d}} t=\frac{1}{2}$ for $k\in{\mathbb{N}}$ , we deduce that

\begin{equation*} \zeta (2)=\sum _{k=1}^\infty \frac {1}{k^2} =\sum _{k=1}^\infty \int _0^1 \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2} {\mathrm {d}} t =\frac {\pi ^2}{6}. \end{equation*}

We observe that this is exactly the identity obtained by applying the general result

\begin{equation*} \int _0^1 K(t,t){\mathrm {d}} t=\sum _{k=1}^\infty \lambda _k \end{equation*}

for a representation arising from Mercer’s theorem to the representation for the covariance function $K_B$ of the Brownian bridge.

For working out the values for the remaining even positive integers, we iterate over the degree of the moment in (3.1). While for the remainder of this section it suffices to only consider the even moments, we derive the following recurrence relation and the explicit expression both for the even and for the odd moments as these are needed in Section 4.1. For $k\in{\mathbb{N}}$ and $n\in{\mathbb{N}}_0$ , we set

\begin{equation*} e_{k,n}=\int _0^1 2\left (\sin\!(k\pi t)\right )^2t^{n}{\mathrm {d}} t. \end{equation*}

Lemma 3.1. For all $k\in{\mathbb{N}}$ and all $n\in{\mathbb{N}}$ with $n\geq 2$ , we have

\begin{equation*} e_{k,n}=\frac {1}{n+1}-\frac {n(n-1)}{4k^2\pi ^2}e_{k,n-2} \end{equation*}

subject to the initial conditions

\begin{equation*} e_{k,0}=1\quad \text {and}\quad e_{k,1}=\frac {1}{2}. \end{equation*}

Proof. For $k\in{\mathbb{N}}$ , the values for $e_{k,0}$ and $e_{k,1}$ can be verified directly. For $n\in{\mathbb{N}}$ with $n\geq 2$ , we integrate by parts twice to obtain

\begin{align*} e_{k,n} &=\int _0^1 2\left (\sin\!(k\pi t)\right )^2t^{n}{\mathrm{d}} t\\[5pt] &=1-\int _0^1\left (t-\frac{\sin\!(2k\pi t)}{2k\pi }\right ) nt^{n-1}{\mathrm{d}} t\\[5pt] &=1-\frac{n}{2}+\frac{n(n-1)}{2} \int _0^1\left (t^2- \frac{\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}\right )t^{n-2}{\mathrm{d}} t \\[5pt] &=\frac{2-n}{2}+\frac{n(n-1)}{2} \left (\frac{1}{n+1}-\frac{1}{2k^2\pi ^2}e_{k,n-2}\right )\\[5pt] &=\frac{1}{n+1}-\frac{n(n-1)}{4k^2\pi ^2}e_{k,n-2}, \end{align*}

as claimed.

Iteratively applying the recurrence relation, we find the following explicit expression, which, despite being somewhat involved, is exactly what we need.

Lemma 3.2. For all $k\in{\mathbb{N}}$ and $m\in{\mathbb{N}}_0$ , we have

\begin{align*} e_{k,2m}&=\frac{1}{2m+1}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}} \quad \text{and}\\[5pt] e_{k,2m+1}&=\frac{1}{2m+2}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m+1)!}{(2(m-n)+2)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}. \end{align*}

Proof. We proceed by induction over $m$ . Since $e_{k,0}=1$ and $e_{k,1}=\frac{1}{2}$ for all $k\in{\mathbb{N}}$ , the expressions are true for $m=0$ with the sums being understood as empty sums in this case. Assuming that the result is true for some fixed $m\in{\mathbb{N}}_0$ , we use Lemma 3.1 to deduce that

\begin{align*} e_{k,2m+2}&=\frac{1}{2m+3}-\frac{(2m+2)(2m+1)}{4k^2\pi ^2}e_{k,2m}\\[5pt] &=\frac{1}{2m+3}-\frac{2m+2}{4k^2\pi ^2}- \sum _{n=1}^m\frac{(\!-\!1)^n(2m+2)!}{(2(m-n)+1)!2^{2n+2}}\frac{1}{k^{2n+2}\pi ^{2n+2}}\\[5pt] &=\frac{1}{2m+3}+ \sum _{n=1}^{m+1}\frac{(\!-\!1)^n(2m+2)!}{(2(m-n)+3)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}} \end{align*}

as well as

\begin{align*} e_{k,2m+3}&=\frac{1}{2m+4}-\frac{(2m+3)(2m+2)}{4k^2\pi ^2}e_{k,2m+1}\\[5pt] &=\frac{1}{2m+4}-\frac{2m+3}{4k^2\pi ^2}- \sum _{n=1}^m\frac{(\!-\!1)^n(2m+3)!}{(2(m-n)+2)!2^{2n+2}}\frac{1}{k^{2n+2}\pi ^{2n+2}}\\[5pt] &=\frac{1}{2m+4}+ \sum _{n=1}^{m+1}\frac{(\!-\!1)^n(2m+3)!}{(2(m-n)+4)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}, \end{align*}

which settles the induction step.
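As a sanity check, the even and odd cases of Lemma 3.2 can be combined into the single formula $e_{k,n}=\frac{1}{n+1}+\sum _{j=1}^{\lfloor n/2\rfloor }\frac{(\!-\!1)^j\,n!}{(n-2j+1)!\,2^{2j}k^{2j}\pi ^{2j}}$ and compared numerically both against the defining integrals and against the identity (3.1); the following sketch does so (helper names and truncation levels are our own choices):

```python
import math
import numpy as np

def e_closed(k, n):
    # Lemma 3.2, with the even and odd cases written as a single formula
    val = 1.0 / (n + 1)
    for j in range(1, n // 2 + 1):
        val += ((-1) ** j * math.factorial(n)
                / (math.factorial(n - 2 * j + 1) * 4 ** j * (k * math.pi) ** (2 * j)))
    return val

# check against midpoint-rule quadrature of e_{k,n} = int_0^1 2 sin(k pi t)^2 t^n dt
h = 1.0 / 200_000
mid = (np.arange(200_000) + 0.5) * h
for k in (1, 2, 5):
    for n in range(6):
        quad = np.sum(2 * np.sin(k * np.pi * mid) ** 2 * mid ** n) * h
        assert abs(quad - e_closed(k, n)) < 1e-7

# check the identity (3.1): sum_k e_{k,n} / (k^2 pi^2) = 1 / ((n+2)(n+3))
ks = np.arange(1, 1_000_001, dtype=np.float64)
for n in range(5):
    vals = np.full_like(ks, 1.0 / (n + 1))
    for j in range(1, n // 2 + 1):
        vals += ((-1) ** j * math.factorial(n)
                 / (math.factorial(n - 2 * j + 1) * 4 ** j)) / (ks * math.pi) ** (2 * j)
    lhs = np.sum(vals / (ks * math.pi) ** 2)
    assert abs(lhs - 1.0 / ((n + 2) * (n + 3))) < 1e-5
```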

Focusing on the even moments for the remainder of this section, we see that by (3.1), for all $m\in{\mathbb{N}}_0$ ,

\begin{equation*} \sum _{k=1}^\infty \frac {e_{k,2m}}{k^2\pi ^2} =\frac {1}{(2m+2)(2m+3)}. \end{equation*}

From Lemma 3.2, it follows that

\begin{equation*} \sum _{k=1}^\infty \frac {1}{k^2\pi ^2} \left (\sum _{n=0}^m\frac {(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac {1}{k^{2n}\pi ^{2n}}\right ) =\frac {1}{(2m+2)(2m+3)}. \end{equation*}

Since $\sum _{k=1}^\infty k^{-2n}$ converges for all $n\in{\mathbb{N}}$ , we can rearrange sums to obtain

\begin{equation*} \sum _{n=0}^m\frac {(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}} \left (\sum _{k=1}^\infty \frac {1}{k^{2n+2}\pi ^{2n+2}}\right ) =\frac {1}{(2m+2)(2m+3)}, \end{equation*}

which, in terms of the Riemann zeta function and after reindexing the sum, can be rewritten as

\begin{equation*} \sum _{n=1}^{m+1}\frac {(\!-\!1)^{n+1}(2m)!}{(2(m-n)+3)!2^{2n-2}} \frac {\zeta (2n)}{\pi ^{2n}} =\frac {1}{(2m+2)(2m+3)}. \end{equation*}

Multiplying through by $(2m+1)(2m+2)(2m+3)$ shows that, for all $m\in{\mathbb{N}}_0$ ,

\begin{equation*} \sum _{n=1}^{m+1}\binom {2m+3}{2n} \left (\frac {(\!-\!1)^{n+1}2(2n)!}{\left (2\pi \right )^{2n}}\zeta (2n)\right ) =\frac {2m+1}{2}. \end{equation*}

Comparing the last expression with the characterisation (3.3) of the Bernoulli numbers $B_{2n}$ indexed by even positive integers implies that

\begin{equation*} B_{2n}=\frac {(\!-\!1)^{n+1}2(2n)!}{\left (2\pi \right )^{2n}}\zeta (2n), \end{equation*}

that is, we have established that, for all $n\in{\mathbb{N}}$ ,

\begin{equation*} \zeta (2n)=(\!-\!1)^{n+1}\frac {\left (2\pi \right )^{2n}B_{2n}}{2(2n)!}. \end{equation*}
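For illustration, these closed-form values can be compared with partial sums of the Dirichlet series; the following sketch (helper names are ours) obtains $B_{2n}$ from the recurrence (3.3):

```python
import math
from fractions import Fraction
from math import comb

def bernoulli_even(M):
    # B_2, B_4, ..., B_{2M} from the recurrence (3.3):
    # sum_{n=1}^m C(2m+1, 2n) B_{2n} = (2m - 1)/2
    B = {}
    for m in range(1, M + 1):
        s = sum(Fraction(comb(2 * m + 1, 2 * n)) * B[2 * n] for n in range(1, m))
        B[2 * m] = (Fraction(2 * m - 1, 2) - s) / comb(2 * m + 1, 2 * m)
    return B

B_even = bernoulli_even(4)

def zeta_even(n):
    # zeta(2n) = (-1)^{n+1} (2 pi)^{2n} B_{2n} / (2 (2n)!)
    return ((-1) ** (n + 1) * (2 * math.pi) ** (2 * n)
            * float(B_even[2 * n]) / (2 * math.factorial(2 * n)))

# compare with partial sums of sum_k k^{-2n}
for n in range(1, 5):
    partial = sum(1.0 / k ** (2 * n) for k in range(1, 200_001))
    assert abs(zeta_even(n) - partial) < 1e-4

assert abs(zeta_even(1) - math.pi ** 2 / 6) < 1e-12    # zeta(2) = pi^2 / 6
assert abs(zeta_even(2) - math.pi ** 4 / 90) < 1e-12   # zeta(4) = pi^4 / 90
```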

4. Fluctuations for the trigonometric expansions of the Brownian bridge

We first prove Theorem 1.2 and Corollary 1.3 which we use to determine the pointwise limits for the covariance functions of the fluctuation processes for the Karhunen–Loève expansion and of the fluctuation processes for the Fourier series expansion, and then we deduce Theorem 1.1.

4.1 Fluctuations for the Karhunen–Loève expansion

For the moment analysis initiated in the previous section to identify the limit of $NC_1^N$ as $N\to \infty$ on the diagonal, we need the limit function to be continuous away from the endpoints, which we guarantee by applying the Arzelà–Ascoli theorem. To this end, we first need to establish the uniform boundedness of two families of functions. Recall that the functions $C_1^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ are defined in Lemma 2.2.

Lemma 4.1. The family $\{NC_1^N(t,t)\colon N\in{\mathbb{N}}\text{ and }t\in [0,1]\}$ is uniformly bounded.

Proof. Combining the expression for $C_1^N(t,t)$ from Lemma 2.2 and the representation (2.1) for $K_B$ arising from Mercer’s theorem, we see that

\begin{equation*} NC_1^N(t,t)=N\sum _{k=N+1}^\infty \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation*}

In particular, for all $N\in{\mathbb{N}}$ and all $t\in [0,1]$ , we have

\begin{equation*} \left |NC_1^N(t,t)\right |\leq N\sum _{k=N+1}^\infty \frac {2}{k^2\pi ^2}. \end{equation*}

We further observe that

(4.1) \begin{equation} \lim _{M\to \infty }N\sum _{k=N+1}^M\frac{1}{k^2}\leq \lim _{M\to \infty } N\sum _{k=N+1}^M\left (\frac{1}{k-1}-\frac{1}{k}\right ) =\lim _{M\to \infty }\left (1-\frac{N}{M}\right )=1. \end{equation}

It follows that, for all $N\in{\mathbb{N}}$ and all $t\in [0,1]$ ,

\begin{equation*} \left |NC_1^N(t,t)\right |\leq \frac {2}{\pi ^2}, \end{equation*}

which is illustrated in Figure 3 and which establishes the claimed uniform boundedness.

Figure 3. Profiles of $t\mapsto NC_1^N(t,t)$ plotted for $N\in \{5, 25, 100\}$ along with $t\mapsto \frac{2}{\pi ^2}$ .
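For illustration, the bound $|NC_1^N(t,t)|\leq \frac{2}{\pi ^2}$ can be checked numerically on a grid; a sketch with arbitrarily chosen resolution (the helper name `NC1_diag` is ours):

```python
import numpy as np

def NC1_diag(N, t):
    # N * C_1^N(t, t) = N * (t - t^2 - sum_{k=1}^N 2 sin(k pi t)^2 / (k^2 pi^2))
    k = np.arange(1, N + 1, dtype=np.float64)[:, None]
    partial = np.sum(2 * np.sin(k * np.pi * t) ** 2 / (k ** 2 * np.pi ** 2), axis=0)
    return N * (t - t ** 2 - partial)

t = np.linspace(0.0, 1.0, 1001)
for N in (5, 25, 100):
    vals = NC1_diag(N, t)
    assert vals.min() >= -1e-12          # the tail representation is a sum of non-negative terms
    assert vals.max() <= 2 / np.pi ** 2  # the uniform bound from Lemma 4.1
```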

Lemma 4.2. Fix $\varepsilon \gt 0$ . The family

\begin{equation*} \left \{N\frac {{\mathrm {d}}}{{\mathrm {d}} t}C_1^N(t,t)\colon N\in {\mathbb {N}}\text { and }t\in [\varepsilon,1-\varepsilon ]\right \} \end{equation*}

is uniformly bounded.

Proof. According to Lemma 2.2, we have, for all $t\in [0,1]$ ,

\begin{equation*} C_1^N(t,t)=t-t^2-\sum _{k=1}^N \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}, \end{equation*}

which implies that

\begin{equation*} N\frac {{\mathrm {d}}}{{\mathrm {d}} t}C_1^N(t,t)=N\left (1-2t-\sum _{k=1}^N\frac {2\sin\!(2k\pi t)}{k\pi }\right ). \end{equation*}

Figure 4. Profiles of $t\mapsto N(\frac{\pi -t}{2}-\sum \limits _{k=1}^N \frac{\sin\!(kt)}{k})$ plotted for $N\in \{5, 25, 100, 1000\}$ on $[\varepsilon, 2\pi - \varepsilon ]$ with $\varepsilon = 0.1$ .

The desired result then follows by showing that, for $\varepsilon \gt 0$ fixed, the family

\begin{equation*} \left \{N\left (\frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k}\right )\colon N\in {\mathbb {N}}\text { and }t\in [\varepsilon,2\pi -\varepsilon ]\right \} \end{equation*}

is uniformly bounded, as illustrated in Figure 4. Employing a standard approach, we use the Dirichlet kernel, for $N\in{\mathbb{N}}$ ,

\begin{equation*} \sum _{k=-N}^N\operatorname {e}^{\operatorname {i} kt}=1+\sum _{k=1}^N2\cos\!(kt)= \frac {\sin\!\left (\left (N+\frac {1}{2}\right )t\right )}{\sin\!\left (\frac {t}{2}\right )} \end{equation*}

to write, for $t\in (0,2\pi )$ ,

\begin{equation*} \frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k} =-\frac {1}{2}\int _\pi ^t\left (1+\sum _{k=1}^N2\cos\!(ks)\right ){\mathrm {d}} s =-\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s. \end{equation*}

Integration by parts yields

\begin{equation*} -\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s =\frac {\cos\!\left (\left (N+\frac {1}{2}\right )t\right )}{(2N+1)\sin\!\left (\frac {t}{2}\right )} -\frac {1}{2N+1}\int _\pi ^t\cos\!\left (\left (N+\frac {1}{2}\right )s\right )\frac {{\mathrm {d}}}{{\mathrm {d}} s} \left (\frac {1}{\sin\!\left (\frac {s}{2}\right )}\right ){\mathrm {d}} s. \end{equation*}

By the first mean value theorem for definite integrals, it follows that for $t\in (0,\pi ]$ fixed, there exists $\xi \in [t,\pi ]$ , whereas for $t\in [\pi,2\pi )$ fixed, there exists $\xi \in [\pi,t]$ , such that

\begin{equation*} -\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s =\frac {\cos\!\left (\left (N+\frac {1}{2}\right )t\right )}{(2N+1)\sin\!\left (\frac {t}{2}\right )} -\frac {\cos\!\left (\left (N+\frac {1}{2}\right )\xi \right )}{2N+1} \left (\frac {1}{\sin\!\left (\frac {t}{2}\right )}-1\right ). \end{equation*}

Since $\left |\cos\!\left (\left (N+\frac{1}{2}\right )\xi \right )\right |$ is bounded above by one independently of $\xi$ and as $\frac{t}{2}\in (0,\pi )$ for $t\in (0,2\pi )$ implies that $0\lt \sin\!\left (\frac{t}{2}\right )\leq 1$ , we conclude that, for all $N\in{\mathbb{N}}$ and for all $t\in (0,2\pi )$ ,

\begin{equation*} N\left |\frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k}\right | \leq \frac {2N}{(2N+1)\sin\!\left (\frac {t}{2}\right )}, \end{equation*}

which, for $t\in [\varepsilon,2\pi -\varepsilon ]$ , is uniformly bounded by $1/\sin\!\left (\frac{\varepsilon }{2}\right )$ .

Remark 4.3. In the proof of the previous lemma, we have essentially controlled the error in the Fourier series expansion for the fractional part of $t$ which is given by

\begin{equation*} \frac {1}{2}-\sum _{k=1}^\infty \frac {\sin\!(2k\pi t)}{k\pi }, \end{equation*}

see [[Reference Iwaniec20], Exercise on p. 4].
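The uniform bound established in the proof of Lemma 4.2 can likewise be checked numerically on a grid; a sketch with $\varepsilon = 0.1$ , matching Figure 4 (grid resolution is an arbitrary choice of ours):

```python
import numpy as np

eps = 0.1
t = np.linspace(eps, 2 * np.pi - eps, 2001)
for N in (5, 25, 100, 1000):
    k = np.arange(1, N + 1, dtype=np.float64)[:, None]
    partial = np.sum(np.sin(k * t) / k, axis=0)
    vals = N * ((np.pi - t) / 2 - partial)
    # the proof gives N |(pi - t)/2 - sum_{k<=N} sin(kt)/k| < 1 / sin(t/2) <= 1 / sin(eps/2)
    assert np.max(np.abs(vals)) < 1 / np.sin(eps / 2)
```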

We can now prove the convergence in Theorem 1.2 on the diagonal away from the endpoints, which consists of a moment analysis to identify the moments of the limit function as well as an application of the Arzelà–Ascoli theorem to show that the limit function is continuous away from the endpoints. Alternatively, one could prove Corollary 1.3 directly with a similar approach as in the proof of Lemma 4.2, but integrating the Dirichlet kernel twice, and then deduce Theorem 1.2. However, as the moment analysis was already set up in Section 3 to determine the values of the Riemann zeta function at even positive integers, we demonstrate how to proceed with this approach.

Proposition 4.4. For all $t\in (0,1)$ , we have

\begin{equation*} \lim _{N\to \infty } N\left (t-t^2-\sum _{k=1}^N \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}\right ) =\frac {1}{\pi ^2}. \end{equation*}

Proof. Recall that, due to Lemma 2.2 and the representation (2.1), we have, for $t\in [0,1]$ ,

(4.2) \begin{equation} C_1^N(t,t)=t-t^2-\sum _{k=1}^N \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2} =\sum _{k=N+1}^\infty \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation}

By Lemmas 4.1 and 4.2, the Arzelà–Ascoli theorem can be applied locally to any subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ . Repeatedly using the Arzelà–Ascoli theorem and a diagonal argument, we deduce that there exists a subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ which converges pointwise to a continuous limit function on the interval $(0,1)$ . To prove that the full sequence converges pointwise and to identify the limit function, we proceed with the moment analysis initiated in Section 3. Applying Lemma 3.2, we see that, for $m\in{\mathbb{N}}_0$ ,

(4.3) \begin{equation} N\sum _{k=N+1}^\infty \frac{e_{k,2m}}{k^2\pi ^2} =N\sum _{k=N+1}^\infty \frac{1}{k^2\pi ^2} \left (\frac{1}{2m+1}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}\right ), \end{equation}
(4.4) \begin{equation} N\sum _{k=N+1}^\infty \frac{e_{k,2m+1}}{k^2\pi ^2} =N\sum _{k=N+1}^\infty \frac{1}{k^2\pi ^2} \left (\frac{1}{2m+2}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m+1)!}{(2(m-n)+2)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}\right ). \end{equation}

The bound (4.1) together with

\begin{equation*} \lim _{M\to \infty }N\sum _{k=N+1}^M\frac {1}{k^2} \geq \lim _{M\to \infty }N\sum _{k=N+1}^M\left (\frac {1}{k}-\frac {1}{k+1}\right ) =\lim _{M\to \infty }\left (\frac {N}{N+1}-\frac {N}{M+1}\right ) =\frac {N}{N+1} \end{equation*}

implies that

(4.5) \begin{equation} \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac{1}{k^2}=1. \end{equation}

For $n\in{\mathbb{N}}$ , we further have

\begin{equation*} 0\leq N\sum _{k=N+1}^\infty \frac {1}{k^{2n+2}} \leq \frac {N}{(N+1)^2}\sum _{k=N+1}^\infty \frac {1}{k^{2n}} \leq \frac {1}{N}\sum _{k=1}^\infty \frac {1}{k^{2n}}, \end{equation*}

and since $\sum _{k=1}^\infty k^{-2n}$ converges, this yields

\begin{equation*} \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac {1}{k^{2n+2}}=0 \quad \text {for }n\in {\mathbb {N}}. \end{equation*}

From (4.2) as well as (4.3) and (4.4), it follows that, for all $n\in{\mathbb{N}}_0$ ,

\begin{equation*} \lim _{N\to \infty }\int _0^1 N C_1^N(t,t) t^n{\mathrm {d}} t= \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac {e_{k,n}}{k^2\pi ^2}=\frac {1}{(n+1)\pi ^2}. \end{equation*}

This shows that, for all $n\in{\mathbb{N}}_0$ ,

\begin{equation*} \lim _{N\to \infty }\int _0^1 N C_1^N(t,t) t^n{\mathrm {d}} t= \int _0^1\frac {1}{\pi ^2}t^n{\mathrm {d}} t. \end{equation*}

If the sequence $\{N C_1^N\}_{N\in{\mathbb{N}}}$ failed to converge pointwise, we could use the Arzelà–Ascoli theorem and a diagonal argument to construct a second subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ converging pointwise but to a different continuous limit function on $(0,1)$ compared to the first subsequence. Since this contradicts the convergence of moments, the claimed result follows.
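Proposition 4.4 can be illustrated numerically at interior points; a sketch (the truncation level $N=10^6$ and the tolerance are arbitrary choices of ours):

```python
import numpy as np

def NC1_diag(N, t):
    # N * C_1^N(t, t) = N * (t - t^2 - sum_{k=1}^N 2 sin(k pi t)^2 / (k^2 pi^2))
    k = np.arange(1, N + 1, dtype=np.float64)
    partial = np.sum(2 * np.sin(k * np.pi * t) ** 2 / (k ** 2 * np.pi ** 2))
    return N * (t - t ** 2 - partial)

# the limit 1 / pi^2 is approached at interior points of (0, 1)
for t in (0.25, 0.5, 0.7):
    assert abs(NC1_diag(10 ** 6, t) - 1 / np.pi ** 2) < 1e-3
```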

We included the on-diagonal convergence in Theorem 1.2 as a separate statement to demonstrate that Corollary 1.3 is a consequence of Proposition 4.4, which is then used to prove the off-diagonal convergence in Theorem 1.2.

Proof of Corollary 1.3. Using the identity that, for $k\in{\mathbb{N}}$ ,

(4.6) \begin{equation} \cos\!(2k\pi t)=1-2\left (\sin\!(k\pi t)\right )^2, \end{equation}

we obtain

\begin{equation*} \sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2} =\sum _{k=N+1}^\infty \frac {1}{k^2\pi ^2}- \sum _{k=N+1}^\infty \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation*}

From (4.5) and Proposition 4.4, it follows that, for all $t\in (0,1)$ ,

\begin{equation*} \lim _{N\to \infty } N\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2} =\frac {1}{\pi ^2}-\frac {1}{\pi ^2}=0, \end{equation*}

as claimed.

Proof of Theorem 1.2. If $s\in \{0,1\}$ or $t\in \{0,1\}$ , the result follows immediately from $\sin\!(k\pi )=0$ for all $k\in{\mathbb{N}}_0$ , and if $s=t$ for $t\in (0,1)$ , the claimed convergence is given by Proposition 4.4. Therefore, it remains to consider the off-diagonal case, and we may assume that $s,t\in (0,1)$ are such that $s\lt t$ . Due to the representation (2.1) and the identity

\begin{equation*} 2\sin\!(k\pi s)\sin\!(k\pi t)=\cos\!(k\pi (t-s))-\cos\!(k\pi (t+s)), \end{equation*}

we have

\begin{align*} \min\!(s,t)-st-\sum _{k=1}^N\frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2} &=\sum _{k=N+1}^\infty \frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\\[5pt] &=\sum _{k=N+1}^\infty \frac{\cos\!(k\pi (t-s))-\cos\!(k\pi (t+s))}{k^2\pi ^2}. \end{align*}

Since $0\lt t-s\lt t+s\lt 2$ for $s,t\in (0,1)$ with $s\lt t$ , the convergence away from the diagonal is a consequence of Corollary 1.3.

Note that Theorem 1.2 states, for $s,t\in [0,1]$ ,

(4.7) \begin{equation} \lim _{N\to \infty } N C_1^N(s,t)= \begin{cases} \dfrac{1}{\pi ^2} & \text{if } s=t\text{ and } t\in (0,1)\\[10pt] 0 & \text{otherwise} \end{cases}, \end{equation}

which is the key ingredient for obtaining the characterisation of the limit fluctuations for the Karhunen–Loève expansion given in Theorem 1.1. We provide the full proof of Theorem 1.1 below after having determined the limit of $2N C_2^N$ as $N\to \infty$ .

4.2 Fluctuations for the Fourier series expansion

Instead of setting up another moment analysis to study the pointwise limit of $2N C_2^N$ as $N\to \infty$ , we simplify the expression for $C_2^N$ from Lemma 2.3 and deduce the desired pointwise limit from Corollary 1.3.

Using the standard Fourier basis for $L^2([0,1])$ , the polarised Parseval identity and the trigonometric identity (2.10), we can write, for $s,t\in [0,1]$ ,

\begin{align*} \min\!(s,t)&=\int _0^1{\mathbb {1}}_{[0,s]}(r){\mathbb {1}}_{[0,t]}(r){\mathrm{d}} r\\[5pt] &=st+\sum _{k=1}^\infty 2\int _0^s \cos\!(2k\pi r){\mathrm{d}} r\int _0^t \cos\!(2k\pi r){\mathrm{d}} r +\sum _{k=1}^\infty 2\int _0^s \sin\!(2k\pi r){\mathrm{d}} r\int _0^t \sin\!(2k\pi r){\mathrm{d}} r\\[5pt] &=st-\sum _{k=1}^\infty \frac{\cos\!(2k\pi s)}{2k^2\pi ^2} -\sum _{k=1}^\infty \frac{\cos\!(2k\pi t)}{2k^2\pi ^2} +\sum _{k=1}^\infty \frac{\cos\!(2k\pi (t-s))}{2k^2\pi ^2}+\sum _{k=1}^\infty \frac{1}{2k^2\pi ^2}. \end{align*}

Applying the identity (4.6) as well as the representation (2.1) and using the value for $\zeta (2)$ derived in Section 3, we have

\begin{equation*} \sum _{k=1}^\infty \frac {\cos\!(2k\pi t)}{2k^2\pi ^2}= \sum _{k=1}^\infty \frac {1}{2k^2\pi ^2}- \sum _{k=1}^\infty \frac {\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2} =\frac {1}{12}+\frac {t^2-t}{2}. \end{equation*}

Once again exploiting the value for $\zeta (2)$ , we obtain

\begin{equation*} \min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12} =\sum _{k=1}^\infty \frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}

Using the expression for $C_2^N$ from Lemma 2.3, it follows that, for $s,t\in [0,1]$ ,

\begin{equation*} C_2^N(s,t)=\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}

This implies that if $t-s$ is an integer then, as a result of the limit (4.5),

\begin{equation*} \lim _{N\to \infty } 2NC_2^N(s,t)=\frac {1}{\pi ^2}, \end{equation*}

whereas if $t-s$ is not an integer then, by Corollary 1.3,

\begin{equation*} \lim _{N\to \infty } 2NC_2^N(s,t)=0. \end{equation*}

This can be summarised as, for $s,t\in [0,1]$ ,

(4.8) \begin{equation} \lim _{N\to \infty } 2NC_2^N(s,t)= \begin{cases} \dfrac{1}{\pi ^2} & \text{if } s=t\text{ or } s,t\in \{0,1\}\\[10pt] 0 & \text{otherwise} \end{cases}. \end{equation}
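Both the tail representation of $C_2^N$ and the on-diagonal limit in (4.8) lend themselves to a numerical check; a sketch (helper names and truncation levels are our own choices):

```python
import numpy as np

def C2(N, s, t):
    # closed form for C_2^N(s, t) from Lemma 2.3
    k = np.arange(1, N + 1, dtype=np.float64)
    return (min(s, t) - s * t + (s ** 2 - s) / 2 + (t ** 2 - t) / 2 + 1.0 / 12
            - np.sum(np.cos(2 * k * np.pi * (t - s)) / (2 * k ** 2 * np.pi ** 2)))

def C2_tail(N, s, t, K=10 ** 6):
    # truncated tail representation C_2^N(s, t) = sum_{k > N} cos(2 k pi (t-s)) / (2 k^2 pi^2)
    k = np.arange(N + 1, K + 1, dtype=np.float64)
    return np.sum(np.cos(2 * k * np.pi * (t - s)) / (2 * k ** 2 * np.pi ** 2))

s, t = 0.2, 0.55
for N in (10, 50):
    assert abs(C2(N, s, t) - C2_tail(N, s, t)) < 1e-6

# on the diagonal, 2 N C_2^N(t, t) approaches 1 / pi^2
assert abs(2 * 10 ** 5 * C2(10 ** 5, 0.3, 0.3) - 1 / np.pi ** 2) < 1e-3
```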

We finally prove Theorem 1.1 by considering characteristic functions.

Proof of Theorem 1.1. According to Lemma 2.2 as well as Lemma 2.3, the fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ are zero-mean Gaussian processes with covariance functions $N C_1^N$ and $2N C_2^N$ , respectively.

By the pointwise convergences (4.7) and (4.8) of the covariance functions in the limit $N\to \infty$ , for any $n\in{\mathbb{N}}$ and any $t_1,\ldots,t_n\in [0,1]$ , the characteristic functions of the Gaussian random vectors $(F_{t_1}^{N,i},\ldots,F_{t_n}^{N,i})$ , for $i\in \{1,2\}$ , converge pointwise as $N\to \infty$ to the characteristic function of the Gaussian random vector $(F_{t_1}^{i},\ldots,F_{t_n}^{i})$ . Therefore, the claimed convergences in finite dimensional distributions are consequences of Lévy’s continuity theorem.

5. Approximations of Brownian Lévy area

In this section, we consider approximations of second iterated integrals of Brownian motion, which is a classical problem in the numerical analysis of stochastic differential equations (SDEs), see [Reference Kloeden and Platen22]. Due to their presence within stochastic Taylor expansions, increments and second iterated integrals of multidimensional Brownian motion are required by high order strong methods for general SDEs, such as stochastic Taylor [Reference Kloeden and Platen22] and Runge–Kutta [Reference Rößler33] methods. Currently, the only methodology for exactly generating the increment and second iterated integral, or equivalently the Lévy area given by Definition 1.4, of a $d$ -dimensional Brownian motion is restricted to the case $d = 2$ . This algorithm for the exact generation of Brownian increments and Lévy area is detailed in [Reference Gaines and Lyons13]. The approach adapts Marsaglia’s “rectangle-wedge-tail” algorithm to the joint density function of $ (W_1^{(1)}, W_1^{(2)}, A_{0,1}^{(1,2)} )$ , which is expressible as an integral, but can only be evaluated numerically. Due to the subtle relationships between different entries in $A_{0,1}$ , the approach has not been extended to $d\gt 2$ .

Obtaining good approximations of Brownian Lévy area in an $L^{2}(\mathbb{P})$ sense is known to be difficult. For example, it was shown in [Reference Dickinson8] that any approximation of Lévy area which is measurable with respect to $N$ Gaussian random variables, obtained from linear functionals of the Brownian path, cannot achieve strong convergence faster than $O(N^{-\frac{1}{2}})$ . In particular, this result extends the classical theorem of Clark and Cameron [Reference Clark and Cameron4] which establishes a best convergence rate of $O(N^{-\frac{1}{2}})$ for approximations of Lévy area based on only the Brownian increments $\{W_{(n+1)h} - W_{nh}\}_{0\leq n\leq N - 1}$ . Therefore, approximations have been developed which fall outside of this paradigm, see [Reference Davie5, Reference Foster11, Reference Mrongowius and Rößler32, Reference Wiktorsson35]. In the analysis of these methodologies, the Lévy area of Brownian motion and its approximation are probabilistically coupled in such a way that $L^{2}(\mathbb{P})$ convergence rates of $O(N^{-1})$ can be established.

We are interested in the approximations of Brownian Lévy area that can be obtained directly from the Fourier series expansion (1.3) and the polynomial expansion (1.5) of the Brownian bridge. For the remainder of the section, the Brownian motion $(W_t)_{t\in [0,1]}$ is assumed to be $d$ -dimensional and $(B_t)_{t\in [0,1]}$ is its associated Brownian bridge.

We first recall the standard Fourier approach to the strong approximation of Brownian Lévy area.

Theorem 5.1 (Approximation of Brownian Lévy area via Fourier coefficients, see [[Reference Kloeden and Platen22], p. 205] and [[Reference Milstein31], p. 99]). For $n\in{\mathbb{N}}$ , we define a random antisymmetric $d\times d$ matrix $\widehat{A}_{n}$ by, for $i,j\in \{1,\ldots,d\}$ ,

\begin{equation*} \widehat {A}_{n}^{ (i,j)} \;:\!=\; \frac {1}{2}\left (a_0^{(i)}W_1^{(j)} - W_1^{(i)}a_0^{(j)}\right ) + \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}b_k^{(j)} - b_k^{(i)}a_{k}^{(j)}\right ), \end{equation*}

where the normal random vectors $\{a_k\}_{k\in{\mathbb{N}}_0}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ are the coefficients from the Brownian bridge expansion (1.3), that is, the coordinates of each random vector are independent and defined according to (1.4). Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & = \frac{1}{2\pi ^2}\sum _{k = n}^{\infty }\frac{1}{k^2}. \end{align*}

Remark 5.2. Using the covariance structure given by (2.5), (2.6), (2.7) and the independence of the components of a Brownian bridge, it immediately follows that the coefficients $\{a_k\}_{k\in{\mathbb{N}}_0}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ are jointly normal with $a_0\sim \mathcal{N}\big (0,\frac{1}{3}I_d\big )$ , $a_k, b_k\sim \mathcal{N}\big (0,\frac{1}{2k^2\pi ^2}I_d\big )$ , $\operatorname{cov}(a_0, a_k) = -\frac{1}{k^2\pi ^2}I_d$ and $\operatorname{cov}(a_l, b_k) = 0$ for $k\in{\mathbb{N}}$ and $l\in{\mathbb{N}}_0$ .
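As a rough Monte Carlo sanity check of this covariance structure, restricted to one coordinate (a sketch; sample size, grid resolution and the resulting tolerances are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 8_000, 400
dt = 1.0 / n_steps

# simulate one-dimensional Brownian paths and the associated bridges B_t = W_t - t W_1
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
t = np.arange(1, n_steps + 1) * dt
B = W - t * W[:, -1:]

# Fourier coefficients of the bridge (left-point Riemann sums):
# a_0 = 2 int B_r dr, a_1 = 2 int B_r cos(2 pi r) dr, b_1 = 2 int B_r sin(2 pi r) dr
a0 = 2 * B.sum(axis=1) * dt
a1 = 2 * (B * np.cos(2 * np.pi * t)).sum(axis=1) * dt
b1 = 2 * (B * np.sin(2 * np.pi * t)).sum(axis=1) * dt

assert abs(np.var(a0) - 1 / 3) < 0.03                   # Var(a_0) = 1/3
assert abs(np.var(a1) - 1 / (2 * np.pi ** 2)) < 0.01    # Var(a_1) = 1/(2 pi^2)
assert abs(np.var(b1) - 1 / (2 * np.pi ** 2)) < 0.01    # Var(b_1) = 1/(2 pi^2)
assert abs(np.mean(a0 * a1) + 1 / np.pi ** 2) < 0.015   # cov(a_0, a_1) = -1/pi^2
assert abs(np.mean(a1 * b1)) < 0.01                     # cov(a_1, b_1) = 0
```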

In practice, the above approximation may involve generating the $N$ independent random vectors $\{a_k\}_{1\leq k\leq N}$ followed by the coefficient $a_0$ , which will not be independent, but can be expressed as a linear combination of $\{a_k\}_{1\leq k\leq N}$ along with an additional independent normal random vector. Without this additional normal random vector, we obtain the following discretisation of Lévy area.
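This sampling procedure can be sketched in NumPy as follows. The sketch is our own illustration (with $d = 2$ and our own function names), not part of the original analysis; it uses the regression coefficient $\operatorname{cov}(a_0, a_k)/\operatorname{var}(a_k^{(i)}) = -2$ from Remark 5.2 to express $a_0$ as a linear combination of $\{a_k\}_{1\leq k\leq n-1}$ plus an independent normal residual.

```python
import numpy as np

def fourier_levy_area(n, d=2, rng=None):
    """Sample the Fourier approximation of Levy area from Theorem 5.1.

    Draws the independent coefficients {a_k}, {b_k} for k = 1, ..., n-1 with
    a_k, b_k ~ N(0, 1/(2 k^2 pi^2) I_d), then builds a_0 as a linear
    combination of the a_k plus an extra independent normal, matching
    Var(a_0^(i)) = 1/3 and cov(a_0, a_k) = -1/(k^2 pi^2) I_d.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n)                            # k = 1, ..., n-1
    sd = 1.0 / (np.sqrt(2.0) * k * np.pi)          # std dev of a_k, b_k
    a = rng.standard_normal((n - 1, d)) * sd[:, None]
    b = rng.standard_normal((n - 1, d)) * sd[:, None]
    # Regression of a_0 on a_k has coefficient cov/var = -2, so
    # a_0 = -2 * sum_k a_k + residual, with the residual variance below.
    res_var = 1.0 / 3.0 - np.sum(2.0 / (k * np.pi) ** 2)
    a0 = -2.0 * a.sum(axis=0) + np.sqrt(max(res_var, 0.0)) * rng.standard_normal(d)
    W1 = rng.standard_normal(d)                    # Brownian increment over [0, 1]
    # Assemble the antisymmetric matrix from Theorem 5.1.
    A = 0.5 * (np.outer(a0, W1) - np.outer(W1, a0))
    A += np.pi * np.einsum('k,ki,kj->ij', k, a, b)
    A -= np.pi * np.einsum('k,ki,kj->ij', k, b, a)
    return A

A = fourier_levy_area(64, rng=np.random.default_rng(0))
```

Note that, by the computation in Remark 5.2, the residual variance vanishes as $n\to \infty$ since $\sum _{k=1}^{\infty } \frac{2}{k^2\pi ^2} = \frac{1}{3}$.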

Theorem 5.3 (Kloeden–Platen–Wright approximation of Brownian Lévy area, see [23, 30, 35]). For $n\in{\mathbb{N}}$ , we define a random antisymmetric $d\times d$ matrix $\widetilde{A}_{n}$ by, for $i,j\in \{1,\ldots,d\}$ ,

\begin{equation*} \widetilde {A}_{n}^{ (i,j)} \;:\!=\; \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}\left (b_k^{(j)} - \frac {1}{k\pi }W_1^{(j)}\right ) - \left (b_k^{(i)} - \frac {1}{k\pi }W_1^{(i)}\right )a_{k}^{(j)}\right ), \end{equation*}

where the sequences $\{a_k\}_{k\in{\mathbb{N}}}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ of independent normal random vectors are the same as before. Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & = \frac{3}{2\pi ^2}\sum _{k = n}^{\infty }\frac{1}{k^2}. \end{align*}

Proof. As for Theorem 5.1, the result follows by direct calculation. The constant is larger because, for $i\in \{1,\ldots,d\}$ and $k\in{\mathbb{N}}$ ,

\begin{equation*} {\mathbb {E}}\bigg [\Big (b_k^{(i)} - \frac {1}{k\pi }W_1^{(i)}\Big )^2 \bigg ] = \frac {3}{2k^2\pi ^2} = 3\,{\mathbb {E}}\Big [\big (b_k^{(i)}\big )^2 \Big ], \end{equation*}

which yields the required result.
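The variance identity in the proof is easy to confirm numerically. The following Monte Carlo check is our own illustration, using that the bridge coefficient $b_k$ is independent of the increment $W_1$, so the two variances simply add.

```python
import numpy as np

# Monte Carlo check of E[(b_k - W_1/(k*pi))^2] = 3/(2 k^2 pi^2),
# i.e. three times Var(b_k), for a single coordinate and k = 3.
rng = np.random.default_rng(42)
k, m = 3, 10**6
b_k = rng.standard_normal(m) / (np.sqrt(2.0) * k * np.pi)  # b_k ~ N(0, 1/(2 k^2 pi^2))
W_1 = rng.standard_normal(m)                               # W_1 ~ N(0, 1), independent
empirical = np.mean((b_k - W_1 / (k * np.pi)) ** 2)
exact = 3.0 / (2.0 * k**2 * np.pi**2)
```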

Finally, we give the approximation of Lévy area corresponding to the polynomial expansion (1.5). Although this series expansion of Brownian Lévy area was first proposed in [24], a straightforward derivation based on the polynomial expansion (1.5) was only established much later in [25]. However, in [24, 25], the optimal bound for the mean squared error of the approximation is not identified. We present a derivation similar to [25], but with a simple formula for the mean squared error.

Theorem 5.4 (Polynomial approximation of Brownian Lévy area, see [24, p. 47] and [25]). For $n\in{\mathbb{N}}_0$ , we define a random antisymmetric $d\times d$ matrix $\overline{A}_{n}$ by, for $n\in{\mathbb{N}}$ and $i,j\in \{1,\ldots,d\}$ ,

\begin{equation*} \overline {A}_{n}^{ (i,j)} \;:\!=\; \frac {1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac {1}{2}\sum _{k=1}^{n-1}\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ), \end{equation*}

where the normal random vectors $\{c_k\}_{k\in{\mathbb{N}}}$ are the coefficients from the polynomial expansion (1.5), that is, the coordinates are independent and defined according to (1.6), and we set

\begin{equation*} \overline {A}_{0}^{ (i,j)} \;:\!=\; 0. \end{equation*}

Then, for $n\in{\mathbb{N}}_0$ and for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have

\begin{equation*} {\mathbb {E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline {A}_{n}^{ (i,j)}\Big )^2 \bigg ] = \frac {1}{8n+4}. \end{equation*}

Remark 5.5. By applying Lemma 2.1, the orthogonality of shifted Legendre polynomials and the independence of the components of a Brownian bridge, we see that the coefficients $\{c_k\}_{k\in{\mathbb{N}}}$ are independent and distributed as $c_k\sim \mathcal{N}\big (0,\frac{1}{2k + 1}I_d\big )$ for $k\in{\mathbb{N}}$ .
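Since all the required coefficients are independent, sampling $\overline{A}_{n}$ is particularly simple in practice. The sketch below is our own illustration (with $d = 2$ and our own function names), not part of the original analysis.

```python
import numpy as np

def polynomial_levy_area(n, d=2, rng=None):
    """Sample the polynomial approximation of Levy area from Theorem 5.4.

    Uses that the coefficients c_k ~ N(0, 1/(2k+1) I_d) are independent of
    each other and of the Brownian increment W_1 ~ N(0, I_d).
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n + 1)                           # we need c_1, ..., c_n
    c = rng.standard_normal((n, d)) / np.sqrt(2 * k + 1)[:, None]
    W1 = rng.standard_normal(d)
    # First term of Theorem 5.4, with c[0] playing the role of c_1.
    A = 0.5 * (np.outer(W1, c[0]) - np.outer(c[0], W1))
    # Sum over consecutive pairs (c_k, c_{k+1}) for k = 1, ..., n-1.
    for i in range(n - 1):
        A += 0.5 * (np.outer(c[i], c[i + 1]) - np.outer(c[i + 1], c[i]))
    return A

A = polynomial_levy_area(8, rng=np.random.default_rng(1))
```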

Proof. It follows from the polynomial expansion (1.5) that, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,

(5.1) \begin{equation} \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)} = \int _0^1 \left (\sum _{k=1}^\infty (2k+1)\, c_k^{(i)} \int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}}\left (\sum _{l=1}^\infty (2l+1)\, c_l^{(j)} \int _0^t Q_l(r){\mathrm{d}} r \right ), \end{equation}

where the series converge in $L^2(\mathbb{P})$ . To simplify (5.1), we use the identities in (2.11) for shifted Legendre polynomials as well as the orthogonality of shifted Legendre polynomials to obtain that, for $k, l\in{\mathbb{N}}$ ,

\begin{align*} \int _0^1 \left (\int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left (\int _0^t Q_l(r){\mathrm{d}} r\right ) & = \int _0^1 Q_l(t)\int _0^t Q_k(r){\mathrm{d}} r{\mathrm{d}} t\\[5pt] & = \dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 Q_l(t)\left (Q_{k+1}(t) - Q_{k-1}(t)\right ){\mathrm{d}} t}\\[3pt] & = \begin{cases}\phantom{-}\dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 \left (Q_{k+1}(t)\right )^2{\mathrm{d}} t} & \text{if }l = k + 1\\[9pt] -\dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 \left (Q_{k-1}(t)\right )^2{\mathrm{d}} t} & \text{if }l = k - 1\\[8pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align*}

Evaluating the above integrals gives, for $k, l\in{\mathbb{N}}$ ,

(5.2) \begin{align} \int _0^1 \left (\int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left (\int _0^t Q_l(r){\mathrm{d}} r\right ) & = \begin{cases} \phantom{-}\dfrac{1}{2(2k+1)(2k + 3)} & \text{if }l = k + 1\\[12pt] -\dfrac{1}{2(2k+1)(2k - 1)} & \text{if }l = k - 1\\[12pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align}

In particular, for $k,l\in{\mathbb{N}}$ , this implies that

\begin{align*} \int _0^1 \left ((2k+1) c_k^{(i)} \int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left ((2l+1) c_l^{(j)} \int _0^t Q_l(r){\mathrm{d}} r\right ) & = \begin{cases} \phantom{-}\dfrac{1}{2}c_k^{(i)}c_{k+1}^{(j)} & \text{if }l = k + 1\\[13pt] -\dfrac{1}{2}c_k^{(i)}c_{k-1}^{(j)} & \text{if }l = k - 1\\[13pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align*}

Therefore, by the bounded convergence theorem in $L^2(\mathbb{P})$ , we can simplify the expansion (5.1) to

(5.3) \begin{align} \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)} & = \frac{1}{2}\sum _{k=1}^{\infty }\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ), \end{align}

where, just as before, the series converges in $L^2(\mathbb{P})$ . Since $W_t = t W_1 + B_t$ for $t\in [0,1]$ , we have, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,

\begin{align*} \int _0^1 W_t^{(i)}{\mathrm{d}} W_t^{(j)} & = \int _0^1 \big (t W_1^{(i)}\big ){\mathrm{d}} \big (t W_1^{(j)}\big ) + \int _0^1 B_t^{(i)}{\mathrm{d}} \big (t W_1^{(j)}\big )+ \int _0^1 \big (t W_1^{(i)}\big ){\mathrm{d}} B_t^{(j)} + \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)}\\[3pt] & = \frac{1}{2}W_1^{(i)}W_1^{(j)} - W_1^{(j)}\int _0^1 t{\mathrm{d}} B_t^{(i)} + W_1^{(i)}\int _0^1 t{\mathrm{d}} B_t^{(j)} + \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)}, \end{align*}

where the second line follows by integration by parts. As

\begin{equation*}\int _0^1 W_t^{(i)}{\mathrm {d}} W_t^{(j)} = \frac {1}{2}W_1^{(i)}W_1^{(j)} + A_{0,1}^{(i,j)}\end{equation*}

and $Q_1(t) = 2t - 1$ , the above and (5.3) imply that, for $i,j\in \{1,\ldots,d\}$ ,

\begin{align*} A_{0,1}^{(i,j)} = \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac{1}{2}\sum _{k=1}^{\infty }\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ). \end{align*}

By the independence of the normal random vectors in the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ , it is straightforward to compute the mean squared error in approximating $A_{0,1}$ and we obtain, for $n\in{\mathbb{N}}$ and for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & ={\mathbb{E}}\left [\left (\frac{1}{2}\sum _{k=n}^{\infty } \left (c_{k}^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_{k}^{(j)}\right )\right )^2 \right ]\\[5pt] & = \frac{1}{4}\sum _{k=n}^{\infty } \frac{2}{(2k+1)(2k+3)}\\[5pt] & = \frac{1}{4}\sum _{k=n}^{\infty } \left (\frac{1}{2k+1} - \frac{1}{2k+3}\right )\\[5pt] & = \frac{1}{8n+4}, \end{align*}

by Remark 5.5. Similarly, as the normal random vector $W_1$ and the ones in the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ are independent, we have

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{0}^{(i,j)}\Big )^2 \bigg ] & ={\mathbb{E}}\left [\left (\frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right )\right )^2 \right ] +{\mathbb{E}}\left [\left (\frac{1}{2}\sum _{k=1}^{\infty } \left (c_{k}^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_{k}^{(j)}\right )\right )^2 \right ]\\[5pt] & = \frac{1}{6} + \frac{1}{12} = \frac{1}{4}, \end{align*}

as claimed.
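The telescoping computation at the end of the proof can also be checked numerically. The short script below is our own illustration: it truncates the tail sum $\frac{1}{4}\sum _{k=n}^{\infty } \frac{2}{(2k+1)(2k+3)}$ after a large number of terms and compares it with the closed form $\frac{1}{8n+4}$.

```python
# Numerical check of the telescoping identity used in the proof of Theorem 5.4:
# (1/4) * sum_{k >= n} 2/((2k+1)(2k+3)) = 1/(8n+4).
def levy_tail_mse(n, terms=10**6):
    """Truncation of the tail sum giving the mean squared error of A_bar_n."""
    return 0.25 * sum(2.0 / ((2 * k + 1) * (2 * k + 3)) for k in range(n, n + terms))

ns = (1, 5, 20)
approx = [levy_tail_mse(n) for n in ns]
exact = [1.0 / (8 * n + 4) for n in ns]
```

The truncation error after $10^6$ terms is itself of order $10^{-7}$ by the same telescoping argument.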

Given that we have now considered three different strong approximations of Brownian Lévy area, it is reasonable to compare their respective rates of convergence. Combining the above theorems, we obtain the following result.

Corollary 5.6 (Asymptotic convergence rates of Lévy area approximations). For $n\in{\mathbb{N}}$ , we set $N = 2n$ so that the number of Gaussian random vectors required to define the Lévy area approximations $\widehat{A}_{n}, \widetilde{A}_{n}$ and $\overline{A}_{2n}$ is $N$ or $N-1$ , respectively. Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ and as $N\to \infty$ , we have

\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{3}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{2n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{8}\bigg (\frac{1}{N}\bigg ). \end{align*}

In particular, the polynomial approximation of Brownian Lévy area is more accurate than the Kloeden–Platen–Wright approximation, both of which use only independent Gaussian vectors.

Remark 5.7. It was shown in [8] that $\frac{1}{\pi ^2}\hspace{-0.5mm}\left (\frac{1}{N}\right )$ is the optimal asymptotic rate of mean squared convergence for Lévy area approximations that are measurable with respect to $N$ Gaussian random variables, obtained from linear functionals of the Brownian path.

As one would expect, all the Lévy area approximations converge in $L^{2}(\mathbb{P})$ with a rate of $O(N^{-\frac{1}{2}})$ and thus the main difference between their respective accuracies is in the leading error constant. More concretely, for sufficiently large $N$ , the approximation based on the Fourier expansion of the Brownian bridge is roughly 11% more accurate in $L^{2}(\mathbb{P})$ than that of the polynomial approximation. On the other hand, the polynomial approximation is easier to implement in practice as all of the required coefficients are independent. Since it has the largest asymptotic error constant, the Kloeden–Platen–Wright approach gives the least accurate approximation for Brownian Lévy area.
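These asymptotic constants can be compared directly against the exact error formulas from Theorems 5.1, 5.3 and 5.4. The script below is our own illustration, using $\zeta (2) = \frac{\pi ^2}{6}$ to evaluate the tail sums.

```python
import numpy as np

# Exact mean squared errors from Theorems 5.1, 5.3 and 5.4, scaled by N = 2n.
# As N grows, these should approach 1/pi^2, 3/pi^2 and 1/8 (Corollary 5.6).
def zeta2_tail(n):
    """sum_{k >= n} 1/k^2, computed via zeta(2) = pi^2/6 minus a partial sum."""
    return np.pi**2 / 6 - sum(1.0 / k**2 for k in range(1, n))

N = 1000
n = N // 2
mse_fourier = zeta2_tail(n) / (2 * np.pi**2)   # Theorem 5.1
mse_kpw = 3 * zeta2_tail(n) / (2 * np.pi**2)   # Theorem 5.3
mse_poly = 1.0 / (8 * (2 * n) + 4)             # Theorem 5.4 with 2n coefficients
scaled = (N * mse_fourier, N * mse_kpw, N * mse_poly)
```

The ordering of the three scaled errors reproduces the comparison above: the Fourier approximation is the most accurate, followed by the polynomial approximation and then the Kloeden–Platen–Wright approximation.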

We observe that the leading error constants for the Lévy area approximations resulting from the Fourier series and the polynomial expansion coincide with the average $L^2(\mathbb{P})$ error of their respective fluctuation processes, that is, applying Fubini’s theorem followed by the limit theorems for the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ and $(F_t^{N,3})_{t\in [0,1]}$ defined by (1.8) and (2.12), respectively, gives

\begin{align*} \lim _{N\to \infty }{\mathbb{E}}\left [\int _0^1 \left (F_t^{N,2}\right )^2{\mathrm{d}} t \right ] &=\int _0^1\frac{1}{\pi ^2}{\mathrm{d}} t=\frac{1}{\pi ^2},\\[3pt] \lim _{N\to \infty }{\mathbb{E}}\left [\int _0^1 \left (F_t^{N,3}\right )^2{\mathrm{d}} t \right ] &=\int _0^1 \frac{1}{\pi }\sqrt{t(1-t)}{\mathrm{d}} t=\frac{1}{8}. \end{align*}
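The second integral is the area under a semicircle of radius $\frac{1}{2}$ scaled by $\frac{1}{\pi }$, which gives $\frac{1}{\pi }\cdot \frac{\pi }{8} = \frac{1}{8}$. The quadrature check below is our own illustration.

```python
import numpy as np

# Trapezoidal quadrature of (1/pi) * sqrt(t(1-t)) over [0, 1]; the exact
# value is 1/8, the area of a semicircle of radius 1/2 divided by pi.
t = np.linspace(0.0, 1.0, 200001)
f = np.sqrt(t * (1.0 - t)) / np.pi
semicircle = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
```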

To demonstrate how this correspondence arises, we close with some heuristics. For $N\in{\mathbb{N}}$ , we consider an approximation of the Brownian bridge which uses $N$ random vectors, and we denote the corresponding approximation of Brownian motion $(W_t)_{t\in [0,1]}$ by $(S_t^N)_{t\in [0,1]}$ , where the difference between Brownian motion and its associated Brownian bridge is the first term in the approximation. In the Fourier and polynomial approaches, the error in approximating Brownian Lévy area is then essentially given by

\begin{equation*} \int _0^1 W_t^{(i)}{\mathrm {d}} W_t^{(j)}-\int _0^1 S_t^{N,(i)}{\mathrm {d}} S_t^{N,(j)} =\int _0^1 \left (W_t^{(i)}-S_t^{N,(i)}\right ){\mathrm {d}} W_t^{(j)} +\int _0^1 S_t^{N,(i)}{\mathrm {d}} \left (W_t^{(j)}-S_t^{N,(j)}\right ). \end{equation*}

If one can argue that

\begin{equation*} \int _0^1 S_t^{N,(i)}{\mathrm {d}} \left (W_t^{(j)}-S_t^{N,(j)}\right )=O\left (\frac {1}{N}\right ), \end{equation*}

which, for instance, for the polynomial approximation follows directly from (5.2) and Remark 5.5, then in terms of the fluctuation processes $(F_t^N)_{t\in [0,1]}$ defined by

\begin{equation*} F_t^N=\sqrt {N}\left (W_t-S_t^{N}\right ), \end{equation*}

the error of the Lévy area approximation can be expressed as

\begin{equation*} \frac {1}{\sqrt {N}}\int _0^1 F_t^{N,(i)}{\mathrm {d}} W_t^{(j)}+O\left (\frac {1}{N}\right ). \end{equation*}

Thus, by Itô’s isometry and Fubini’s theorem, the leading error constant in the mean squared error is indeed given by

\begin{equation*} \int _0^1\lim _{N\to \infty }{\mathbb {E}}\left [\left (F_t^{N,(i)}\right )^2 \right ]{\mathrm {d}} t. \end{equation*}

This connection could be interpreted as an asymptotic Itô isometry for Lévy area approximations.

A. Summarising tables

Table A1 Table summarising the Brownian bridge expansions considered in this paper

Table A2 Table summarising the Lévy area expansions considered in this paper

Footnotes

James Foster was supported by the Department of Mathematical Sciences at the University of Bath and the DataSig programme under the EPSRC grant EP/S026347/1.

References

[1] Arfken, G. B. and Weber, H. J. (2005) Mathematical Methods for Physicists, 6th ed. Elsevier.
[2] Belomestny, D. and Nagapetyan, T. (2017) Multilevel path simulation for weak approximation schemes with application to Lévy-driven SDEs. Bernoulli 23, 927–950.
[3] Borevich, Z. I. and Shafarevich, I. R. (1966) Number Theory. Translated from the Russian by Newcomb Greenleaf. Pure and Applied Mathematics, Vol. 20. Academic Press, New York.
[4] Clark, J. M. C. and Cameron, R. J. (1980) The maximum rate of convergence of discrete approximations for stochastic differential equations. In Stochastic Differential Systems Filtering and Control. Springer.
[5] Davie, A. (2014) KMT theory applied to approximations of SDE. In Stochastic Analysis and Applications, Springer Proceedings in Mathematics and Statistics, Vol. 100. Springer, pp. 185–201.
[6] Debrabant, K., Ghasemifard, A. and Mattsson, N. C. (2019) Weak antithetic MLMC estimation of SDEs with the Milstein scheme for low-dimensional Wiener processes. Appl. Math. Lett. 91, 22–27.
[7] Debrabant, K. and Rößler, A. (2015) On the acceleration of the multi-level Monte Carlo method. J. Appl. Probab. 52, 307–322.
[8] Dickinson, A. S. (2007) Optimal approximation of the second iterated integral of Brownian motion. Stoch. Anal. Appl. 25(5), 1109–1128.
[9] Filip, S., Javeed, A. and Trefethen, L. N. (2019) Smooth random functions, random ODEs, and Gaussian processes. SIAM Rev. 61(1), 185–205.
[10] Flint, G. and Lyons, T. (2015) Pathwise approximation of SDEs by coupling piecewise abelian rough paths. Available at https://arxiv.org/abs/1505.01298
[11] Foster, J. (2020) Numerical approximations for stochastic differential equations. PhD thesis, University of Oxford.
[12] Foster, J., Lyons, T. and Oberhauser, H. (2020) An optimal polynomial approximation of Brownian motion. SIAM J. Numer. Anal. 58, 1393–1421.
[13] Gaines, J. and Lyons, T. (1994) Random generation of stochastic area integrals. SIAM J. Appl. Math. 54, 1132–1146.
[14] Gaines, J. and Lyons, T. (1997) Variable step size control for stochastic differential equations. SIAM J. Appl. Math. 57, 1455–1484.
[15] Giles, M. B. (2008) Improved multilevel Monte Carlo convergence using the Milstein scheme. In Monte Carlo and Quasi-Monte Carlo Methods 2006. Springer, pp. 343–358.
[16] Giles, M. B. (2008) Multilevel Monte Carlo path simulation. Oper. Res. 56, 607–617.
[17] Giles, M. B. and Szpruch, L. (2014) Antithetic multilevel Monte Carlo estimation for multi-dimensional SDEs without Lévy area simulation. Ann. Appl. Probab. 24, 1585–1620.
[18] Habermann, K. (2021) Asymptotic error in the eigenfunction expansion for the Green's function of a Sturm–Liouville problem. Available at https://arxiv.org/abs/2109.10887
[19] Habermann, K. (2021) A semicircle law and decorrelation phenomena for iterated Kolmogorov loops. J. London Math. Soc. 103, 558–586.
[20] Iwaniec, H. (1997) Topics in Classical Automorphic Forms. Graduate Studies in Mathematics, Vol. 17. American Mathematical Society.
[21] Kahane, J.-P. (1985) Some Random Series of Functions. Cambridge Studies in Advanced Mathematics, Vol. 5, 2nd ed. Cambridge University Press.
[22] Kloeden, P. E. and Platen, E. (1992) Numerical Solution of Stochastic Differential Equations. Applications of Mathematics, Vol. 23. Springer.
[23] Kloeden, P. E., Platen, E. and Wright, I. W. (1992) The approximation of multiple stochastic integrals. Stoch. Anal. Appl. 10, 431–441.
[24] Kuznetsov, D. F. (1997) A method of expansion and approximation of repeated stochastic Stratonovich integrals based on multiple Fourier series on full orthonormal systems [In Russian]. Electron. J. "Diff. Eq. Control Process." 1, 18–77.
[25] Kuznetsov, D. F. (2019) New simple method of expansion of iterated Ito stochastic integrals of multiplicity 2 based on expansion of the Brownian motion using Legendre polynomials and trigonometric functions. Available at https://arxiv.org/abs/1807.00409
[26] Li, X., Wu, D., Mackey, L. and Erdogdu, M. A. (2019) Stochastic Runge–Kutta accelerates Langevin Monte Carlo and beyond. Adv. Neural Inform. Process. Syst.
[27] Loève, M. (1978) Probability Theory II. Graduate Texts in Mathematics, Vol. 46, 4th ed. Springer.
[28] Mengoli, P. (1650) Novae quadraturae arithmeticae, seu de Additione fractionum. Ex Typographia Iacobi Montij.
[29] Mercer, J. (1909) Functions of positive and negative type, and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London A 209, 415–446.
[30] Milstein, G. N. (1988) Numerical Integration of Stochastic Differential Equations [In Russian]. Ural University Press.
[31] Milstein, G. N. (1994) Numerical Integration of Stochastic Differential Equations, Vol. 313. Springer Science & Business Media.
[32] Mrongowius, J. and Rößler, A. (2022) On the approximation and simulation of iterated stochastic integrals and the corresponding Lévy areas in terms of a multidimensional Brownian motion. Stoch. Anal. Appl. 40, 397–425.
[33] Rößler, A. (2010) Runge–Kutta methods for the strong approximation of solutions of stochastic differential equations. SIAM J. Numer. Anal. 48, 922–952.
[34] Trefethen, N. (2019) Brownian paths and random polynomials. Chebfun Example, June 2019. Available at https://www.chebfun.org/examples/stats/RandomPolynomials.html
[35] Wiktorsson, M. (2001) Joint characteristic function and simultaneous simulation of iterated Itô integrals for multiple independent Brownian motions. Ann. Appl. Probab. 11, 470–487.
Figure 1. Table showing basis functions and fluctuations for the Brownian bridge expansions.

Figure 2. Lévy area is the chordal area between independent Brownian motions.

Figure 3. Profiles of $t\mapsto NC_1^N(t,t)$ plotted for $N\in \{5, 25, 100\}$ along with $t\mapsto \frac{2}{\pi ^2}$.

Figure 4. Profiles of $t\mapsto N(\frac{\pi -t}{2}-\sum \limits _{k=1}^N \frac{\sin\!(kt)}{k})$ plotted for $N\in \{5, 25, 100, 1000\}$ on $[\varepsilon, 2\pi - \varepsilon ]$ with $\varepsilon = 0.1$.