
Generalised shot-noise representations of stochastic systems driven by non-Gaussian Lévy processes

Published online by Cambridge University Press:  21 March 2024

Simon Godsill*
Affiliation:
University of Cambridge
Ioannis Kontoyiannis*
Affiliation:
University of Cambridge
Marcos Tapia Costa*
Affiliation:
University of Cambridge
*Postal address: Floor 9, 16–18 Prince’s Gardens, London SW7 1NE, UK.

Abstract

We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump Lévy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance, and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving Lévy process, including the generalised hyperbolic, normal-gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure Lévy process, and to a Brownian-driven SDE in the case of the Lévy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for Lévy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo, expectation-maximisation, and sequential Monte Carlo.

Type: Original Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Lévy processes are commonly employed in the study of asset returns and derivative pricing models, as well as in the prediction of high-frequency trading returns [Reference Barndorff-Nielsen and Shephard3, Reference Madan, Carr and Chang34, Reference Christensen, Murphy and Godsill24]. In derivative pricing in particular, the Black–Scholes model specifies the dynamics of a financial market which holds derivative instruments such as options, futures, and swaps [Reference Black and Scholes7]. Yet the assumption that option prices are modelled by a stationary log-Gaussian process often fails to be validated by empirical data [Reference Black and Scholes7], motivating research into the relaxation of this assumption. Lévy processes such as the normal tempered stable process [Reference Barndorff-Nielsen and Shephard4, Section 9], the generalised hyperbolic process [Reference Eberlein15], or the variance-gamma process [Reference Madan, Carr and Chang34, Reference Udoye and Ekhaguere43] can be used to model the variance, or volatility, of the security as a stochastic process.

In biology, the generalised inverse Gaussian (GIG) process has been used to model spike train activity from neurons [Reference Iyengar and Liao25], and to model the likelihood of extreme meteorological events with high economic and social costs [Reference El Adlouni, Chebana and Bobée17]. The tempered stable process has also been extensively studied in relation to Lévy flight models [Reference Rosinski40] in physics, and the variance-gamma process has been used to analyse continuous trait evolution such as population growth [Reference Landis, Schraiber and Liang35, Reference Russo41], and in the analysis of nitrate concentration in the eastern U.S. [Reference Woodard, Wolpert and O’Connell48]. More recently, the leptokurtic properties of non-Gaussian Lévy processes have been exploited in the field of object tracking [Reference Gan, Ahmad and Godsill21] in order to capture abrupt changes in the kinematic state of a moving object, which in turn has applications to animal tracking and the analysis of hunting patterns.

We are here concerned with the analysis of different representations of Lévy-driven stochastic processes of the form

\begin{equation*} d\boldsymbol{X}(t) = \boldsymbol{A}\boldsymbol{X}(t)dt + \boldsymbol{h}dW(t), \ \boldsymbol{X}(t) \in \mathbb{R}^{P}, \end{equation*}

where $\boldsymbol{A}$ is a $P\times P$ matrix, $\boldsymbol{h}\in\mathbb{R}^P$ , and (W(t)) is a one-dimensional non-Gaussian Lévy process.

Based on infinite-series representations for $\left(W(t)\right)$ , tractable and conditionally Gaussian models can be developed for simulation and inference purposes; see, for example, [Reference Lemke and Godsill33, Reference Godsill, Riabiz and Kontoyiannis23, Reference Gan, Ahmad and Godsill21], where such models and inference procedures were developed for $\alpha$ -stable Lévy processes driving linear stochastic differential equations (SDEs). Here, by contrast, we study variance-mean mixture Lévy processes [Reference Barndorff-Nielsen and Shephard3], including (but not limited to) the normal-gamma, normal tempered stable, and generalised hyperbolic processes. In particular, we provide functional central-limit-theorem-like results for the convergence of the residual terms when series representations of such processes are truncated at a finite level, and for the corresponding Gaussian convergence of the above SDE’s residuals when it too is truncated in a corresponding fashion. The results are linked in spirit to the study of [Reference Asmussen and Rosinski2], although posed in terms of a different truncation of small jumps.

Lévy processes have been studied extensively via the general theory of stochastic processes with stationary, independent increments. Samorodnitsky and Taqqu [Reference Samorodnitsky and Taqqu42], Küchler and Tappe [Reference Küchler and Tappe31], and Barndorff-Nielsen and Shephard [Reference Barndorff-Nielsen and Shephard3, Reference Barndorff-Nielsen and Shephard4, Reference Barndorff-Nielsen and Shephard5] consider important special cases, including the tempered stable, inverse Gaussian, and $\alpha$ -stable processes. The relevant properties established are then used in the definition and analysis of broader classes of Lévy processes, such as the normal variance-mean (NVM) mixture processes [Reference Barndorff-Nielsen and Shephard4], which are formed by the subordination of Brownian motion to a non-negative process. Samorodnitsky and Taqqu also examine stochastic integrals with respect to $\alpha$ -stable random noise, providing a series representation closely related to the series representation of the $\alpha$ -stable process. Barndorff-Nielsen extends the series representation of $\alpha$ -stable stochastic integrals to ones driven by a non-negative Lévy process [Reference Barndorff-Nielsen and Shephard3, Section 8], and Rosinski studies an analogous representation for symmetric Gaussian mixture processes [Reference Rosinski38].

The breadth and depth of the study of Lévy processes have allowed for numerous studies into their applications in modelling rare-event phenomena, e.g. in finance [Reference Barndorff-Nielsen and Shephard3, Reference Christensen, Murphy and Godsill24, Reference Cont and Tankov12], physics [Reference Rosinski40], and biology [Reference Iyengar and Liao25, Reference El Adlouni, Chebana and Bobée17, Reference Woodard, Wolpert and O’Connell48, Reference Gan, Ahmad and Godsill21].

Efficient methods for simulating Lévy processes have been critical in successfully bridging the gap between theory and application. Box and Muller outline a method of generating normal random variates [Reference Box and Muller9], and Chambers, Mallows and Stuck study the generation of $\alpha$ -stable random variables, both of which are summarised in work by Barndorff-Nielsen [Reference Barndorff-Nielsen and Shephard5, Sections 2.3–2.4] and Samorodnitsky and Taqqu [Reference Samorodnitsky and Taqqu42]. While simulation from finite-activity processes is straightforward [Reference Cont and Tankov12], exact simulation from infinite-activity processes, such as those considered in this work, is impossible, because of the presence of an infinite number of small jumps in any finite time interval. Khintchine [Reference Khintchine29], Bondesson [Reference Bondesson8], and Ferguson and Klass [Reference Ferguson and Klass18] outline the inverse Lévy method for simulating jumps from infinite-activity processes with non-negative increments, and Rosinski [Reference Rosinski39] develops generalised shot-noise methods with thinning and rejection sampling for processes with Lévy densities that cannot be simulated directly using approaches such as the inverse Lévy method. In [Reference Godsill and Kndap22], Godsill and Kindap propose novel algorithms for simulating from the infinite-activity GIG process by combining previous approaches, paving the way for direct simulation of generalised hyperbolic jumps.

While most simulation methods in principle require the computation of an infinite sum, truncation of the series is of course required in practice. Asmussen and Rosinski [Reference Asmussen and Rosinski2] analyse the effect of strict truncation of the process jumps on the representation of the ‘residual process’, namely, the truncation error. They provide necessary and sufficient conditions for the convergence of the residual to a diffusion process, and several authors [Reference Deligiannidis, Maurer and Tretyakov13, Reference Dia14] provide bounds on the rate of convergence of the residual for specific classes of Lévy processes. Further work [Reference Carpentier, Duval and Mariucci11, Reference Vlad and Yifeng44] has also examined the Gaussian representation of strictly small jumps when studying SDEs. In the setting of SDEs driven by Lévy processes, the work of [Reference Kohatsu-Higa and Tankov30] provides a promising approach in which the process is simulated by ODE solvers between large jumps of the process, leading to an alternative and general methodology for nonlinear SDEs. In the context of the generalised shot-noise representation of more complex processes, Rosinski studies instead random Poisson truncations of the epochs of the process. This methodology is applied by Godsill et al. [Reference Godsill, Riabiz and Kontoyiannis23] to the $\alpha$ -stable process, where they provide verification of the convergence of the $\alpha$ -stable residual to a Brownian motion. Yet the analysis there is limited to the $\alpha$ -stable normal mixture process where the mean and standard deviation are proportional to an underlying stable process, and it does not consider the much broader class of NVM mixture processes, where the mean and variance are proportional to the subordinator.

A framework for performing inference on state space models described by linear Lévy-driven SDEs is presented in [Reference Godsill, Riabiz and Kontoyiannis23]. The problem of inference on linear Gaussian state space models has largely been solved through the development of the Kalman filter [Reference Kalman27]. Variants such as the extended Kalman filter and the unscented Kalman filter have also been introduced in order to handle nonlinearities [Reference Alspach and Sorenson1]. Inference on non-Gaussian models has been accomplished primarily through Bayesian modelling combined with sequential Monte Carlo methods (outlined e.g. in [Reference Cappé, Godsill and Moulines10, Reference Kantas28]), and this is exploited in [Reference Godsill, Riabiz and Kontoyiannis23] through the use of the Rao–Blackwellised particle filters to perform inference for a conditionally Gaussian state space model driven by Lévy noise. The algorithm there uses the Kalman filter on the linear conditionally Gaussian part, and it employs particle filtering to infer the distribution of the hidden process driven by the $\alpha$ -stable noise. The main aim of this paper is to provide the theoretical foundations with which to extend the ‘Lévy state space model’ [Reference Godsill, Riabiz and Kontoyiannis23] to a broader class of processes.

2. Background: Lévy processes and series representations

2.1. Lévy processes

A Lévy process, $(X(t))=(X(t)\;;\;t\geq 0)$ , is a real-valued, infinitely divisible stochastic process with stationary and independent increments. The log-characteristic function, commonly known as the characteristic exponent (CE) or cumulant function, of any Lévy process is given by [Reference Eberlein16]

(2.1) \begin{equation} K(t;\,\theta) \,:\!=\,\ln\mathbb{E}\!\left(e^{i\theta X(t)}\right) = ti\theta a - t\frac{1}{2}b^{2}\theta^{2} + t\int_{\mathbb{R}^*}\left[e^{i\theta x} -1 - \mathbb{1}(|x|<1)ix\theta\right]Q(dx), \end{equation}

where $\mathbb{R}^{*} \,:\!=\, \mathbb{R}\backslash \{0\}$ . The term $\mathbb{1}(|x|<1)ix\theta$ is a centring term that ensures convergence of the CE for processes for which $\int_{|x|<1}xQ(dx)$ is divergent, though it can be omitted for processes with finite first absolute moment. The Lévy triplet $(a, b^{2}, Q)$ uniquely defines the Lévy process [Reference Eberlein16], with $a \in \mathbb{R}$ , $b \in [0,\infty)$ , and where the Lévy measure, Q(dx), is a Poisson-process intensity measure defining the distribution of jumps in the Lévy process, satisfying

(2.2) \begin{equation} \int_{\mathbb{R}^*}\big(1 \wedge x^{2}\big)Q(dx) < \infty, \end{equation}

where $(a\wedge b)$ denotes the minimum value of a and b. See e.g. [Reference Barndorff-Nielsen and Shephard5, Section 2.3] for more details.

2.2. Subordinators

A subordinator $(Z(t))_{t\geq0}$ is a particular case of a Lévy process with non-negative increments, such that its paths are almost surely (a.s.) increasing [Reference Wolpert and Ickstadt47]. The Lévy measure of any subordinator, $Q_{Z}(dz)$ , satisfies [Reference Barndorff-Nielsen and Shephard5]

(2.3) \begin{equation} \int_{(0,\infty)} (1\wedge z)Q_{Z}(dz) < \infty. \end{equation}

Observe that this is a stricter condition than that required for general Lévy processes in (2.2). Consequently, the Lévy triplet of a subordinator has $b^{2} = 0$ and no centring term is required [Reference Wolpert and Ickstadt47]. The current work will consider only subordinators without drift; thus the CE for such a subordinator (Z(t)) will always be of the form

(2.4) \begin{equation} K_Z(t;\,\theta)\,:\!=\, \ln\mathbb{E}\big[e^{i\theta Z(t)}\big] = t\int_{(0,\infty)}\left(e^{i\theta z} - 1\right)Q_{Z}(dz). \end{equation}

The mean and variance of Z(t), when they exist, may be obtained for all t from the Lévy measure:

\begin{equation*} \mathbb{E}[Z(t)] = t\int_{(0,\infty)}z Q_Z(dz), \quad {Var}[Z(t)] = t\int_{(0,\infty)}z^{2}Q_{Z}(dz). \end{equation*}

Most but not all of the processes we consider here will have finite first and second moments, because their Lévy densities decay exponentially for large jump sizes; see Sections 3.1.4 and 3.3.2 of [Reference Barndorff-Nielsen and Shephard5]. We will also be concerned here with so-called infinite-activity subordinators, which exhibit infinitely many jumps in any finite time interval: $Q_Z((0,\infty)) = \infty$ . Combined with (2.2), this implies the presence of infinitely many small jumps ( $|x|<1$ ) within finite time intervals.
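As a concrete check of the displayed moment formulas, consider a gamma subordinator with Lévy density $\nu z^{-1}e^{-\beta z}$ (our notation; this is the measure (2.10) below with $\beta=\gamma^2/2$), whose marginal at time t is a Gamma law with shape $\nu t$ and rate $\beta$. The following minimal Python sketch, with illustrative parameter values of our own choosing, compares the Lévy-measure moments with Monte Carlo estimates:

```python
import numpy as np

# Gamma subordinator with Levy density q(z) = nu * z^{-1} * exp(-beta*z).
# From the Levy measure:  E[Z(t)] = t * int z q(z) dz = t*nu/beta,
#                        Var[Z(t)] = t * int z^2 q(z) dz = t*nu/beta**2.
nu, beta, t = 2.0, 1.0, 1.0
mean_levy = t * nu / beta
var_levy = t * nu / beta**2

# The marginal law is Gamma(shape = nu*t, rate = beta); check by Monte Carlo.
rng = np.random.default_rng(0)
z = rng.gamma(shape=nu * t, scale=1.0 / beta, size=200_000)
print(mean_levy, z.mean())   # both ~ 2.0
print(var_levy, z.var())     # both ~ 2.0
```

The agreement illustrates that, for subordinators with exponentially decaying Lévy densities, both integrals above are finite, consistent with the discussion in the text.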

2.3. Normal variance-mean processes

A normal variance-mean (NVM) process is defined as time-deformed Brownian motion, where jumps in a subordinator process (Z(t)) drive random time deformations of an independent Brownian motion (B(t)) as follows [Reference Barndorff-Nielsen and Shephard5]:

(2.5) \begin{equation} X(t) = \mu t + \mu_{W}Z(t) + \sigma_{W}B(Z(t)), \quad t\geq 0,\; \mu, \mu_{W} \in \mathbb{R}, \ \sigma_{W} \in (0,\infty). \end{equation}

Here (B(t)) is a standard one-dimensional Brownian motion and (Z(t)) is a subordinator process as in the previous section. We limit attention to the case $\mu=0$ without loss of generality. The parameter $\mu_{W}$ models the skewness of the jump distribution, with a fully symmetric process for $\mu_{W} = 0$ . The specification of $\left(Z(t)\right)$ , coupled with the choice of $\mu_{W}$ and $\sigma_{W}$ , allows for a broad family of heavy-tailed and skewed processes to be implemented. We can express the Lévy measure Q of any such process in terms of its subordinator’s Lévy measure $Q_Z$ :

(2.6) \begin{equation} Q(dx) = \int_{(0,\infty)}\mathcal{N}\big(dx;\,\mu_{W}z,\sigma_{W}^{2}z\big)Q_{Z}(dz), \end{equation}

where ${\mathcal N}(\cdot;\,\mu,\sigma^2)$ denotes the Gaussian law with mean $\mu$ and variance $\sigma^2$ . Finally, if $K_Z(t;\,\theta)$ is the Lévy–Khintchine exponent of Z(t) in (2.4), then the CE for the subordinated process is given by [Reference Cont and Tankov12, Section 4.4]

(2.7) \begin{align} K_{X}(t;\,\theta) = K_Z\!\left(t;\, {{\mu_{W}\theta +i \frac{1}{2}\sigma_{W}^{2}\theta^{2}}}\right) &= t\int_{(0,\infty)}\left[e^{\left(i\mu_{W}\theta - \frac{1}{2}\sigma_{W}^{2}\theta^{2}\right)z}-1\right]Q_{Z}(dz), \end{align}

from which we obtain the mean and variance directly in terms of the moments of the subordinator. Specifically, for $t\geq 0,$

(2.8) \begin{align} &\mathbb{E}[X(t)] = t\mu_{W}\int_{(0,\infty)}zQ_{Z}(dz) = t\mu_{W}\mathbb{E}\!\left[Z(1)\right] \end{align}

and

(2.9) \begin{align} &{Var}[X(t)] = t\mu_{W}^{2}\int_{(0,\infty)}z^{2}Q_{Z}(dz) + t\sigma_{W}^{2}\int_{0}^{\infty}zQ_{Z}(dz) = t\mu_{W}^{2}{Var}[Z(1)] + t\sigma_{W}^{2}\mathbb{E}[Z(1)], \end{align}

when these expectations exist.
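Formulas (2.8) and (2.9) can be checked by simulation. The sketch below (our own illustrative code, not from the paper) uses a gamma subordinator, so that Z(1) can be sampled exactly, and then forms $X(1) = \mu_W Z(1) + \sigma_W \sqrt{Z(1)}\,N$ with $N \sim \mathcal{N}(0,1)$, as in (2.5); all parameter values are illustrative:

```python
import numpy as np

# Normal-gamma process at t = 1:  Z(1) ~ Gamma(shape=nu, rate=beta), beta = gamma^2/2,
# and X(1) | Z(1) ~ N(mu_W * Z(1), sigma_W^2 * Z(1)).
rng = np.random.default_rng(1)
nu, beta, mu_w, sigma_w = 2.0, 1.0, 0.7, 1.3
n = 500_000

z = rng.gamma(shape=nu, scale=1.0 / beta, size=n)          # subordinator value at t = 1
x = mu_w * z + sigma_w * np.sqrt(z) * rng.standard_normal(n)

mean_th = mu_w * (nu / beta)                               # (2.8): mu_W * E[Z(1)]
var_th = mu_w**2 * (nu / beta**2) + sigma_w**2 * (nu / beta)  # (2.9)
print(mean_th, x.mean())
print(var_th, x.var())
```

With these values, (2.8) gives 1.4 and (2.9) gives 4.36, and the Monte Carlo estimates agree to within sampling error.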

We will consider examples based on the normal-gamma, normal tempered stable, and generalised hyperbolic processes, which are obtained via Brownian motion subordinated to the gamma, tempered stable, and GIG processes, respectively, with Lévy measures shown in (2.10), (2.11), (2.12), respectively:

(2.10) \begin{align} &Q_{Z}(dz) = \nu z^{-1}\exp\!\left({-}\frac{1}{2}\gamma^{2}z\right)dz, \end{align}

(2.11) \begin{align} &Q_{Z}(dz) = Az^{-1-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}z\right)dz, \end{align}

(2.12) \begin{align} &Q_{Z}(dz) = z^{-1}\exp \!\left({-}\frac{\gamma^{2}}{2}z\right)\left[\max(0, \lambda)+\frac{2}{\pi^{2}}\int_{0}^{\infty}\frac{1}{y\!\left|H_{|\lambda|}(y)\right|^{2}} \exp \!\left( -\frac{zy^{2}}{2\delta^{2}}\right) dy\right] dz. \end{align}

Appropriate ranges for the different parameters are specified in Section 6.

2.4. Generalised shot-noise representation

The shot-noise representation of a Lévy process is fundamental to the simulation methods in [Reference Godsill, Riabiz and Kontoyiannis23], [Reference Rosinski39], and [Reference Godsill and Kndap22]. Adopting the perspective of viewing a Lévy process as a point process defined on $[0, T] \times \mathbb{R}^{d}$ , Rosinski [Reference Rosinski39] considers the equivalent representation

(2.13) \begin{equation} N = \sum_{i=1}^{\infty}\delta_{V_{i}, H(Z_{i}, U_{i})}, \end{equation}

where $\delta_x$ denotes the Dirac measure at x, and $\left\{V_{i}\right\}$ is a sequence of independent and identically distributed (i.i.d.) uniforms, $V_{i} \sim U[0,T]$ , independent of $\left\{Z_{i}, U_{i}\right\}$ . Intuitively, $\{V_{i}\}$ represent the arrival times of the jumps $X_{i} = H(Z_{i}, U_{i})$ in the process, and $\{Z_i\}$ represent a non-increasing set of jump sizes drawn from the subordinator process.

For the NVM Lévy process, $H(\cdot)$ is related to the time-domain representation of the process in (2.5) via

(2.14) \begin{equation} H(Z_{i}, U_{i}) = \mu_{W}Z_{i} + \sigma_{W}\sqrt{Z_i}U_{i}, \quad \ U_{i} \overset{i.i.d.}{\sim} \mathcal{N}(0,1). \end{equation}

The ordered jumps $\{Z_i\}$ are typically simulated through generation of the ordered epochs $\{\Gamma_i\}$ of a standard Poisson point process, obtained as partial sums of i.i.d. exponential random variables, $e_{i} \overset{i.i.d.}{\sim} {\textrm{Exp}}(1)$: $\Gamma_{i} = \Gamma_{i-1} + e_{i}$, with $\Gamma_{0}=0$. Then we obtain the ith ordered subordinator jump through a function $Z_{i} = h(\Gamma_{i})$ as

\begin{equation*} Z(t) = \sum_{i=1}^{\infty}h(\Gamma_{i})\mathbb{1}(V_{i} \leq t), \end{equation*}

where the map $h(\cdot)$ can be expressed in terms of the upper tail of $Q_Z$ [Reference Barndorff-Nielsen and Shephard5, Section 3.4.1] as $h(\Gamma_{i}) = \inf \{z \in \mathbb{R} : Q_Z([z,\infty)) < \Gamma_i\},$ which may or may not be available in closed form, depending on the particular choice of $Q_Z$ . If it is not available, then rejection sampling or thinning methods can be employed, as in [Reference Godsill and Kndap22], which leads to suitable algorithms for all of the models considered here.
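To illustrate the thinning route when $h(\cdot)$ is not available in closed form: for the gamma Lévy density $\nu z^{-1}e^{-\beta z}$, one may dominate it by $\nu z^{-1}(1+\beta z)^{-1}$, whose upper tail $\nu\ln\!\big(1+1/(\beta z)\big)$ inverts in closed form, and accept each candidate jump with probability $(1+\beta z)e^{-\beta z}\leq 1$. This is the style of algorithm developed in [Reference Godsill and Kndap22]; the implementation below is our own illustrative sketch, truncated at a small level $\epsilon$ for practical termination (anticipating Section 3):

```python
import numpy as np

def gamma_subordinator_jumps(nu, beta, T, eps, rng):
    """Jumps and times of a gamma subordinator on [0, T] via thinned shot noise.

    Candidates come from the dominating measure Q0(dz) = nu / (z (1 + beta z)) dz,
    whose tail inverts to z = 1 / (beta (exp(Gamma / (nu T)) - 1)); each candidate
    is accepted with probability Q_Z/Q0 = (1 + beta z) exp(-beta z).
    """
    jumps, times = [], []
    gamma_i = 0.0
    while True:
        gamma_i += rng.exponential()                  # ordered epochs of a unit-rate Poisson process
        z = 1.0 / (beta * np.expm1(gamma_i / (nu * T)))   # candidates are strictly decreasing
        if z < eps:                                   # all later candidates are smaller: stop
            return np.array(jumps), np.array(times)
        if rng.uniform() < (1.0 + beta * z) * np.exp(-beta * z):  # thinning step
            jumps.append(z)
            times.append(rng.uniform(0.0, T))         # i.i.d. jump times V_i ~ U[0, T]

rng = np.random.default_rng(2)
z, v = gamma_subordinator_jumps(nu=2.0, beta=1.0, T=1.0, eps=1e-6, rng=rng)
print(np.sum(z[v <= 1.0]))   # one realisation of Z(1); E[Z(1)] = nu/beta = 2 here
```

Because the candidates are strictly decreasing, stopping at the first candidate below $\epsilon$ yields exactly the jumps of size at least $\epsilon$.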

Substituting (2.13) into the Lévy–Khintchine representation (2.1), the Lévy process can be expressed as the convergent infinite series [Reference Rosinski39]

(2.15) \begin{equation} X(t) = \sum_{i=1}^{\infty}\left[H(Z_i, U_{i})\mathbb{1}(V_{i} \leq t) -tb_{i}\right], \end{equation}

where $b_{i}$ is a compensator or centring term [Reference Rosinski39] that ensures the convergence of the series. A Lévy process need not be compensated if and only if

\begin{equation*} \int_{\mathbb{R}^*}(1 \wedge |x|)Q(dx) < \infty; \end{equation*}

see [Reference Wolpert and Ickstadt47, Reference Winkel46]. In view of (2.3), any subordinator satisfies this condition, and hence so does the corresponding NVM process; see Appendix C. Therefore, none of the Lévy processes studied in this paper requires compensating terms, and we take $b_{i} = 0 $ for all i throughout.

3. Random truncation of subordinator

Consider the problem of simulating an NVM Lévy process via the shot-noise representation in the previous section. Suppose we have access to a simulation algorithm for subordinator generation, which is capable of producing a non-increasing set of random jumps $\{Z_i\}$ , $i=1,2,\ldots$ , and associated uniformly distributed random jump times $\{V_i\}$ . Exact simulation for infinite-activity processes via (2.15) is impossible with finite computational resources, so simulation of a truncated process is typically implemented. In this paper, we consider the effect of random Poisson truncations on the jumps of the subordinator process as in [Reference Godsill, Riabiz and Kontoyiannis23, Reference Rosinski39],

\begin{align*} \hat{X}_{\epsilon}(t) & = \sum_{i: Z_i\geq {\epsilon}}H(Z_{i}, U_{i})\mathbb{1}(V_{i} \leq t),\\ X_{\epsilon}(t) & = {X}(t) - \hat{X}_{\epsilon}(t) = \sum_{i: Z_i < \epsilon}H(Z_{i}, U_{i})\mathbb{1}(V_{i} \leq t),\end{align*}

where, as before, H is defined by $H(Z_{i}, U_{i}) = \mu_{W}Z_i + \sigma_{W}\sqrt{Z_i}U_{i}$ , with $U_{i}$ being i.i.d. $\mathcal{N}(0,1).$ Then $\hat{X}_\epsilon(t)$ is the process X(t) with subordinator jumps truncated at level $\epsilon$ , and $X_{\epsilon}(t)$ is the remainder, corresponding to the small jumps. Intuitively, there is a trade-off between the computational cost of computing $\hat{X}_\epsilon(t)$ , how accurately it approximates the true process X(t), and whether a Gaussian approximation to $X_{\epsilon}(t)$ is valid.
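The construction of $\hat{X}_\epsilon$ can be sketched end to end. The code below (our own illustrative version, with illustrative parameter values) simulates a truncated normal tempered stable process: subordinator candidates are drawn from the stable dominating measure $Az^{-1-\kappa}dz$, whose tail $(A/\kappa)z^{-\kappa}$ inverts in closed form, tempering is imposed by thinning with probability $e^{-\beta z}$ (with $\beta=\gamma^{1/\kappa}/2$ as in (2.11)), and each retained jump receives a Gaussian mark via (2.14):

```python
import numpy as np

def truncated_nvm_path(A, kappa, beta, mu_w, sigma_w, T, eps, rng):
    """Truncated NVM shot-noise path hat{X}_eps for a normal tempered stable process."""
    gamma_i, jumps, times, marks = 0.0, [], [], []
    while True:
        gamma_i += rng.exponential()                      # unit-rate Poisson epochs
        z = (A * T / (kappa * gamma_i)) ** (1.0 / kappa)  # stable candidate (decreasing)
        if z < eps:                                       # random Poisson truncation level
            break
        if rng.uniform() < np.exp(-beta * z):             # tempering via thinning
            jumps.append(z)
            times.append(rng.uniform(0.0, T))             # V_i ~ U[0, T]
            marks.append(rng.standard_normal())           # U_i ~ N(0, 1)
    z, v, u = map(np.array, (jumps, times, marks))

    def X_hat(t):
        # hat{X}_eps(t) = sum_{i: Z_i >= eps} H(Z_i, U_i) 1(V_i <= t), H as in (2.14)
        keep = v <= t
        return np.sum(mu_w * z[keep] + sigma_w * np.sqrt(z[keep]) * u[keep])

    return X_hat

rng = np.random.default_rng(3)
X = truncated_nvm_path(A=1 / np.sqrt(2 * np.pi), kappa=0.5, beta=1.35**2 / 2,
                       mu_w=0.0, sigma_w=1.0, T=1.0, eps=1e-6, rng=rng)
print(X(0.5), X(1.0))   # one sample path evaluated at two times
```

With $\kappa=1/2$, $A=\delta/\sqrt{2\pi}$, and $\delta=1$, the subordinator is inverse Gaussian with $\mathbb{E}[Z(1)]=\delta/\gamma\approx 0.74$, so for $\mu_W=0$, $\sigma_W=1$ one expects $\mathbb{E}[\hat{X}_\epsilon(1)^2]\approx 0.74$ for small $\epsilon$.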

Figure 1. Left: ten sample paths from a truncated gamma process. Right: histogram of $N=10^5$ process values at $t=1$ . Both are generated with $\epsilon = 10^{-10}$ . The solid line is the true density of the original process at time $t=1$ .

Figure 2. Left: ten sample paths from a truncated tempered stable process. Right: Q–Q plot of $N=10^5$ truncated process values at $t=1$ versus $N=10^5$ samples from the true distribution of the process at $t=1$ . Both are generated with $\epsilon = 10^{-10}$ .

The left-hand plots in Figures 1, 2, and 3 show sample paths from the truncated versions of the gamma (with parameters $\gamma=\sqrt{2}$ and $\nu=2$ ), tempered stable (with parameters $\kappa=1/2$ , $\gamma=1.35$ , and $\delta=1$ ), and GIG (with parameters $\delta=1/3$ , $\gamma=\sqrt{2}$ , and $\lambda=0.2$ ) subordinators, using the above methodology with $\epsilon=10^{-10}$ , as described in [Reference Godsill and Kndap22]. See Section 6 for the precise definitions of these processes. The corresponding right-hand plots compare the empirical distributions of $N=10^5$ truncated process values at time $t=1$ with random variates from the exact (untruncated) marginal distribution of each process at time $t=1$ . The truncation levels $\epsilon$ used here are very small and could lead to an infeasibly large computational burden in practical use. This motivates our justification of Gaussian approximations to the residuals in the following sections, for settings where larger values of $\epsilon$ are used for computational reasons.

Figure 3. Left: ten sample paths from a truncated GIG process. Right: Q–Q plot of $N=10^4$ truncated process values at $t=1$ versus $N=10^4$ samples from the true distribution of the process at $t=1$ . Both are generated with $\epsilon = 10^{-6}$ .

4. Gaussian process convergence

4.1. Preliminaries

Consider a subordinator (Z(t)) with Lévy measure $Q_Z$ . The corresponding process with jumps truncated at $\epsilon$ is denoted by $(Z_\epsilon(t))$ and has Lévy measure $Q_{Z_\epsilon}(B)=Q_Z(B\cap (0,\epsilon])$ for all Borel sets B. The corresponding residual NVM process, denoted by $(X_\epsilon(t))$ , has Lévy measure

(4.1) \begin{equation}Q_{X_\epsilon}(dx)=\int_{0}^{\epsilon}{\mathcal {N}}\big(dx|\mu_{W} z, \sigma_{W}^2z\big)Q_Z(dz),\end{equation}

and its moments can be obtained from (2.8) and (2.9):

\begin{equation*}\mathbb{E}[X_\epsilon(t)]=t\int_{-\infty}^\infty x Q_{X_\epsilon}(dx)=t\mu_{W} M^{(1)}_{Z_\epsilon}\end{equation*}

and

(4.2) \begin{equation} t\sigma_\epsilon^2=\mathbb{Var} \!\left[X_\epsilon(t)\right]=t\int_{-\infty}^\infty x^2 Q_{X_\epsilon}(dx)=t\Big[\mu_{W}^2M^{(2)}_{Z_\epsilon}+\sigma_{W}^2M^{(1)}_{Z_\epsilon}\Big], \end{equation}

where

(4.3) \begin{equation} M^{(n)}_{Z_\epsilon}=\int_0^\epsilon z^nQ_{Z_\epsilon}(dz)<\infty, \quad n\geq 1. \end{equation}

Note that these moments are all well defined and finite since $Q_{Z_\epsilon}$ satisfies (2.3) and for all $n\geq 1$ we have $\lim_{\epsilon\rightarrow0}M_{Z_{\epsilon}}^{(n)} = 0$ .

Note also that, for any $0< \epsilon \leq 1$ and $m<n$ ,

(4.4) \begin{equation}M^{(n)}_{Z_\epsilon}\leq \epsilon^{n-m} M^{(m)}_{Z_\epsilon}, \quad n> m\geq 1,\end{equation}

since $x^n\leq \epsilon^{n-m} x^m$ for $0<x\leq \epsilon$ . In particular this implies that

\begin{equation*}\sigma_\epsilon^2=\mu_{W}^2M^{(2)}_{Z_\epsilon}+\sigma_{W}^2M^{(1)}_{Z_\epsilon}\leq M^{(1)}_{Z_\epsilon}\left(\mu_{W}^2\epsilon+\sigma_{W}^2\right)\overset{\epsilon\rightarrow 0}{\rightarrow} 0\end{equation*}

and that

(4.5) \begin{equation}\frac{{\sigma_\epsilon}^4}{M^{(2)}_{Z_\epsilon}}=\frac{\left(\mu_{W}^2M^{(2)}_{Z_\epsilon}+\sigma_{W}^2M^{(1)}_{Z_\epsilon}\right)^2}{M^{(2)}_{Z_\epsilon}}\xrightarrow{\epsilon \rightarrow 0} \sigma_{W}^4\lim_{\epsilon\rightarrow 0}\frac{{M^{(1)}_{Z_\epsilon}}^2}{M^{(2)}_{Z_\epsilon}},\end{equation}

whenever the limit $\lim_{\epsilon\rightarrow 0}\frac{{M^{(1)}_{Z_\epsilon}}^2}{M^{(2)}_{Z_\epsilon}}$ exists.
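The behaviour of the ratio ${M^{(1)}_{Z_\epsilon}}^2/M^{(2)}_{Z_\epsilon}$ (equivalently, its reciprocal), which governs the Gaussian limit of Section 4.2, can be explored numerically. For the gamma density $\nu z^{-1}e^{-\beta z}$ a direct expansion gives $M^{(2)}_{Z_\epsilon}/{M^{(1)}_{Z_\epsilon}}^2 \to 1/(2\nu) > 0$, while for the tempered stable density $Az^{-1-\kappa}e^{-\beta z}$ with $\kappa\in(0,1)$ the same ratio decays like $\epsilon^{\kappa}$. A quadrature sketch (our own parameter choices):

```python
import numpy as np

# M^{(n)}_{Z_eps} = int_0^eps z^n Q_Z(dz), computed by midpoint quadrature for:
#   gamma density:           q(z) = nu * z^{-1} * exp(-beta*z)
#   tempered stable density: q(z) = A * z^{-1-kappa} * exp(-beta*z)
def moments(levy_density, eps, n_grid=200_000):
    z = (np.arange(n_grid) + 0.5) * (eps / n_grid)   # midpoints avoid the z = 0 singularity
    w = eps / n_grid
    q = levy_density(z)
    return np.sum(z * q) * w, np.sum(z**2 * q) * w   # M^{(1)}, M^{(2)}

nu, beta, A, kappa = 2.0, 1.0, 0.4, 0.5
gamma_q = lambda z: nu / z * np.exp(-beta * z)
ts_q = lambda z: A * z ** (-1.0 - kappa) * np.exp(-beta * z)

for eps in [1e-1, 1e-3, 1e-5]:
    m1g, m2g = moments(gamma_q, eps)
    m1t, m2t = moments(ts_q, eps)
    print(eps, m2g / m1g**2, m2t / m1t**2)   # gamma: -> 1/(2*nu) = 0.25;  TS: -> 0 like eps**kappa
```

This suggests the qualitative difference between the two families: the tempered stable ratio vanishes as $\epsilon\to 0$, whereas the gamma ratio stabilises at a strictly positive constant.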

4.2. Gaussianity of the limit of $X_{\epsilon}$

The following is our first main result; it gives conditions under which the residual process corresponding to an NVM Lévy process with jumps truncated at level $\epsilon$ converges to Brownian motion as $\epsilon\to 0$ . An analogous one-dimensional result for $\alpha$ -stable processes is established, along with corresponding convergence bounds, in [Reference Riabiz, Ardeshiri, Kontoyiannis and Godsill37].

Theorem 4.1. Consider a truncated NVM Lévy process $X_{\epsilon}=(X_{\epsilon}(t))$ with Lévy measure as in (4.1). Let the standardised process $Y_\epsilon$ be defined as $Y_\epsilon(t)=(X_\epsilon(t)-\mathbb{E}[X_\epsilon(t)])/\sigma_\epsilon$ , $t\geq 0$ . If

(4.6) \begin{align} \lim_{\epsilon \rightarrow 0}\frac{M_{Z_\epsilon}^{(2)}}{{M_{Z_\epsilon}^{(1)}}^2} = 0, \end{align}

then $Y_\epsilon$ converges weakly to a standard Brownian motion $W=(W(t))$ in D[0,1] with the topology of uniform convergence.

Proof. The main content of the proof is in establishing the claim that $X_\epsilon(1)$ , properly standardised, converges to a Gaussian under (4.6). For that, it suffices to show that the CE of the standardised version of $X_\epsilon(1)$ converges pointwise to $-u^2/2$ . The CE of $X_\epsilon(1)$ is given as in (2.7), by

\begin{align*} \phi^\epsilon_{X}(u)=\int_{\mathbb{R}^*} (\exp(iux)-1)Q_X^\epsilon(dx)=\int_0^\epsilon\left[\exp\Big(iu\mu_{W}z-\frac{1}{2}{u^2}\sigma_{W}^2z\Big)-1\right]Q_Z(dz),\end{align*}

where $Q_X^\epsilon(dx)$ is the Lévy measure for $X_\epsilon$ , having the same structure as (4.1). The CE for $\tilde{X}_\epsilon(t)={X}_\epsilon(t)-\mathbb{E}[{X}_\epsilon(t)]$ is then given by

\begin{align*} \phi_{\tilde{X}}^{\epsilon}(u) = \phi_{X}^{\epsilon}(u) - iu\int_{\mathbb{R}^*}xQ_{X}^{\epsilon}(dx) = \int_0^\epsilon\left[\exp\!\left(iu\mu_{W}z-\frac{1}{2}u^2\sigma_{W}^2z\right)-1-iu\mu_{W}z\right]Q_{Z}(dz).\end{align*}

Note that $X_\epsilon$ has finite mean, $\int_{\mathbb{R}^*}x Q_X^\epsilon(dx)=\mu_{W}M_{Z_\epsilon}^{(1)}<\infty$ , by the finiteness of $M_{Z_\epsilon}^{(1)}$ in (4.3). Now, if we scale the centred process, $\tilde{X}_{\epsilon}$ , to have unit variance at $t=1$ , i.e., if we let $Y_\epsilon(t)=\tilde{X}_\epsilon(t)/\sigma_\epsilon,$ then the CE for $Y_\epsilon(1)$ becomes

\begin{align*} \phi^\epsilon_Y(u) = \phi^\epsilon_{\tilde{X}}(u/\sigma_\epsilon)&= \int_{0}^{\epsilon}\left[\exp\!\left(v\frac{z}{\sigma_\epsilon^2}\right)-1-iu\mu_{W}\frac{z}{\sigma_\epsilon}\right]Q_Z(dz)\\ &=\int_0^\epsilon\left[\exp\!\left(v\frac{z}{\sigma_\epsilon^2}\right)-1-v\frac{z}{\sigma_\epsilon^{2}}-\frac{1}{2}u^2\sigma_{W}^2\frac{z}{\sigma_\epsilon^2}\right]Q_Z(dz)\\ &= \int_{0}^{\epsilon}\left[\exp\!\left(v\frac{z}{\sigma_\epsilon^2}\right)-1-v\frac{z}{\sigma_\epsilon^{2}}\right]Q_Z(dz)-\frac{1}{2}u^2\sigma_{W}^2\frac{M^{(1)}_{Z_\epsilon}}{\sigma_\epsilon^2}\\ &=\varphi_\epsilon(u)-\frac{1}{2}u^2\psi_\epsilon,\end{align*}

where $v=iu\mu_{W}\sigma_\epsilon-u^2\sigma_{W}^2/2$ , $\varphi_\epsilon(u)=\int_{0}^{\epsilon}\big[\exp\big(v\frac{z}{\sigma_\epsilon^2}\big)-1-v\frac{z}{\sigma_\epsilon^{2}}\big]Q_Z(dz)$ , and $\psi_\epsilon=\sigma_{W}^2\frac{M^{(1)}_{Z_\epsilon}}{\sigma_\epsilon^2}$ .

Consider the difference between $\exp(\phi^\epsilon_Y(u))$ and the CF of the standard normal:

\begin{align*} e_\epsilon(u)&\,:\!=\,\exp\!\left[\varphi_\epsilon(u)-\frac{1}{2}u^2\psi_\epsilon\right]-\exp\!\left({-}\frac{u^2}{2}\right)\\ &=\exp\!\left({-}\frac{u^2}{2}\right)\left\{\exp\!\left[\varphi_\epsilon(u)+\frac{1}{2}u^2(1-\psi_\epsilon)\right]-1\right\}.\end{align*}

First we establish bounds on $\varphi_\epsilon$ and $1-\psi_\epsilon$ . For $1-\psi_\epsilon$ , from (4.2) and (4.4) we have

\begin{equation*}1-\psi_\epsilon=\frac{\mu_{W}^2M^{(2)}_{Z_\epsilon}}{{\sigma_\epsilon}^2}=\frac{\mu_{W}^2M^{(2)}_{Z_\epsilon}}{\mu_{W}^2M^{(2)}_{Z_\epsilon}+\sigma_{W}^2M^{(1)}_{Z_\epsilon}}=\frac{\mu_{W}^2}{\mu_{W}^2+\sigma_{W}^2M^{(1)}_{Z_\epsilon}/M^{(2)}_{Z_\epsilon}};\end{equation*}

therefore, since clearly $1-\psi_\epsilon\geq 0$ ,

(4.7) \begin{equation}0\leq 1-\psi_\epsilon\leq \frac{\mu_{W}^2}{\mu_{W}^2+\sigma_{W}^2/\epsilon}\xrightarrow[]{\epsilon\rightarrow 0}0.\end{equation}

For $\varphi_\epsilon$ , first recall that for any complex z with negative real part,

(4.8) \begin{align} \Bigg|\exp(z)-\sum_{k=0}^{n}z^{k}/k!\Bigg|\leq |z|^{n+1}/(n+1)!, \end{align}

and so

(4.9) \begin{align} |\varphi_\epsilon(u)|&=\left|\int_0^\epsilon\big(\!\exp\big(vz/\sigma_\epsilon^2\big)-1-vz/\sigma_\epsilon^2\big)Q_Z(dz)\right|\nonumber\\ &\leq \int_0^\epsilon \left|\big(\!\exp\big(vz/\sigma_\epsilon^2\big)-1-vz/\sigma_\epsilon^2\big)\right|Q_Z(dz)\nonumber\\ &\leq \int_0^\epsilon \frac{1}{2}|v|^2\frac{z^2}{\sigma_\epsilon^{4}}Q_Z(dz) \end{align}
(4.10) \begin{align}=\frac{1}{2}|v|^2 \frac{M^{(2)}_{Z_\epsilon}}{\sigma_\epsilon^4}.\end{align}

The final integral is well defined by (4.3). Since $|v|^2 = u^2\mu_{W}^2\sigma_\epsilon^2+u^4\sigma_{W}^4/4$ , using (4.5) and (4.7), we have

\begin{align*} \lim_{\epsilon\rightarrow 0}|v|^2\frac{M^{(2)}_{Z_\epsilon}}{\sigma_\epsilon^4}&= \lim_{\epsilon\rightarrow 0}\left[u^{2}\frac{\mu_{W}^{2}M_{Z_{\epsilon}}^{(2)}}{\sigma_{\epsilon}^{2}}+\frac{1}{4}u^{4}\sigma_{W}^{4}\frac{M_{Z_{\epsilon}}^{(2)}}{\sigma_{\epsilon}^{4}}\right]\\ &=\frac{1}{4}u^{4}\sigma_{W}^4\lim_{\epsilon\rightarrow 0}\frac{M^{(2)}_{Z_\epsilon}}{\Big(\mu_{W}^2M^{(2)}_{Z_\epsilon}+\sigma_{W}^2M^{(1)}_{Z_\epsilon}\Big)^2}\\ &=\frac{1}{4}u^4\lim_{\epsilon\rightarrow 0} \frac{M^{(2)}_{Z_\epsilon}}{\big(M^{(1)}_{Z_\epsilon}\big)^2},\end{align*}

where we used the fact that $M^{(2)}_{Z_\epsilon}/M^{(1)}_{Z_\epsilon} \leq \epsilon \rightarrow 0$ from (4.4). Combining the bounds in (4.7) and (4.10), we have

\begin{align*} \Big|\varphi_\epsilon(u)+\frac{1}{2}u^2(1-\psi_\epsilon)\Big|&\leq \frac{1}{2}|v|^2 \frac{M^{(2)}_{Z_\epsilon}}{\sigma_\epsilon^4}+\frac{1}{2}u^2\frac{\epsilon\mu_{W}^2}{\epsilon\mu_{W}^2+\sigma_{W}^2}\xrightarrow{\epsilon\rightarrow 0} \frac{u^4}{8}\lim_{\epsilon\rightarrow 0} \frac{M^{(2)}_{Z_\epsilon}}{{M^{(1)}_{Z_\epsilon}}^2}.\end{align*}

This bound is finite for any $|u|<\infty$ by the properties of $\sigma_\epsilon$ and $M^{(2)}_{Z_\epsilon}$ , and hence, under (4.6),

\begin{align*} |e_\epsilon(u)| &=\left|\exp\!\left({-}\frac{u^2}{2}\right)\left\{\exp\!\left[\varphi_\epsilon(u)+\frac{1}{2}u^2(1-\psi_\epsilon)\right]-1\right\}\right|\\ &\leq \exp\!\left({-}\frac{u^2}{2}\right)\left(\exp\!\left|\varphi_\epsilon(u)+\frac{1}{2}u^2(1-\psi_\epsilon)\right|-1\right) \xrightarrow[]{\epsilon\rightarrow 0} 0.\end{align*}

This proves the claimed pointwise convergence $\exp(\phi^\epsilon_Y(u))\rightarrow \exp({-}u^2/2)$ as $\epsilon\to 0$ .

Next we argue that the condition (4.6) is in fact sufficient for the process-level convergence claimed in the theorem. Note that the result of the claim also implies uniform convergence of the relevant characteristic functions on compact intervals [Reference Kallenberg26, Theorem 5.3, p. 86]. Now, it is straightforward, from the definition of a Lévy process, that the increments $Y_\epsilon(t)-Y_\epsilon(s)$ converge in distribution to the corresponding (Gaussian) increments of a Brownian motion, for all $s < t$ . We proceed to verify the requirement (III) in [Reference Pollard36, Theorem V.19], i.e. that given $\delta > 0$ , there are $\alpha > 0$ , $\beta>0$ , and $\epsilon_0 > 0$ such that ${\mathbb P}\{|Y_\epsilon(t) - Y_\epsilon(s)| \leq \delta\} \geq \beta$ whenever $|t - s| < \alpha$ and $\epsilon < \epsilon_0$ . To see this, note that the Lévy process $(Y_\epsilon(t) : t \in [0, 1])$ is centred, so for any $1 \geq t \geq s \geq 0$ , by Chebyshev,

\begin{align*}{\mathbb P}\{|Y_\epsilon(t) - Y_\epsilon(s)| > \delta\}&={\mathbb P}\{|Y_\epsilon(t - s)| > \delta\}\\&\leq \frac{1}{\delta^2} {\textrm{Var}}(Y_\epsilon(t - s))\\&=\frac{(t - s){\textrm{Var}}(Y_\epsilon(1))}{\delta^2}= \frac{(t - s)}{\delta^2}.\end{align*}

By taking $\delta\in (0, 1)$ , we can choose any $\epsilon_0$ and $\alpha = \delta^3$ , so that ${\mathbb P}\{|Y_\epsilon(t) - Y_\epsilon(s)| > \delta\}\leq \alpha/\delta^2=\delta$ whenever $|t - s| < \alpha$ . Hence ${\mathbb P}\{|Y_\epsilon(t) - Y_\epsilon(s)| \leq \delta\}\geq 1-\delta$ , so we can set $\beta=1-\delta$ , and we conclude that $(Y_{\epsilon}(t)) \xrightarrow{\epsilon \rightarrow 0} (W(t))$ in D[0,1], as claimed.

Next we show that, under mild additional assumptions, the condition (4.6) is in fact necessary for the Gaussian limit in Theorem 4.1 to hold.

Theorem 4.2. Consider a truncated NVM Lévy process $X_{\epsilon}=(X_{\epsilon}(t))$ , and define the associated standardised process $Y_\epsilon$ as in Theorem 4.1. If the condition (4.6) does not hold and, moreover,

(4.11) \begin{align}L_1\,:\!=\,\liminf_{\epsilon \rightarrow 0}\frac{M_{Z_\epsilon}^{(2)}}{{M_{Z_\epsilon}^{(1)}}^2} >0\quad \mbox{and}\quad L_2\,:\!=\,\sigma_W^6\limsup_{\epsilon\rightarrow 0}\frac{M^{(3)}_{Z_\epsilon}}{\sigma_\epsilon^6}>0,\end{align}

then $Y_\epsilon(1)$ does not converge to ${\mathcal N}(0,1)$ in distribution as $\epsilon\to 0$ .

Proof. Suppose the conditions in (4.11) hold. We will assume that $Y_\epsilon(1)$ converges to ${\mathcal N}(0,1)$ in distribution as $\epsilon\to 0$ , and derive a contradiction.

In the notation of the proof of Theorem 4.1, expanding the exponential series in (4.9) for one more term than before and using (4.8) yields

\begin{align*} \int_0^\epsilon\big(\exp\big(vz/\sigma_\epsilon^2\big)-1-vz/\sigma_\epsilon^2-v^2z^2/\big(2\sigma_\epsilon^4\big)\big)Q(dz)=D_\epsilon(v), \end{align*}

with $|D_\epsilon(v)|\leq |v|^3M^{(3)}_{Z_\epsilon}/(3!\sigma_\epsilon^6)$ . Hence, rearranging and integrating, we obtain

(4.12) \begin{equation}\varphi_\epsilon(u)=\int_0^\epsilon\big(\!\exp\big(vz/\sigma_\epsilon^2\big)-1-vz/\sigma_\epsilon^2\big)Q(dz)=F_\epsilon(v)+D_\epsilon(v), \end{equation}

where $v=iu\mu_{W}\sigma_\epsilon-u^2\sigma_{W}^2/2$ as before and $F_\epsilon(v)\,:\!=\,v^2 M^{(2)}_{Z_\epsilon}/\big(2\sigma_\epsilon^4\big).$ The assumption that $Y_\epsilon(1)$ converges to ${\mathcal N}(0,1)$ implies, in view of (4.7), that $|\varphi_\epsilon(u)|\to 0$ for all u.

We consider several cases. First, we note that $F_\epsilon(v)$ cannot converge to zero for any $u\neq 0$ , since, by (4.11),

(4.13) \begin{align} \liminf_{\epsilon\rightarrow 0} {|}F_\epsilon(v){|} =\frac{1}{4}u^4\sigma_{W}^4\liminf_{\epsilon\rightarrow 0}\frac{M^{(2)}_{Z_\epsilon}}{\sigma_\epsilon^4} =\frac{1}{4}u^4\sigma_{W}^4\liminf_{\epsilon \rightarrow 0}\frac{M^{(2)}_{Z_\epsilon}}{{M^{(1)}_{Z_\epsilon}}^2} =\frac{1}{4}u^4\sigma_{W}^4 L_1>0. \end{align}

Second, we note that $D_\epsilon(v)$ also cannot converge to zero, because then $F_\epsilon(v)$ would need to converge to zero as well. The only remaining case is if, for every u, neither $F_\epsilon(v)$ nor $D_\epsilon(v)$ vanishes as $\epsilon\to0$ , while their sum does vanish. By (4.11) we have

\begin{equation*}\limsup_{\epsilon\to 0} |D_\epsilon(v)|\leq \limsup_{\epsilon\to 0}|v|^3\frac{M^{(3)}_{Z_\epsilon}}{3!\sigma_\epsilon^6}={\frac{1}{48}u^6L_2}.\end{equation*}

On the other hand, by (4.12) and (4.13) we have

\begin{equation*}\liminf_{\epsilon\to 0} |D_\epsilon(v)|=\liminf_{\epsilon\to 0} \big|\varphi_\epsilon(u) - F_\epsilon(v)\big|=\liminf_{\epsilon\to 0} |F_\epsilon(v)|=\frac{1}{4}u^4\sigma_{W}^4 L_1.\end{equation*}

Therefore,

\begin{equation*}{\frac{1}{48}u^6L_2}\geq\limsup_{\epsilon\to 0} |D_\epsilon(v)|\geq \liminf_{\epsilon\to 0} |D_\epsilon(v)|=\frac{1}{4}\sigma_{W}^4 L_1u^4.\end{equation*}

Since $L_1,L_2>0$ , this is clearly violated for u small enough, providing the desired contradiction and completing the proof.

Corollary 4.1. Suppose the subordinator Lévy measure $Q_{Z}(dz)$ admits a density $Q_{Z}(z)$ , which, for some $E>0$ , satisfies $Q_{Z}(z) > 0$ for all $z \in (0, E]$ . Then the condition (4.6) in Theorem 4.1 is equivalent to

\begin{equation*} \lim_{\epsilon\rightarrow0}\epsilon Q_{Z}(\epsilon) = +\infty.\end{equation*}

Proof. A straightforward application of L’Hôpital’s rule twice gives

\begin{align*} \lim_{\epsilon\rightarrow0} \frac{M^{(2)}_{Z_\epsilon}}{{M^{(1)}_{Z_\epsilon}}^{2}} = \lim_{\epsilon\rightarrow0}\frac{\int_{0}^{\epsilon}z^{2}Q_{Z}(z)dz}{\left(\int_{0}^{\epsilon}zQ_{Z}(z)dz\right)^{2}} = \lim_{\epsilon\rightarrow0} \frac{\epsilon}{2M^{(1)}_{Z_\epsilon}} = \lim_{\epsilon\rightarrow0}\frac{1}{2\epsilon Q_{Z}(\epsilon)},\end{align*}

where the positivity of $Q_Z(z)$ in (0, E] was required to ensure that the derivative of the denominator is non-zero for all $\epsilon$ small enough.
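The limit in Corollary 4.1 is easy to examine numerically. The following sketch (an illustration added here, not part of the original analysis; parameter values are arbitrary) evaluates $M^{(2)}_{Z_\epsilon}/\big(M^{(1)}_{Z_\epsilon}\big)^2$ using the residual moments in closed form via the lower incomplete gamma function. For a gamma subordinator the ratio settles at $1/(2\nu)>0$, so (4.6) fails, while for a tempered stable subordinator it vanishes like $\epsilon^\kappa$, so (4.6) holds.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc

def lower_inc_gamma(a, x):
    # Unnormalised lower incomplete gamma function gamma(a, x).
    return gammainc(a, x) * gamma_fn(a)

def moments_gamma_subordinator(eps, nu, b):
    # M^(n) = (nu / b^n) gamma(n, b*eps) for Q_Z(z) = nu z^(-1) exp(-b z).
    return tuple(nu / b**n * lower_inc_gamma(n, b * eps) for n in (1, 2))

def moments_ts_subordinator(eps, kappa, A, c):
    # M^(n) = A c^(kappa - n) gamma(n - kappa, c*eps)
    # for Q_Z(z) = A z^(-1-kappa) exp(-c z).
    return tuple(A * c**(kappa - n) * lower_inc_gamma(n - kappa, c * eps)
                 for n in (1, 2))

nu, b = 2.0, 1.0              # gamma subordinator parameters (arbitrary)
kappa, A, c = 0.5, 0.4, 0.9   # tempered stable parameters (arbitrary)

ratios_gamma, ratios_ts = [], []
for eps in (1e-2, 1e-4, 1e-6):
    M1, M2 = moments_gamma_subordinator(eps, nu, b)
    ratios_gamma.append(M2 / M1**2)
    M1, M2 = moments_ts_subordinator(eps, kappa, A, c)
    ratios_ts.append(M2 / M1**2)

# Gamma case: ratio -> 1/(2 nu) > 0; TS case: ratio -> 0 like eps^kappa.
print(ratios_gamma[-1], ratios_ts[-1])
```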

5. Bounds on the marginal convergence rate

In addition to the asymptotic convergence of the process to a Brownian motion as in Theorems 4.1 and 4.2, it is possible to compute finite- $\epsilon$ bounds on the marginals' distance from Gaussianity. The following theorem, based on Berry–Esseen-style arguments, gives a general result. It may be observed once again (see Corollary 4.1) that the quantity $\epsilon/M^{(1)}_{Z_\epsilon}$ is of importance in determining the performance of different subordinators $Q_Z$ . In the following section we study specific NVM processes within this framework.

Theorem 5.1. Consider a truncated NVM Lévy process $X_{\epsilon}=(X_{\epsilon}(t))$ , and let the standardised process $Y_\epsilon$ be defined as in Theorem 4.1. Then the Kolmogorov distance $E_\epsilon$ between $Y_{\epsilon}(1)$ and $B\sim{\mathcal N}(0,1)$ satisfies

(5.1) \begin{align}E_\epsilon&\,:\!=\, \sup_{x \in \mathbb{R}}\left|\mathbb{P}\!\left[Y_\epsilon(1) \leq x\right]-\mathbb{P}\!\left[B \leq x\right]\right| \nonumber\\&\leq C \sigma_{W}^{3}\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)\frac{M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}}{\sigma_{\epsilon}^{3}} \end{align}
(5.2) \begin{align}=C \sigma_{W}^{3}\frac{M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}}{\sigma_{\epsilon}^{3}}\big(1+{\mathcal O}\left(\epsilon\right)\big), \qquad\mbox{as}\;\epsilon\to 0, \end{align}

where $C=0.7975 \times 2\sqrt{2/\pi}$ , $\Phi(a, b;\, m)$ is the Kummer confluent hypergeometric function, and with the obvious extension of (4.3) to non-integer moments.

Furthermore, with the same constant C, $E_\epsilon$ may be bounded, for $\epsilon\in(0,1]$ , as

(5.3) \begin{align} E_\epsilon &\leq C\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)\Bigg(\frac{\epsilon}{M^{(1)}_{Z_\epsilon}}\Bigg)^{1/2} \end{align}
(5.4) \begin{align} =C\Bigg(\frac{\epsilon}{M^{(1)}_{Z_\epsilon}}\Bigg)^{1/2}\big(1+{\mathcal O}\left(\epsilon\right)\big), \qquad\mbox{as}\;\epsilon\to 0. \end{align}

Proof. Arguing as in the proof of [Reference Godsill, Riabiz and Kontoyiannis23, Theorem 3.1], which was derived from [Reference Asmussen and Rosinski2, Theorem 2.1], the Kolmogorov distance $E_\epsilon$ between $Y_\epsilon(1)$ and $B\sim{\mathcal N}(0,1)$ is bounded above by

(5.5) \begin{align} E_\epsilon= \sup_{x \in \mathbb{R}}\left|\mathbb{P}\!\left[Y_\epsilon(1) \leq x\right]-\mathbb{P}\!\left[B \leq x\right]\right| \leq 0.7975\sigma_{\epsilon}^{-3}\int_{\mathbb{R}}|x|^{3}Q^{\epsilon}_{X}(dx), \end{align}

where $\sigma_{\epsilon}^{2}$ is the variance of the NVM process. From (2.9) it follows that

(5.6) \begin{equation} \sigma_{\epsilon}^{2} = \mu_{W}^{2}M_{Z_{\epsilon}}^{(2)} + \sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)} \geq \sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)}. \end{equation}

Using Fubini’s theorem, the third absolute moment of the residual process can be expressed as

\begin{align*} \mathcal{S} &\,:\!=\, \int_{\mathbb{R}}|x|^{3}Q^{\epsilon}_{X}(dx)\\ &= \int_{-\infty}^{\infty}|x|^{3}\int_{0}^{\epsilon}\mathcal{N}(x;\,\mu_{W}z, \sigma_{W}^{2}z)Q_{Z}(dz)dx\\ & = \int_{0}^{\epsilon}z^{\frac{3}{2}}\sigma_{W}^{3}2^{\frac{3}{2}}\frac{\Gamma(2)}{\sqrt{\pi}}\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}z\right)Q_{Z}(dz), \end{align*}

and since the Kummer confluent hypergeometric function is increasing for non-negative z [Reference Kummer32], we can bound ${\mathcal S}$ as

\begin{align*} \mathcal{S}& \leq \sigma_{W}^{3}2^{\frac{3}{2}}\frac{\Gamma(2)}{\sqrt{\pi}}\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)\int_{0}^{\epsilon}z^{\frac{3}{2}}Q_{Z}(dz)\\ &= \sigma_{W}^{3}2^{\frac{3}{2}}\frac{\Gamma(2)}{\sqrt{\pi}}\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}. \end{align*}

Substituting this bound into (5.5) and using (5.6), we obtain (5.1). Then, using the expansion

(5.7) \begin{equation} \Phi(a,b;\,z)=\sum_{n=0}^\infty \frac {a^{(n)} z^n} {b^{(n)} n!}, \end{equation}

where $a^{(0)}=1$ and $a^{(n)}=a(a+1)(a+2)\cdots(a+n-1)$ , we obtain

\begin{equation*} \Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)=1+\frac{3\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon+{\mathcal O}\big(\epsilon^2\big), \end{equation*}

from which the asymptotic expansion (5.2) is obtained.

Now, the term $M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}/\sigma_{\epsilon}^{3}$ is bounded using (4.4) and (5.6), for $\epsilon\in(0,1]$ , as

\begin{equation*}\frac{M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}}{\sigma_{\epsilon}^{3}}\leq \frac{\epsilon^{1/2}M^{(1)}_{Z_\epsilon}}{\sigma_{W}^{3}{M^{(1)}_{Z_\epsilon}}^{3/2}}=\frac{\epsilon^{1/2}}{\sigma_{W}^{3}{M^{(1)}_{Z_\epsilon}}^{1/2}},\end{equation*}

which yields (5.3), and applying (5.7) once again leads to (5.4).
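The quantities in Theorem 5.1 are directly computable. The sketch below (an added illustration, not part of the paper) implements the bound (5.3) using SciPy's `hyp1f1` for the Kummer function, and numerically verifies the first-order expansion $\Phi\big({-}\frac{3}{2},\frac{1}{2};\,{-}x\big)=1+3x+{\mathcal O}(x^2)$ that underlies (5.2) and (5.4).

```python
import numpy as np
from scipy.special import hyp1f1

C = 0.7975 * 2.0 * np.sqrt(2.0 / np.pi)

def kolmogorov_bound(eps, M1, mu_W, sigma_W):
    # Right-hand side of (5.3):
    # C * Phi(-3/2, 1/2; -mu_W^2 eps / (2 sigma_W^2)) * sqrt(eps / M1).
    Phi_eps = hyp1f1(-1.5, 0.5, -mu_W**2 * eps / (2.0 * sigma_W**2))
    return C * Phi_eps * np.sqrt(eps / M1)

# First-order check of (5.7): Phi(-3/2, 1/2; -x) = 1 + 3x + O(x^2).
x = 1e-4
expansion_error = abs(hyp1f1(-1.5, 0.5, -x) - (1.0 + 3.0 * x))

# The bound shrinks as the residual first moment M1 grows relative to eps.
b_small = kolmogorov_bound(1e-4, M1=1e-2, mu_W=1.0, sigma_W=2.0)
b_large = kolmogorov_bound(1e-4, M1=1.0, mu_W=1.0, sigma_W=2.0)
print(expansion_error, b_small, b_large)
```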

6. Examples

In this section, the validity of the conditions in Theorems 4.1 and 4.2 on the Gaussian convergence of the residual process is examined for several important cases of NVM Lévy processes. Simulation results validating the corresponding conclusions are also shown, and explicit bounds on the rate of convergence to the Gaussian are derived in some special cases, using the general framework of Theorem 5.1.

6.1. Normal-gamma process

The subordinator of the normal-gamma (NG) process is a gamma process, with parameters $\nu, \gamma>0$ and with Lévy density

\begin{equation*} Q_{Z}(z) = \nu z^{-1} \exp\!\left({-}\frac{1}{2}\gamma^{2}z\right), \quad z>0. \end{equation*}

Here, in view of Corollary 4.1,

\begin{align*} \lim_{\epsilon\rightarrow0}\frac{M_{Z_\epsilon}^{(2)}}{M_{Z_\epsilon}^{(1)^2}}= \lim_{\epsilon\rightarrow0}\frac{1}{2\epsilon Q_{Z}(\epsilon)} = \frac{1}{2\nu} > 0,\end{align*}

and also,

\begin{align*} \lim_{\epsilon\to0}\frac{M_{Z_\epsilon}^{(3)}}{\sigma_\epsilon^6}= \frac{1}{3\nu^{2}\sigma_{W}^{6}} >0.\end{align*}

Therefore, $L_1$ and $L_2$ in Theorem 4.2 are both non-zero, so we expect the residuals of the NG process not to be approximately normally distributed.

Furthermore, since $M^{(n)}_{Z_\epsilon}=\frac{\nu}{b^n}\gamma(n,b\epsilon)$ , where $b=\gamma^{2}/2$ , we have from (5.1) that

\begin{align*} E_\epsilon \leq C \sigma_{W}^{3}\Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right)\frac{\frac{\nu}{b^{3/2}}\gamma(3/2,b\epsilon)}{\Big(\mu_W^2\frac{\nu}{b^2}\gamma(2,b\epsilon)+\sigma_W^2\frac{\nu}{b}\gamma(1,b\epsilon)\Big)^{3/2}}={\mathcal O}(1), \qquad\mbox{as}\;\epsilon\to 0, \end{align*}

where we have obtained the asymptotic behaviour using $\gamma(s, x) = x^s\,\sum_{k=0}^\infty \frac{({-}x)^k}{k!(s+k)}$ (for positive, real s and x) and (5.7). Hence, as expected, the bound on the distance from Gaussianity, $E_\epsilon$ , does not tend to zero as $\epsilon\to 0$ .
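The ${\mathcal O}(1)$ behaviour is easy to confirm numerically. The sketch below (an added illustration) evaluates the bound above with the parameter values $\mu_W=1$, $\sigma_W=\nu=2$, $\gamma=\sqrt{2}$ used in this section; as $\epsilon$ decreases, the bound plateaus near $2C/(3\sqrt{\nu})\approx 0.6$ rather than vanishing.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc, hyp1f1

mu_W, sigma_W, nu, gam = 1.0, 2.0, 2.0, np.sqrt(2.0)
b = gam**2 / 2.0
C = 0.7975 * 2.0 * np.sqrt(2.0 / np.pi)

def lig(a, x):
    # Unnormalised lower incomplete gamma function gamma(a, x).
    return gammainc(a, x) * gamma_fn(a)

def ng_bound(eps):
    # The bound on E_eps for the NG residual, using M^(n) = (nu/b^n) gamma(n, b eps).
    M1 = nu / b * lig(1.0, b * eps)
    M2 = nu / b**2 * lig(2.0, b * eps)
    M32 = nu / b**1.5 * lig(1.5, b * eps)
    Phi_eps = hyp1f1(-1.5, 0.5, -mu_W**2 * eps / (2.0 * sigma_W**2))
    return C * sigma_W**3 * Phi_eps * M32 / (mu_W**2 * M2 + sigma_W**2 * M1)**1.5

vals = [ng_bound(e) for e in (1e-4, 1e-6, 1e-8)]
plateau_ratio = vals[-1] / vals[0]
print(vals, plateau_ratio)  # the bound settles near 2C/(3 sqrt(nu)) ~ 0.6
```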

In order to verify this empirically, we generate a random sample of $M=10^4$ residual NG paths with parameters $\mu=0$ , $\mu_W=1$ , $\sigma_W=\nu=2$ , and $\gamma=\sqrt{2}$ , and compare the empirical distribution of the values of the residual at time $t=1$ with a standard normal distribution by standardising the residual values to have zero mean and unit variance. As expected, the resulting histograms are not approximately Gaussian. Figures 4 and 5 show that the distribution of the residual (with truncation level $\epsilon=10^{-6}$ ) is in fact leptokurtic and heavier-tailed than the standard normal. Further simulations confirmed this empirical observation even for smaller truncation levels $\epsilon$ .

Figure 4. Histogram of $M=10^5$ NG residual path values at $t=1$ . The smooth curve represents the standard normal density.

Figure 5. Q–Q plot of $M=10^4$ NG residual path values at $t=1$ .
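The leptokurtosis seen in Figures 4 and 5 can also be quantified analytically via the excess kurtosis $\kappa_4/\kappa_2^2$ of $Y_\epsilon(1)$, where the cumulants of the residual follow from the NVM mixture as $\kappa_2=\mu_W^2M^{(2)}_{Z_\epsilon}+\sigma_W^2M^{(1)}_{Z_\epsilon}$ and $\kappa_4=\mu_W^4M^{(4)}_{Z_\epsilon}+6\mu_W^2\sigma_W^2M^{(3)}_{Z_\epsilon}+3\sigma_W^4M^{(2)}_{Z_\epsilon}$ (these cumulant formulas are an added illustration, not taken from the text). For the NG residual the excess kurtosis tends to $3/(2\nu)$, consistent with the non-Gaussian histograms:

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc

mu_W, sigma_W, nu, gam = 1.0, 2.0, 2.0, np.sqrt(2.0)
b = gam**2 / 2.0

def M(n, eps):
    # Residual moments of the gamma subordinator: M^(n) = (nu/b^n) gamma(n, b eps).
    return nu / b**n * gammainc(n, b * eps) * gamma_fn(n)

def excess_kurtosis(eps):
    # kappa_2 and kappa_4 of X_eps(1) via the NVM mixture; Y_eps(1) is the
    # standardised version, so its excess kurtosis is kappa_4 / kappa_2^2.
    k2 = mu_W**2 * M(2, eps) + sigma_W**2 * M(1, eps)
    k4 = (mu_W**4 * M(4, eps) + 6 * mu_W**2 * sigma_W**2 * M(3, eps)
          + 3 * sigma_W**4 * M(2, eps))
    return k4 / k2**2

kurt = excess_kurtosis(1e-6)
print(kurt)  # stays near 3/(2 nu) = 0.75: the residual is not Gaussian
```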

6.2. Normal tempered stable process

The subordinator for the normal tempered stable (NTS) process is the tempered stable (TS) process TS $(\kappa, \delta, \gamma)$ , for $ \kappa \in (0,1), \ \delta > 0, \ \gamma \geq 0$ , which has a Lévy density

\begin{equation*} Q_{Z}(z) = Az^{-1-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}z\right), \quad z > 0, \end{equation*}

where $A = \delta\kappa 2^{\kappa}\Gamma^{-1}(1-\kappa)$ and $\Gamma^{-1}(\cdot)$ is the reciprocal of the gamma function. Here,

\begin{equation*} \lim_{\epsilon\rightarrow 0}\epsilon Q_{Z}(\epsilon) = \lim_{\epsilon\rightarrow 0}A\epsilon^{-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}\epsilon\right) = +\infty, \end{equation*}

since $\kappa\in(0,1)$ . Therefore, in view of Corollary 4.1 and Theorem 4.1, the residuals are expected to be approximately Gaussian. Moreover, in this case we can derive a bound on the corresponding marginal convergence rate.

Lemma 6.1. For $\epsilon\in(0,1)$ , let $(Y_{\epsilon}(t))$ denote the standardised truncated process associated to an NVM process subordinated to the residual TS process ${\textrm{TS}}(\kappa, \delta, \gamma)$ . If $B\sim\mathcal{N}(0,1)$ , then the Kolmogorov distance $E_\epsilon$ between $Y_{\epsilon}(1)$ and B satisfies

(6.1) \begin{align} E_{\epsilon} \leq \frac{0.7975\times 2^{\frac{3}{2}}\sqrt{\Gamma(1-\kappa)}}{\sqrt{\delta\kappa\pi\gamma}}\times \Phi_{\epsilon}\gamma\!\left(1-\kappa,\frac{1}{2}\epsilon\gamma^{\frac{1}{\kappa}}\right)^{-\frac{3}{2}} \gamma\!\left(\frac{3}{2}-\kappa, \frac{1}{2}\gamma^{\frac{1}{\kappa}}\epsilon\right), \end{align}

where

\begin{align*} \Phi_{\epsilon} = \Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right), \end{align*}

and $\Phi(a, b;\, m), \gamma(s,x)$ are the Kummer confluent hypergeometric function and the incomplete lower gamma function, respectively. Furthermore, as $\epsilon\rightarrow 0$ we have

(6.2) \begin{equation} E_{\epsilon} \leq \frac{0.7975\times 2^{\frac{3}{2}-\frac{\kappa}{2}}(1-\kappa)^\frac{3}{2}\sqrt{\Gamma(1-\kappa)}}{\big(\frac{3}{2}-\kappa\big)\sqrt{\delta\kappa\pi}}\epsilon^{\frac{\kappa}{2}}+\mathcal{O}\big(\epsilon^{1+\frac{\kappa}{2}}\big). \end{equation}

Proof. From Theorem 5.1 we obtain $E_{\epsilon} \leq C \sigma_{W}^{3}\Phi_{\epsilon}M^{\big(\frac{3}{2}\big)}_{Z_\epsilon}/\sigma_{\epsilon}^{3}$ , and from (2.9) it follows that

\begin{equation*} \sigma_{\epsilon}^{2} = \mu_{W}^{2}M_{Z_{\epsilon}}^{(2)} + \sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)} \geq \sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)}, \end{equation*}

where the residual first moment is given by

\begin{equation*} M^{(1)}_{Z_{\epsilon}} = \int_{0}^{\epsilon}zAz^{-1-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}z\right)dz=A \gamma^{\frac{\kappa-1}{\kappa}}2^{1-\kappa}\gamma\!\left(1-\kappa,\frac{1}{2}\epsilon\gamma^{\frac{1}{\kappa}}\right). \end{equation*}

To find $M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)}$ , substitute the Lévy density of the TS process described earlier,

\begin{align*} M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)} &= A\int_{0}^{\epsilon}z^{\frac{1}{2}-\kappa}e^{-\frac{1}{2}\gamma^{\frac{1}{\kappa}}z}dz =A\!\left(\frac{1}{2}\gamma^{\frac{1}{\kappa}}\right)^{\kappa - \frac{3}{2}}\gamma\!\left(\frac{3}{2}-\kappa, \frac{1}{2}\gamma^{\frac{1}{\kappa}}\epsilon\right). \end{align*}

Substituting this in (5.5) and noting that $A = \delta \kappa 2^{\kappa}\Gamma^{-1}(1-\kappa)$ and $C = 0.7975 \times 2\sqrt{2/\pi}$ yields

\begin{align*} E_{\epsilon} \leq 0.7975\times \frac{2^{\frac{3}{2}}\sqrt{\Gamma(1-\kappa)}}{\sqrt{\delta\kappa\pi\gamma}}\Phi_{\epsilon}\gamma\!\left(1-\kappa,\frac{1}{2}\epsilon\gamma^{\frac{1}{\kappa}}\right)^{-\frac{3}{2}}\gamma\!\left(\frac{3}{2}-\kappa, \frac{1}{2}\gamma^{\frac{1}{\kappa}}\epsilon\right), \end{align*}

as claimed. Finally, recalling the series expansion for $\gamma(s,x)$ from the previous section, the asymptotic expression in (5.2) leads to (6.2).
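The rate $\epsilon^{\kappa/2}$ in (6.2) can be confirmed numerically. The sketch below (an added illustration) evaluates the right-hand side of (6.1) with the parameter values used in this section, checking that decreasing $\epsilon$ by a factor of $10^4$ shrinks the bound by roughly $(10^{-4})^{\kappa/2}$.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc, hyp1f1

mu_W, sigma_W = 1.0, 2.0
kappa, delta, gam = 0.5, 1.0, 1.35
c = 0.5 * gam**(1.0 / kappa)   # exponential rate in the TS Levy density

def lig(a, x):
    # Unnormalised lower incomplete gamma function gamma(a, x).
    return gammainc(a, x) * gamma_fn(a)

def nts_bound(eps):
    # Right-hand side of (6.1).
    pref = 0.7975 * 2**1.5 * np.sqrt(gamma_fn(1.0 - kappa)) \
           / np.sqrt(delta * kappa * np.pi * gam)
    Phi_eps = hyp1f1(-1.5, 0.5, -mu_W**2 * eps / (2.0 * sigma_W**2))
    return pref * Phi_eps * lig(1.0 - kappa, c * eps)**(-1.5) \
           * lig(1.5 - kappa, c * eps)

decay = nts_bound(1e-6) / nts_bound(1e-2)  # should be close to (1e-4)^(kappa/2) = 0.1
print(nts_bound(1e-6), decay)
```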

Next we examine the empirical accuracy of the Gaussian approximation in this case. Figures 6 and 7 show the empirical distribution of the residual NTS process at $t=1$ , with parameter values $\mu=0$ , $\mu_W=\delta=1$ , $\sigma_W=2$ , $\kappa=1/2$ , $\gamma=1.35$ , and truncation level $\epsilon=10^{-6}$ .

Figure 6. Histogram of $M = 10^5$ NTS residual path values at $t=1$ . The smooth curve represents the standard normal density.

Figure 7. Q–Q plot of $M = 10^4$ NTS residual path values at $t=1$ .
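In contrast with the NG case, the excess kurtosis $\kappa_4/\kappa_2^2$ of the standardised NTS residual vanishes like $\epsilon^\kappa$, in agreement with the Gaussian fit seen in the figures. The sketch below (an added illustration; the cumulant formulas follow from the NVM mixture) checks this with the parameter values of this section:

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc

mu_W, sigma_W = 1.0, 2.0
kappa, delta, gam = 0.5, 1.0, 1.35
A = delta * kappa * 2**kappa / gamma_fn(1.0 - kappa)
c = 0.5 * gam**(1.0 / kappa)

def M(n, eps):
    # Residual TS moments: M^(n) = A c^(kappa - n) gamma(n - kappa, c eps).
    return A * c**(kappa - n) * gammainc(n - kappa, c * eps) * gamma_fn(n - kappa)

def excess_kurtosis(eps):
    # Excess kurtosis of the standardised residual Y_eps(1).
    k2 = mu_W**2 * M(2, eps) + sigma_W**2 * M(1, eps)
    k4 = (mu_W**4 * M(4, eps) + 6 * mu_W**2 * sigma_W**2 * M(3, eps)
          + 3 * sigma_W**4 * M(2, eps))
    return k4 / k2**2

kurt = excess_kurtosis(1e-6)
ratio = kurt / excess_kurtosis(1e-2)   # ~ (1e-4)^kappa: kurtosis decays like eps^kappa
print(kurt, ratio)
```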

With the same parameter values, Figure 8 shows the behaviour of the bound in (6.1) and the first term in the asymptotic bound (6.2).

Figure 8. Plot of the finite- $\epsilon$ bound in (6.1) and the first term in the asymptotic bound (6.2) for the approximation error $E_\epsilon$ in Lemma 6.1.

6.3. Generalised hyperbolic process

Finally, we consider the general class of generalised hyperbolic (GH) processes. The subordinator in this case is the generalised inverse Gaussian (GIG) process GIG( $\lambda, \delta, \gamma$ ), with constraints on parameter values as detailed in [Reference Godsill and Kndap22], and with Lévy density given by [Reference Godsill and Kndap22]

\begin{equation*} Q_{Z}(z) = \frac{\exp \!\left({-}z\gamma^{2}/2\right)}{z} \left[\max(0, \lambda)+\frac{2}{\pi^{2}}\int_{0}^{\infty}\frac{1}{y\big|H_{|\lambda|}(y)\big|^{2}} \exp \Big( -\frac{zy^{2}}{2\delta^{2}}\Big) dy\right]\mathbb{1}(z > 0), \end{equation*}

where $H_{v}(z)$ is the Hankel function of real order v. A direct verification of the sufficient condition in Corollary 4.1 is readily obtained as

\begin{align*} \lim_{\epsilon\rightarrow0}\epsilon Q_{Z}(\epsilon) &= \lim_{\epsilon\rightarrow 0}\exp\Big({-}\frac{\gamma^{2}}{2}\epsilon\Big)\max(0,\lambda) + \frac{2}{\pi^{2}}\lim_{\epsilon\rightarrow0}\int_{0}^{\infty}\frac{1}{y\big|H_{|\lambda|}(y)\big|^{2}} \exp\Big({-}\frac{\epsilon\gamma^{2}}{2} -\frac{\epsilon y^{2}}{2\delta^{2}}\Big) dy \\ &= \max(0,\lambda) + \frac{2}{\pi^{2}}\int_{0}^{\infty}\frac{1}{y\big|H_{|\lambda|}(y)\big|^{2}}dy\\ &= +\infty, \end{align*}

since $y\big|H_{\nu}(y)\big|^{2}$ remains bounded for $y\in [z_1,\infty)$ , so that the final integral diverges, where

\begin{equation*}z_1 = \left(\frac{ 2^{1-2\nu}\pi}{\Gamma^2(\nu)}\right)^{1/(1-2\nu)};\end{equation*}

see e.g. [Reference Godsill and Kndap22, Theorem 2] and [Reference Watson45]. Once again, in view of Corollary 4.1 and Theorem 4.1, the residuals are expected to be approximately Gaussian. Indeed, in this case we can derive the following bound on the corresponding marginal convergence rate.

Lemma 6.2. For $\epsilon\in(0,1)$ , let $Y_{\epsilon}(t)$ denote the standardised truncated process associated to an NVM process subordinated to the residual ${\textrm{GIG}}(\lambda, \delta, \gamma)$ process. If $B\sim\mathcal{N}(0,1)$ , then for any $z_0\in(0,\infty)$ , the Kolmogorov distance $E_\epsilon$ between $Y_{\epsilon}(1)$ and B can be bounded as

(6.3) \begin{align} E_{\epsilon} &\leq \frac{{0.7975\Phi_\epsilon\gamma^{3/2}}}{{\textrm{erf}}\left(\gamma\frac{\sqrt{\epsilon}}{\sqrt{2}}\right)^{\frac{3}{2}}} \biggl( \frac{2\max(0,\lambda)}{{\tilde{\pi}}(b\delta)^{3/2}}\gamma\!\left(\frac{3}{2}, b\epsilon\right) + \frac{2^{|\lambda|{+1}}\delta^{2|\lambda|-\frac{3}{2}}\Gamma(|\lambda|)} {{\pi^{2}\tilde{\pi}} H_{0}z_{0}^{2|\lambda|-1}b^{3/2-|\lambda|}}\times \gamma\!\left(\frac{3}{2}-|\lambda|, b\epsilon\right) \nonumber\\ &\qquad \qquad\qquad\qquad \qquad\qquad + \frac{1}{{\tilde{\pi}^{4}}H_{0}b\sqrt{\delta}}\gamma\!\left(1, b\epsilon\right)\biggr),\qquad\mbox{for}\;|\lambda| \leq \frac{1}{2}, \nonumber\\[4pt] E_{\epsilon} & \leq \frac{{0.7975\Phi_\epsilon\left(\gamma H_0\right)^{3/2}}}{{\textrm{erfc}}\left( \frac{z_{0}}{\delta\sqrt{2}}\sqrt{\epsilon}\right)^{\frac{3}{2}}{\textrm{erf}}\left(\gamma\frac{\sqrt{\epsilon}}{\sqrt{2}}\right)^{\frac{3}{2}}} \biggl( {\frac{\max(0,\lambda)\pi}{(b\delta)^{\frac{3}{2}}}} \times \gamma\!\left(\frac{3}{2}, b\epsilon\right) \nonumber\\[4pt] &\qquad \qquad\qquad\qquad \qquad\qquad + {\frac{\tilde{\pi}}{b\sqrt{\delta}}} \times \gamma\!\left(1, b\epsilon\right)\biggr), \qquad \mbox{for}\;|\lambda| > \frac{1}{2}, \end{align}

with $b = \frac{\gamma^{2}}{2}$ , $\tilde{\pi} = \sqrt{\frac{\pi}{2}}$ , $H_{0} = z_{0}\big|H_{|\lambda|}(z_{0})\big|^{2},$ and

\begin{align*} \Phi_{\epsilon} = \Phi\!\left({-}\frac{3}{2}, \frac{1}{2};\, -\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}\epsilon\right), \end{align*}

where, using the standard definitions ${{\textrm{erf}}} (x) = \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,\textrm{dt}$ and ${{\textrm{erfc}}} (x)=1-{{\textrm{erf}}} (x)$ , we denote by $\Phi(a, b;\, m)$ , $\gamma(s,x)$ , ${\textrm{erf}}(x)$ , and ${\textrm{erfc}}(x)$ the Kummer confluent hypergeometric function, the incomplete lower gamma function, the error function, and the complementary error function, respectively. Furthermore, as $\epsilon\rightarrow 0$ we have

(6.4) \begin{align} E_{\epsilon} & \leq {\frac{0.7975}{\tilde{\pi}^{\frac{5}{2}}bH_{0}\sqrt{\delta}}}\epsilon^{\frac{1}{4}}+\mathcal{O}\big(\epsilon^{\frac{5}{4}}\big), \qquad \mbox{for}\;|\lambda| \leq \frac{1}{2}, \nonumber \\ E_{\epsilon} &\leq {\frac{0.7975 \tilde{\pi}^{\frac{5}{2}}H_{0}^{\frac{3}{2}}}{b\sqrt{\delta}}}\epsilon^{\frac{1}{4}} + \mathcal{O}\big(\epsilon^{\frac{5}{4}}\big), \qquad \mbox{for}\;|\lambda| > \frac{1}{2}. \end{align}

Proof. Recalling the definition of the Jaeger integral as

\begin{equation*}J(z)=\int_0^\infty\frac{e^{-\frac{x^2z}{2\delta^2}}}{x|H_{|\lambda|}(x)|^2}dx,\end{equation*}

we have from the bounds obtained in Appendix A, for $|\lambda|\leq 1/2$ ,

\begin{equation*} J(z) \geq {\delta{\left(\frac{\pi}{2}\right)}^{3/2}z^{-\frac{1}{2}}}, \end{equation*}

and for $|\lambda|>1/2$ ,

\begin{equation*} J(z) \geq \frac{\delta^{2|\lambda|}2^{|\lambda|-1}}{H_{0}z_{0}^{2|\lambda|-1}}z^{-|\lambda|}\gamma\!\left(|\lambda|, \frac{z_{0}^{2}}{2\delta^{2}}z\right)+\frac{\delta}{H_{0}\sqrt{2}}z^{-\frac{1}{2}}\Gamma\!\left(\frac{1}{2},\frac{z_{0}^{2}}{2\delta^{2}}z\right). \end{equation*}

Noting that the variance of the GH process satisfies $\sigma_{\epsilon}^{2} = \mu_{W}^{2}M^{(2)}_{Z_{\epsilon}} + \sigma_{W}^{2}M^{(1)}_{Z_{\epsilon}} \geq \sigma_{W}^{2}M^{(1)}_{Z_{\epsilon}}$ , for $|\lambda|\leq \frac{1}{2}$ we obtain

\begin{align*} M^{(1)}_{Z_{\epsilon}} &= \int_{0}^{\epsilon}\exp \!\left({-}\frac{\gamma^{2}z}{2}\right)\left[\max(0,\lambda) + \frac{2}{\pi^{2}}J(z)\right]dz\\ &\geq \frac{2}{\pi^{2}}\int_{0}^{\epsilon}\exp \!\left({-}\frac{\gamma^{2}z}{2}\right)J(z)dz\\ &\geq {\frac{\delta}{\gamma}{\textrm{erf}}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right)}, \end{align*}

and similarly for $|\lambda|>\frac{1}{2}$ we obtain

\begin{align*} M^{(1)}_{Z_{\epsilon}} &\geq \frac{2}{\pi^{2}}\int_{0}^{\epsilon}e^{-\frac{z\gamma^{2}}{2}}\left[\frac{\delta^{2|\lambda|}2^{|\lambda|-1}}{H_{0}z_{0}^{2|\lambda|-1}}z^{-|\lambda|}\gamma\!\left(|\lambda|, \frac{z_{0}^{2}}{2\delta^{2}}z\right)+\frac{\delta}{H_{0}\sqrt{2}}z^{-\frac{1}{2}}\Gamma\!\left(\frac{1}{2},\frac{z_{0}^{2}}{2\delta^{2}}z\right)\right]dz\nonumber\\ &\geq \frac{2}{\pi^{2}}\int_{0}^{\epsilon}e^{-\frac{z\gamma^{2}}{2}}\frac{\delta}{H_{0}\sqrt{2}}z^{-\frac{1}{2}}\Gamma\!\left(\frac{1}{2},\frac{z_{0}^{2}}{2\delta^{2}}z\right)dz\\ &\geq \frac{2\delta}{H_{0}\pi\gamma}{\textrm{erfc}}\left(\frac{z_{0}}{\delta\sqrt{2}}\sqrt{\epsilon}\right){\textrm{erf}}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right). \end{align*}

Therefore, we have the following bound on the variance:

\begin{align*} \sigma_{\epsilon}^{2} \geq \begin{cases} {\frac{\sigma_{W}^{2}\delta}{\gamma}}{\textrm{erf}}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right), &\mbox{for}\;|\lambda| \leq \frac{1}{2},\\[4pt] \frac{\sigma_{W}^{2}\delta}{H_{0}\tilde{\pi}^{2}\gamma}{\textrm{erfc}}\left( \frac{z_{0}}{\delta\sqrt{2}}\sqrt{\epsilon}\right) {\textrm{erf}}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right), &\mbox{for}\;|\lambda| > \frac{1}{2}. \end{cases} \end{align*}

Finding $M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)}$ in line with Theorem 5.1, we have

\begin{align*} M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)} &= \int_{0}^{\epsilon}z^{\frac{3}{2}}Q_{Z}(dz)\\ &=\max(0,\lambda) \int_{0}^{\epsilon}z^{\frac{1}{2}}\exp\!\left({-}\frac{\gamma^{2}}{2}z\right)dz +\frac{2}{\pi^{2}}\int_{0}^{\epsilon}z^{\frac{1}{2}}\exp\!\left({-}\frac{\gamma^{2}}{2}z\right)J(z)dz \\ &= \mathcal{S}_{1} + \mathcal{S}_{2}. \end{align*}

Writing $b = \frac{\gamma^{2}}{2}$ , we have

\begin{equation*} \mathcal{S}_{1} = \frac{\max(0,\lambda)}{b\sqrt{b}}\gamma\!\left(\frac{3}{2}, b\epsilon\right); \end{equation*}

for $|\lambda| > \frac{1}{2}$ we have ${J(z)\leq \delta{\left(\frac{\pi}{2}\right)}^{3/2}z^{-\frac{1}{2}}}$ and

\begin{equation*} \mathcal{S}_{2}\leq \frac{2}{\pi^{2}}\delta \!\left(\frac{\pi}{2}\right)^{\frac{3}{2}}\int_{0}^{\epsilon}\exp\!\left({-}bz\right)dz = \frac{\delta}{\sqrt{2\pi} b}\gamma\!\left(1, b\epsilon\right), \end{equation*}

and for $|\lambda| \leq \frac{1}{2}$ ,

\begin{align*} {\mathcal S}_{2} &\leq \frac{2^{|\lambda|}\delta^{2|\lambda|}}{\pi^{2}H_{0}z_{0}^{2|\lambda|-1}}\int_{0}^{\epsilon}z^{\frac{1}{2}-|\lambda|}e^{-bz}\gamma\!\left(|\lambda|, \frac{z_{0}^{2}}{2\delta^{2}}z\right)dz + \frac{\sqrt{2}\delta}{\pi^{2}H_{0}}\int_{0}^{\epsilon}e^{-bz}\Gamma\!\left(\frac{1}{2}, \frac{z_{0}^{2}}{2\delta^{2}}z\right)dz\\ &\leq \frac{2^{|\lambda|}\delta^{2|\lambda|}\Gamma(|\lambda|)}{\pi^{2}H_{0}z_{0}^{2|\lambda|-1}}\int_{0}^{\epsilon}z^{\frac{1}{2}-|\lambda|}e^{-bz}dz + \frac{\delta}{\pi\tilde{\pi}H_{0}}\int_{0}^{\epsilon}e^{-bz}dz\\ &\leq \frac{2^{|\lambda|}\delta^{2|\lambda|}\Gamma(|\lambda|)}{\pi^{2}H_{0}z_{0}^{2|\lambda|-1}b^{\frac{3}{2}-|\lambda|}}\gamma\!\left(\frac{3}{2}-|\lambda|, b\epsilon\right) + \frac{\delta}{\pi\tilde{\pi}H_{0}b}\gamma\!\left(1, b\epsilon\right). \end{align*}

Combining the above bounds, for $|\lambda|\leq 1/2$ ,

\begin{equation*} M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)} \leq \frac{\max(0,\lambda)}{b\sqrt{b}}\gamma\!\left(\frac{3}{2}, b\epsilon\right) + \frac{2^{|\lambda|}\delta^{2|\lambda|}\Gamma(|\lambda|)}{\pi^{2}H_{0}z_{0}^{2|\lambda|-1}b^{\frac{3}{2}-|\lambda|}}\gamma\!\left(\frac{3}{2}-|\lambda|, b\epsilon\right) + \frac{\delta}{\pi\tilde{\pi}H_{0}b}\gamma\!\left(1, b\epsilon\right), \end{equation*}

and for $|\lambda|>1/2,$

\begin{equation*} M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)}\leq \frac{\max(0,\lambda)}{b\sqrt{b}}\gamma\!\left(\frac{3}{2}, b\epsilon\right)+ {\frac{\delta}{\sqrt{2\pi}b}}\gamma\!\left(1, b\epsilon\right). \end{equation*}

Finally, substituting these bounds for $M_{Z_{\epsilon}}^{\left(\frac{3}{2}\right)}$ into (5.1), we obtain the bounds as stated in (6.3). The series expansions of the gamma and hypergeometric functions then lead to the asymptotic expansion (6.4).

Once again, we examine the validity of the Gaussian approximation empirically. Figures 9 and 10 show the empirical distribution of the residual GH process at time $t = 1$ , with parameter values $\mu = 0$ , $\mu_W =1$ , $\sigma_W = 2$ , $\delta=1.3$ , $\gamma=\sqrt{2}$ , $\lambda=0.2$ , and truncation level $\epsilon=10^{-6}$ .

Figure 9. Histogram of $M = 10^5$ GH residual path values at $t=1$ . The smooth curve represents the standard normal density.

Figure 10. Q–Q plot of $M = 10^4$ GH residual path values at $t=1$ .

With the same parameter values, Figure 11 shows the behaviour of the bound in (6.3) and the first term in the asymptotic bound (6.4).

Figure 11. Plot of the finite- $\epsilon$ bound in (6.3) and the first term in the asymptotic bound (6.4) for the approximation error $E_\epsilon$ in Lemma 6.2.
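The divergence $\epsilon Q_{Z}(\epsilon)\to\infty$ verified above can also be checked by brute-force quadrature. The sketch below (an added illustration; it evaluates the Hankel-function integral numerically rather than via the bounds of [Reference Godsill and Kndap22]) computes $\epsilon Q_{Z}(\epsilon)$ for the parameter values of this section and confirms its rapid growth as $\epsilon\downarrow 0$:

```python
import numpy as np
from scipy.special import hankel1
from scipy.integrate import quad

lam, delta, gam = 0.2, 1.3, np.sqrt(2.0)  # GH parameter values from this section

def eps_Q(eps):
    # eps * Q_Z(eps) for the GIG subordinator; |H_nu(y)|^2 is smooth and
    # non-oscillatory, so standard adaptive quadrature is adequate.
    integrand = lambda y: np.exp(-eps * y**2 / (2.0 * delta**2)) \
                          / (y * np.abs(hankel1(abs(lam), y))**2)
    J1, _ = quad(integrand, 0.0, 1.0, limit=200)   # integrable singularity at 0
    J2, _ = quad(integrand, 1.0, np.inf, limit=200)
    return np.exp(-0.5 * gam**2 * eps) * (max(0.0, lam)
                                          + 2.0 / np.pi**2 * (J1 + J2))

q2 = eps_Q(1e-2)
growth = eps_Q(1e-6) / q2   # grows roughly like eps^(-1/2): no finite limit
print(q2, growth)
```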

Note that the bounds (6.3) and (6.4) are discontinuous at $|\lambda| = \frac{1}{2}$ . This discrepancy is likely due to the lower bound used for $M_{Z_{\epsilon}}^{(1)}$ in the case $|\lambda| > \frac{1}{2}$ . Although we expect that this bound could be improved, obtaining such refined bounds is beyond the scope of this paper.

7. Linear SDEs

The Lévy state space model [Reference Godsill, Riabiz and Kontoyiannis23] defines a stochastic process having the following dynamics:

\begin{align*} d\boldsymbol{X}(t) = \boldsymbol{A}\boldsymbol{X}(t)dt + \boldsymbol{h}dW(t), \hspace{1cm} \boldsymbol{X}(t) \in \mathbb{R}^{P}, W(t) \in \mathbb{R},\end{align*}

where $\boldsymbol{A}$ is a $P\times P$ matrix, and $\boldsymbol{h}\in\mathbb{R}^P$ . In [Reference Godsill, Riabiz and Kontoyiannis23] (W(t)) is assumed to follow a stable law; here it is taken to be an NVM Lévy process. The solution of the state process takes the form

(7.1) \begin{equation} \boldsymbol{X}(t) = e^{\boldsymbol{A}(t-s)}\boldsymbol{X}(s) + \int_{s}^{t}e^{\boldsymbol{A}(t-u)}\boldsymbol{h}dW(u). \end{equation}

We first present a shot-noise representation of the stochastic integral in (7.1), and then prove the convergence of its small-jump residual to a Gaussian-driven SDE, under appropriate conditions.

7.1. Shot-noise representation of SDE

In order to apply the Lévy state space model to NVM Lévy processes, we first establish their representation as generalised shot-noise series. Theorem 7.1 gives the result for a general integrand.

Theorem 7.1. Let (X(u)) be an NVM process generating the filtration $(\mathcal{F}_{t})$ . Suppose $\boldsymbol{f}_{t} : [0, \infty) \rightarrow \mathbb{R}^{P}$ is an $L_{2}$ deterministic function, and let $\boldsymbol{I}({\textbf{f}_{t}})$ denote the integral

\begin{equation*}\boldsymbol{I}({\boldsymbol{f}_{t}}) = \int_{0}^{T}\boldsymbol{f}_{t}(u)dX(u), \qquad 0\leq t\leq T. \end{equation*}

Then $\boldsymbol{I}({\boldsymbol{f}_{t}})$ admits the series representation

\begin{equation*} \boldsymbol{I}({\boldsymbol{f}_{t}}) = \sum_{i=1}^{\infty}X_i\boldsymbol{f}_{t}(TV_{i})\mathbb{1} \!\left( V_{i} \leq \frac{t}{T}\right), \end{equation*}

where

\begin{align*} X_i=\mu_{W} Z_i+\sigma_{W}\sqrt{Z_i}U_{i}, \end{align*}

and $\mu_{W} \in \mathbb{R}, \ \sigma_{W} \in (0,\infty)$ are the variance-mean mixture parameters, $V_{i} \overset{i.i.d.}{\sim} \mathcal{U}[0,1]$ are normalised jump times, $Z_i$ are the jumps of the subordinator process arranged in non-increasing order, and $U_i \overset{i.i.d.}{\sim} \mathcal{N}(0, 1)$ .

Proof. Arguing as in [Reference Rosinski39, Section 7], we can extend (2.14) to any NVM process defined on $t\in[0, T]$ , with $\tilde{V}_i = TV_i\sim\mathcal{U}[0,T]$ , to obtain, for all $u\in[0,T]$ ,

\begin{align*} X(u) &= \sum_{i=1}^{\infty}\left[\mu_{W}Z_{i}+\sigma_{W}\sqrt{Z_{i}}U_{i}\right]\mathbb{1}(\tilde{V}_{i}\leq u)\\ &= \mu_{W}\sum_{i=1}^{\infty}Z_{i}\mathbb{1}(\tilde{V}_{i}\leq u) + \sigma_{W}\sum_{i=1}^{\infty}\sqrt{Z_{i}}U_{i}\mathbb{1}(\tilde{V}_{i}\leq u)\\ &= \mu_{W}M(u) + \sigma_{W}S(u), \end{align*}

where M(u) is a subordinator Lévy process and S(u) a symmetric Gaussian mixture process. Therefore,

\begin{equation*} dX(u) = \mu_{W}dM(u) + \sigma_{W}dS(u), \end{equation*}

and hence we obtain the following representation for $\boldsymbol{I}({\boldsymbol{f}_{t}})$ :

\begin{equation*} \boldsymbol{I}({\boldsymbol{f}_{t}}) = \mu_{W}\int_{0}^{T}\boldsymbol{f}_{t}(u)dM(u) + \sigma_{W}\int_{0}^{T}\boldsymbol{f}_{t}(u)dS(u). \end{equation*}

From [Reference Barndorff-Nielsen and Shephard3, Corollary 8.2], a stochastic integral with respect to a Lévy subordinator admits the a.s. generalised shot-noise representation

\begin{equation*} \int_{0}^{T}\boldsymbol{f}_{t}(u)dM(u) = \sum_{i=1}^{\infty}Z_{i}\boldsymbol{f}_{t}(\tilde{V}_{i})\mathbb{1}(\tilde{V}_{i} \leq t). \end{equation*}

Similarly, Rosinski proves in [Reference Rosinski38, Section 4] an a.s. representation of stochastic integrals with respect to Lévy processes of type G, that is, symmetric normal variance mixture processes (such as the symmetric Student-t, symmetric $\alpha$ -stable, and Laplace processes), which are the special cases of NVM Lévy processes having $\mu_W=0$ . The representation takes the form

\begin{equation*} \int_{0}^{T}\boldsymbol{f}_{t}(u)dS(u) = \sum_{i=1}^{\infty}\sqrt{Z_{i}}U_{i}\boldsymbol{f}_{t}(\tilde{V}_{i})\mathbb{1}(\tilde{V}_{i} \leq t). \end{equation*}

Combining the last three expressions proves the claimed result.

Applying the result of the theorem to X(t) in (7.1), with $\boldsymbol{f}_{t}(u) = e^{\boldsymbol{A}(t-u)}\boldsymbol{h}\mathbb{1}(s\leq u \leq t),$ yields

\begin{equation*} \boldsymbol{I}\left(\boldsymbol{f}_{t}\right) = \sum_{i=1}^{\infty}\left[\mu_{W}Z_{i}+\sigma_{W}\sqrt{Z_{i}}U_{i}\right]e^{\boldsymbol{A}(t-\tilde{V}_{i})}\boldsymbol{h}{\mathbb{1}(\tilde{V}_i\in (s,t ])} \end{equation*}

with $\tilde{V}_{i} \overset{i.i.d.}{\sim} \mathcal{U}(0,T]$ as before.
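As a numerical illustration of this series (not part of the paper's development), the sketch below simulates the scalar case $P=1$ , $\boldsymbol{f}_{t}(u) = e^{a(t-u)}h$ , taking the subordinator to be tempered stable: its jumps can be generated by inverting the stable tail and thinning each candidate with probability $e^{-\theta z}$ , the standard rejection representation for tempered jumps. All parameter values are hypothetical.

```python
import math
import random

def ts_jumps(A, kappa, theta, T, n_epochs, rng):
    """Tempered-stable subordinator jumps on [0, T]: invert the stable
    tail Lambda([x, inf)) = (A / kappa) x**(-kappa) at the Poisson
    epochs Gamma_i, then thin with probability exp(-theta * z)."""
    jumps, gamma_i = [], 0.0
    for _ in range(n_epochs):
        gamma_i += rng.expovariate(1.0)          # unit-rate Poisson epochs
        z = (gamma_i * kappa / (A * T)) ** (-1.0 / kappa)
        if rng.random() < math.exp(-theta * z):  # exponential tempering
            jumps.append(z)
    return jumps  # non-increasing by construction

def nvm_integral(jumps, mu_w, sigma_w, a, h, s, t, T, rng):
    """Shot-noise approximation of I(f_t) for the scalar SDE case,
    f_t(u) = exp(a * (t - u)) * h, with uniform jump times on (0, T]."""
    total = 0.0
    for z in jumps:
        v = T * rng.random()
        if s < v <= t:
            x = mu_w * z + sigma_w * math.sqrt(z) * rng.gauss(0.0, 1.0)
            total += x * math.exp(a * (t - v)) * h
    return total

rng = random.Random(1)
jumps = ts_jumps(A=1.0, kappa=0.5, theta=1.0, T=1.0, n_epochs=2000, rng=rng)
val = nvm_integral(jumps, mu_w=1.0, sigma_w=2.0, a=-0.5, h=1.0,
                   s=0.0, t=1.0, T=1.0, rng=rng)
print(len(jumps), val)
```

Truncating after a finite number of Poisson epochs is itself an approximation; the residual thereby discarded is exactly the object studied in the next section.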

8. Convergence of residual SDE to a Gaussian SDE

In this section we prove that the residual series of the truncated shot-noise representation of the SDE in the previous section converges to a Brownian-motion-driven SDE as the truncation level $\epsilon\downarrow 0$ . Employing random Poisson truncations of the subordinator jumps $Z_i$ as before, we can write

\begin{equation*} \boldsymbol{X}(t) = e^{\boldsymbol{A}(t-s)}\boldsymbol{X}(s) + \boldsymbol{Z}_{\epsilon}(s,t) + \boldsymbol{R}_{\epsilon}(s,t), \end{equation*}

where ${\boldsymbol{I}(\boldsymbol{f}_{\boldsymbol{t}})} = \boldsymbol{Z}_{\epsilon}(s,t) + \boldsymbol{R}_{\epsilon}(s,t)$ , with

\begin{align*} &\boldsymbol{Z}_{\epsilon}(s,t) = \sum_{i: Z_{i} > \epsilon}\left[\mu_{W}Z_{i}+\sigma_{W}\sqrt{Z_{i}}U_{i}\right]e^{\boldsymbol{A}(t-\tilde{V}_{i})}\boldsymbol{h}{\mathbb{1}(\tilde{V}_i\in (s,t ])},\\ &\boldsymbol{R}_{\epsilon}(s,t) = \sum_{i: Z_{i} \leq \epsilon}\left[\mu_{W}Z_{i}+\sigma_{W}\sqrt{Z_{i}}U_{i}\right]e^{\boldsymbol{A}(t-\tilde{V}_{i})}\boldsymbol{h}{\mathbb{1}(\tilde{V}_i\in (s,t ])}, \end{align*}

and $\tilde{V}_{i} \sim {\mathcal{U}(0,T]} $ . Here, $\boldsymbol{R}_{\epsilon}(s,t)$ is the residual series driven by the small jumps $Z_i \leq \epsilon$ , which will be approximated by a Brownian-driven SDE with matched moments. Theorem 4.1 and the results in Section 6 are the starting point of the proof of this approximation.
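In code, this split amounts to thresholding the simulated jumps and substituting a moment-matched Gaussian for the residual. A minimal scalar sketch ( $P=1$ , so $e^{\boldsymbol{A}(t-u)}\boldsymbol{h} = e^{a(t-u)}h$ ) is given below; it assumes the truncated moments $M^{(1)}_{Z_{\epsilon}}$ and $M^{(2)}_{Z_{\epsilon}}$ are supplied (e.g. from the closed forms in Appendix D), uses the moment formulas of Appendix B, and all parameter values are hypothetical.

```python
import math
import random

def truncated_sde_step(jumps, times, eps, mu_w, sigma_w, a, h,
                       s, t, M1, M2, rng):
    """Scalar truncated update: jumps above eps enter the explicit sum
    Z_eps; the small-jump residual R_eps is replaced by a Gaussian with
    the matched moments of Appendix B.  M1, M2 are the truncated
    subordinator moments M^(1)_{Z_eps}, M^(2)_{Z_eps} (assumed given)."""
    z_eps = sum((mu_w * z + sigma_w * math.sqrt(z) * rng.gauss(0.0, 1.0))
                * math.exp(a * (t - v)) * h
                for z, v in zip(jumps, times) if z > eps and s < v <= t)
    # f_t(u) = exp(a(t-u)) h, so both moment integrals are elementary.
    int_f = h * (math.exp(a * (t - s)) - 1.0) / a
    int_f2 = h * h * (math.exp(2.0 * a * (t - s)) - 1.0) / (2.0 * a)
    mean = mu_w * M1 * int_f
    var = (mu_w ** 2 * M2 + sigma_w ** 2 * M1) * int_f2
    return z_eps + rng.gauss(mean, math.sqrt(var))

rng = random.Random(0)
jumps = [0.5, 0.2, 1e-4, 1e-5]   # toy jump sizes
times = [0.1, 0.4, 0.7, 0.9]     # their (uniform) jump times
x = truncated_sde_step(jumps, times, eps=1e-3, mu_w=1.0, sigma_w=2.0,
                       a=-0.5, h=1.0, s=0.0, t=1.0,
                       M1=1e-2, M2=1e-4, rng=rng)
print(x)
```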

We first present a lemma concerning NVM Lévy processes.

Lemma 8.1. Let $(Y_{\epsilon}(t))$ denote the standardised residual NVM Lévy process as defined in Theorem 4.1, and let (B(t)) be a standard Brownian motion. Then there exists a coupling $(Y_{\epsilon}(t),B(t))$ such that, under the conditions of Theorem 4.1, we have

\begin{align*} \mathbb{E} \!\left[\sup_{t\in[0,T]}\left|Y_{\epsilon}(t)-B(t)\right|^{2}\right] \leq C_{T}\max\big\{S^{1/2}_{\epsilon}, S^{1/3}_{\epsilon}\big\}, \end{align*}

where $C_{T}$ is a constant independent of $\epsilon$ and

\begin{equation*} S_{\epsilon} = \mu_{W}^{4}\frac{M_{Z_{\epsilon}}^{(4)}}{\sigma_{\epsilon}^{4}}+6\mu_{W}^{2}\sigma_{W}^{2}\frac{M_{Z_{\epsilon}}^{(3)}}{\sigma_{\epsilon}^{4}}+ 3\sigma_{W}^{4}\frac{M_{Z_{\epsilon}}^{(2)}}{\sigma_{\epsilon}^{4}}. \end{equation*}
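To see how $S_{\epsilon}$ behaves for a concrete subordinator, the following sketch evaluates the truncated moments $M^{(n)}_{Z_{\epsilon}} = \int_0^{\epsilon} z^n Q_Z(dz)$ by quadrature for a tempered-stable Lévy density $A z^{-1-\kappa}e^{-\theta z}$ and checks numerically that $S_{\epsilon}$ shrinks as $\epsilon$ decreases, consistent with the Gaussian limit. The identification $\sigma_{\epsilon}^{2} = \mu_{W}^{2}M^{(2)}_{Z_{\epsilon}} + \sigma_{W}^{2}M^{(1)}_{Z_{\epsilon}}$ follows the residual variance in Appendix B; all parameter values are illustrative.

```python
import math

def trunc_moment(n, eps, A, kappa, theta, grid=4000):
    """M^(n)_{Z_eps} = int_0^eps z^n A z^(-1-kappa) e^(-theta z) dz,
    by midpoint quadrature on a log-spaced grid (the integrand is
    integrable at 0 for n >= 1 when kappa < 1)."""
    lo, total = eps * 1e-12, 0.0
    for k in range(grid):
        a = lo * (eps / lo) ** (k / grid)
        b = lo * (eps / lo) ** ((k + 1) / grid)
        z = 0.5 * (a + b)
        total += A * z ** (n - 1 - kappa) * math.exp(-theta * z) * (b - a)
    return total

def S_eps(eps, mu_w, sigma_w, A, kappa, theta):
    """S_eps from Lemma 8.1, with sigma_eps^2 = mu_W^2 M^(2) + sigma_W^2 M^(1)
    (the residual variance rate, as in Appendix B)."""
    M = {n: trunc_moment(n, eps, A, kappa, theta) for n in (1, 2, 3, 4)}
    var = mu_w ** 2 * M[2] + sigma_w ** 2 * M[1]
    return (mu_w ** 4 * M[4] + 6.0 * mu_w ** 2 * sigma_w ** 2 * M[3]
            + 3.0 * sigma_w ** 4 * M[2]) / var ** 2

vals = [S_eps(e, 1.0, 2.0, 1.0, 0.5, 1.0) for e in (1e-1, 1e-2, 1e-3)]
print(vals)
```

For this example the small- $\epsilon$ moment asymptotics give $S_{\epsilon} = \mathcal{O}(\epsilon^{\kappa})$ , which the computed values reflect.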

Proof. The process $(Y_{\epsilon}(t))$ is square-integrable, with drift parameter and diffusion coefficient $a=b=0$ . Write $Q_{X}$ for its Lévy measure, and define

\begin{align*} m_{n}(Q_{X})\,:\!=\, \int_{\mathbb{R}^{*}}|x|^{n}Q_{X}(dx), \quad n\geq 1. \end{align*}

The corresponding subordinator has Lévy measure $Q_{Z}$ , satisfying $Q_{Z}(\{z > \epsilon\}) = 0$ . Note that $m_{2}(Q_{X}) = 1$ . Following Theorem 4.1, the subordinator has no drift or diffusion component, so neither does $(Y_{\epsilon}(t))$ . From [Reference Fournier19, Theorem 3.1], if $m_{4}(Q_{X}) < \infty$ , we have directly

\begin{align*} \mathbb{E} \!\left[\sup_{[0,T]}\left|Y_{\epsilon}(t)-B(t)\right|^{2}\right] \leq C_{T}\max\big\{m_{4}(Q_{X})^{1/2}, m_{4}(Q_{X})^{1/3}\big\}, \end{align*}

where $C_{T}$ depends on T only. Finally, writing $\sigma_{\epsilon}^{2}$ for the variance of the NVM residual process, we can express $m_{4}(Q_{X})$ as

\begin{align} m_{4}(Q_{X}) &= \int_{-\infty}^{\infty}(x/\sigma_{\epsilon})^{4}\int_{0}^{\infty}\mathcal{N}\big(dx;\,\mu_{W}z, \sigma_{W}^{2}z\big)Q_{Z}(dz)\nonumber\\ &= \frac{1}{\sigma_{\epsilon}^{4}}\int_{0}^{\infty}Q_{Z}(dz)\int_{-\infty}^{\infty}x^{4}\mathcal{N}\big(dx;\,\mu_{W}z, \sigma_{W}^{2}z\big)\nonumber\\ &= \frac{1}{\sigma_{\epsilon}^{4}}\int_{0}^{\infty}\left[\mu_{W}^{4}z^{4}+6\mu_{W}^{2}\sigma_{W}^{2}z^{3} + 3\sigma_{W}^{4}z^{2}\right]Q_{Z}(dz)\nonumber\\ &= \mu_{W}^{4}\frac{M_{Z_{\epsilon}}^{(4)}}{\sigma_{\epsilon}^{4}}+6\mu_{W}^{2}\sigma_{W}^{2}\frac{M_{Z_{\epsilon}}^{(3)}}{\sigma_{\epsilon}^{4}}+ 3\sigma_{W}^{4}\frac{M_{Z_{\epsilon}}^{(2)}}{\sigma_{\epsilon}^{4}}. \nonumber \end{align}

Substituting this into the earlier bound yields the required result.

Theorem 8.1. Let $(X_{\epsilon}(t))$ denote the residual NVM Lévy process as in Theorem 4.1, and write $(\mathcal{F}_{t})$ for the filtration it generates. Let $\boldsymbol{f}_{t}(u)$ denote some deterministic, $L_{2}$ -integrable function, and define the process $(\boldsymbol{R}_{\epsilon}(t))$ via

\begin{align*} d\boldsymbol{R}_{\epsilon}(u) = \boldsymbol{f}_{t}(u)dX_{\epsilon}(u). \end{align*}

If $(X_{\epsilon}(t))$ satisfies the conditions of Theorem 4.1, then $(\boldsymbol{Y}_{\epsilon}(t))$ , the $\epsilon$ -normalised version of $(\boldsymbol{R}_{\epsilon}(t))$ , satisfies

\begin{align*} \lim_{\epsilon\rightarrow 0}\mathbb{E} \!\left[\sup_{t\in[0,T]}\left\|\boldsymbol{Y}_{\epsilon}(t)-\boldsymbol{B}(t)\right\|_{1}^{2}\right] = 0, \end{align*}

where, centring and normalising $(X_{\epsilon}(t))$ to obtain $(Y_{\epsilon}(t))$ as before, we define

\begin{align*} \boldsymbol{Y}_{\epsilon}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dY_{\epsilon}(u), \end{align*}

and also a moment-matched Gaussian process $(\boldsymbol{B}(t))$ driven by standard Brownian motion B(t) that can be coupled with $(Y_{\epsilon}(t))$ in the sense of Lemma 8.1,

\begin{align*} \boldsymbol{B}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dB(u)\,. \end{align*}

In particular, for each fixed time t, $\boldsymbol{Y}_{\epsilon}(t)$ converges in distribution to the Gaussian law

\begin{equation*} \mathcal{N} \!\left(\int_{0}^{t}\boldsymbol{f}_{t}(u)du,\int_{0}^{t}\boldsymbol{f}_{t}(u)\boldsymbol{f}_{t}(u)^{T}du\right). \end{equation*}

Proof. Consider the SDE

\begin{align*} d\boldsymbol{R}_{\epsilon}(u) = \boldsymbol{f}_{t}(u)dX_{\epsilon}(u), \end{align*}

with solution given by

\begin{align*} \boldsymbol{R}_{\epsilon}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dX_{\epsilon}(u). \end{align*}

Since $\left\|\boldsymbol{f}_{t}\right\|_{L_{2}}<\infty$ and $(X_{\epsilon}(t))$ is a semi-martingale, its quadratic variation is well defined; see Appendix B. Therefore, we can compute the mean and variance of $\boldsymbol{R}_{\epsilon}(t)$ as

\begin{align*} \mathbb{E}\!\left[\boldsymbol{R}_{\epsilon}(t)\right] = \mu_{W}M_{Z_{\epsilon}}^{(1)}\int_{0}^{t}\boldsymbol{f}_{t}(u)du,\quad \mathbb{Var}\!\left[\boldsymbol{R}_{\epsilon}(t)\right] = \left[\mu_{W}^{2}M_{Z_{\epsilon}}^{(2)}+\sigma_{W}^{2}M^{(1)}_{Z_{\epsilon}}\right]\int_{0}^{t}\boldsymbol{f}_{t}(u)\boldsymbol{f}_{t}(u)^{T}du. \end{align*}

Centring and normalising $(X_{\epsilon}(t))$ to obtain $(Y_{\epsilon}(t))$ as before, we now consider the process

\begin{align*} \boldsymbol{Y}_{\epsilon}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dY_{\epsilon}(u), \end{align*}

along with a matching process driven by standard Brownian motion,

\begin{align*} \boldsymbol{B}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dB(u), \end{align*}

where

\begin{equation*} \mathbb{E}\!\left[\boldsymbol{B}(t)\right] = \int_{0}^{t}\boldsymbol{f}_{t}(u)du, \hspace{1.6cm} \mathbb{Var}\!\left[\boldsymbol{B}(t)\right] = \int_{0}^{t}\boldsymbol{f}_{t}(u)\boldsymbol{f}_{t}(u)^{T}du. \end{equation*}

Letting $T(t) = Y_{\epsilon}(t)-B(t)$ , we have

\begin{align*} \left\|\boldsymbol{Y}_{\epsilon}(t)-\boldsymbol{B}(t)\right\|_{1} = \left\|\int_{0}^{t}\boldsymbol{f}_{t}(u)dT(u)\right\|_{1} \leq \sup_{u\in[0,t]}||\boldsymbol{f}_{t}(u)||_{1}\left|T(t)\right| = \sup_{u\in[0,t]}||\boldsymbol{f}_{t}(u)||_{1}|Y_{\epsilon}(t)-B(t)|, \end{align*}

so that

\begin{align*} \sup_{t\in[0,T]}\left\|\boldsymbol{Y}_{\epsilon}(t)-\boldsymbol{B}(t)\right\|_{1}^{2} \leq \sup_{t\in[0,T]}\sup_{u\in[0,t]}||\boldsymbol{f}_{t}(u)||_{1}^{2}\sup_{t\in[0,T]}|Y_{\epsilon}(t)-B(t)|^{2}, \end{align*}

and hence, applying Lemma 8.1, there exist coupled processes $(Y_{\epsilon}(t))$ and (B(t)) such that

\begin{align*} \mathbb{E} \!\left[\sup_{t\in[0,T]}\left\|\boldsymbol{Y}_{\epsilon}(t)-\boldsymbol{B}(t)\right\|_{1}^{2}\right]&\leq \sup_{t\in[0,T]}\sup_{u\in[0,t]}||\boldsymbol{f}_{t}(u)||_{1}^{2}\mathbb{E}\!\left[\sup_{t\in[0,T]}|Y_{\epsilon}(t)-B(t)|^{2}\right]\\ &\leq C_{T}\sup_{t\in[0,T]}\sup_{u\in[0,t]}\left\|\boldsymbol{f}_{t}(u)\right\|_{1}^{2}\max\big\{S_{\epsilon}^{1/2}, S_{\epsilon}^{1/3}\big\}. \end{align*}

By our assumptions on $\boldsymbol{f}_{t}(u), u \in [0, t]$ , and under the condition (4.6) of Theorem 4.1, we have $\lim_{\epsilon \rightarrow 0} S_{\epsilon} = 0$ , and since $C_{T}$ is independent of $\epsilon$ ,

\begin{align*} \lim_{\epsilon\rightarrow 0}\mathbb{E} \!\left[\sup_{t\in[0,T]}\left\|\boldsymbol{Y}_{\epsilon}(t)-\boldsymbol{B}(t)\right\|_{1}^{2}\right] = 0, \end{align*}

which completes the proof.
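To make the limiting law in Theorem 8.1 concrete, consider the scalar Langevin-type case $\boldsymbol{f}_{t}(u) = e^{a(t-u)}h$ from Section 7, for which both integrals in the limiting Gaussian have elementary closed forms. The sketch below checks the closed forms against direct quadrature (illustrative values, with $a<0$ for a stable system):

```python
import math

def limit_moments(a, h, t):
    """Closed forms for the limiting Gaussian N(m, v) of Theorem 8.1
    with f_t(u) = exp(a(t-u)) h:  m = int_0^t f,  v = int_0^t f^2."""
    m = h * (math.exp(a * t) - 1.0) / a
    v = h * h * (math.exp(2.0 * a * t) - 1.0) / (2.0 * a)
    return m, v

def quad_moments(a, h, t, n=200000):
    """Midpoint quadrature of the same two integrals."""
    m = v = 0.0
    du = t / n
    for k in range(n):
        f = math.exp(a * (t - (k + 0.5) * du)) * h
        m += f * du
        v += f * f * du
    return m, v

m1, v1 = limit_moments(-0.5, 1.0, 1.0)
m2, v2 = quad_moments(-0.5, 1.0, 1.0)
print(m1, v1)
```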

Building on the results in Sections 6.1, 6.2, and 6.3, Theorem 8.1 proves the Gaussian representation of $\boldsymbol{R}_{\epsilon}(t)$ when the underlying process is known to converge to a diffusion.

In Sections 6.2 and 6.3 we justified the Gaussian convergence of the NTS and the GH process residuals, respectively, and therefore Theorem 8.1 justifies the Gaussian representation of $(\boldsymbol{R}_{\epsilon}(t))$ . While an exact expression for $M^{(1)}_{Z_{\epsilon}}$ exists in the NTS case, we can only provide small-argument bounds for the GIG case; see Appendix D.

In Section 6.1 we showed that the NG residual process fails to converge to Brownian motion as $\epsilon\to 0$ . Therefore, there is no mathematically justified reason to model $\boldsymbol{R}_{\epsilon}(t)$ as a diffusion process in that case. While the large concentration of residual jumps near 0 shown in Figure 4 suggests it might be reasonable to set $\boldsymbol{R}_{\epsilon}(t) = 0$ , we could alternatively model the residual process via its mean, setting ${\hat{\boldsymbol{R}}}_{\epsilon}(t) = \mathbb{E}[{\boldsymbol{R}}_{\epsilon}(t)]$ , as suggested in [Reference Asmussen and Rosinski2].

9. Conclusions

In this work, new theoretical properties of NVM processes were established, motivated by the desire to investigate the application of some of the Bayesian inference algorithms introduced in [Reference Godsill, Riabiz and Kontoyiannis23] to state space models driven by more general classes of non-Gaussian noise. We identified natural sufficient conditions that guarantee the Gaussian convergence of the error process associated with an NVM process subjected to random Poisson truncations of their shot-noise series representations. We also showed that this Gaussian convergence does not always occur, and provided sufficient conditions for cases when it fails.

Moreover, under the same Gaussian-convergence conditions, we established the process-level convergence of a family of associated stochastic integrals, thus justifying the Gaussian representation of the residuals of these integrals. In Section 6 we showed that Brownian motion with drift, subordinated by the residual of a TS or GIG-type subordinator, converges to a Wiener process. Furthermore, in Section 8 we established the validity of the Gaussian representation of the residual of the stochastic integral with respect to the NTS and GH processes (excluding the NG edge case).

Subordination by the residual of a gamma process, by contrast, was shown not to converge to a Gaussian limit. Therefore, the residuals of stochastic integrals with respect to NG processes cannot be represented by Gaussians. One alternative direction would be to investigate whether fitting a Gaussian to the residual would still yield improved accuracy in Bayesian inference procedures such as particle filtering. A more interesting possibility would be to explore the distribution to which the NG residual converges, as, for any $\epsilon = h(\tilde{\epsilon}) > 0$ , in Section 6.1 we showed that the residual is non-zero and heavier-tailed than the Gaussian.

Finally, we note that the current analysis applies to one-dimensional Lévy processes and the corresponding linear SDEs. An extension of the methodology to multivariate Lévy processes would allow the generalisation of the ‘Lévy state space model’ [Reference Godsill, Riabiz and Kontoyiannis23] and the associated methodological tools to more general state space models, importantly allowing for more sophisticated modelling of spatial dependencies in high-dimensional data.

Appendix A. Upper and lower bounds on Jaeger integrals

Define first the Jaeger integral [Reference Freitas20] as parametrised in [Reference Godsill and Kndap22, Section 3]:

\begin{equation*}J(z)=\int_0^\infty\frac{e^{-\frac{x^2z}{2\delta^2}}}{x|H_{|\lambda|}(x)|^2}dx.\end{equation*}

The integrand in the GIG Lévy measure in (2.12) depends on the value of $|\lambda|$ [Reference Godsill and Kndap22]. We first consider the region $|\lambda| \leq \frac{1}{2}$ . From [Reference Godsill and Kndap22, Theorem 3], a suitable upper bound on $\left[{z|H_{|\lambda|}(z)|^{2}}\right]^{-1}$ for $|\lambda| \leq \frac{1}{2}$ is given by

\begin{equation*} \frac{1}{z|H_{|\lambda|}(z)|^{2}} \leq \frac{1}{B(z)}, \end{equation*}

where

\begin{equation*} \frac{1}{B(z)} = \begin{cases} \frac{1}{H_{0}}\left(\frac{z}{z_{0}}\right)^{2|\lambda|-1}, \ & z < z_{0},\\ \frac{1}{H_{0}}, \ & z \geq z_{0}, \end{cases} \end{equation*}

with $z_{0} \in (0, \infty)$ and $H_{0} = z_{0}|H_{|\lambda|}(z_{0})|^{2}$ . This leads to the upper bound

\begin{align*} J(z) &\leq \frac{1}{H_{0}z_{0}^{2|\lambda|-1}}\int_{0}^{z_{0}}y^{2|\lambda|-1} \exp \!\left({-}\frac{zy^{2}}{2\delta^{2}}\right)dy + \frac{1}{H_{0}}\int_{z_{0}}^{\infty}\exp \!\left({-}\frac{zy^{2}}{2\delta^{2}}\right)dy\\ &=\frac{1}{H_{0}z_{0}^{2|\lambda|-1}}\left[\frac{\delta^{2|\lambda|}2^{|\lambda|-1}}{z^{|\lambda|}}\gamma\!\left(|\lambda|, \frac{z_{0}^{2}}{2\delta^{2}}z\right)\right]+\frac{1}{H_{0}}\left[\frac{\delta}{\sqrt{2}}z^{-\frac{1}{2}}\Gamma\!\left(\frac{1}{2},\frac{z_{0}^{2}}{2\delta^{2}}z\right)\right]. \end{align*}

Recall the series expansion for $\gamma(s,x)$ from Section 6.1 and also that for $\Gamma(s,x)$ we have [Reference Bateman and Erdélyi6], when $|\lambda| \neq 0$ ,

\begin{align*} \Gamma(s,x) = \Gamma(s) - \sum_{n=0}^{\infty}({-}1)^{n}\frac{x^{s+n}}{n!(s+n)}. \end{align*}
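As a quick numerical check of this expansion: for $s=\frac{1}{2}$ , the value arising in the bound above, we have the identity $\Gamma(\frac{1}{2}, x) = \sqrt{\pi}\,\mathrm{erfc}(\sqrt{x})$ , and the truncated series agrees with it to high accuracy for small $x$ :

```python
import math

def upper_gamma_series(s, x, n_terms=60):
    """Gamma(s, x) = Gamma(s) - sum_{n>=0} (-1)^n x^(s+n) / (n! (s+n)),
    truncated; valid for s not a non-positive integer."""
    acc = 0.0
    for n in range(n_terms):
        acc += (-1.0) ** n * x ** (s + n) / (math.factorial(n) * (s + n))
    return math.gamma(s) - acc

x = 0.3
series = upper_gamma_series(0.5, x)
exact = math.sqrt(math.pi) * math.erfc(math.sqrt(x))  # Gamma(1/2, x)
print(series, exact)
```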

Combining these with the previous bound, we have

\begin{align} J(z) &\leq \frac{1}{H_{0}z_{0}^{2|\lambda|-1}}\left\{\frac{z_{0}^{2|\lambda|}}{2}\left[\frac{1}{|\lambda|}-\frac{z_{0}^{2}}{2\delta^{2}}\frac{1}{|\lambda|+1}z+\mathcal{O}\big(z^{2}\big)\right]\right\}\nonumber\\ &\quad +\frac{\delta}{H_0\sqrt{2}}\left\{\sqrt{\pi} z^{-\frac{1}{2}} - \frac{2z_{0}z^{\frac{1}{2}}}{\delta\sqrt{2}}z^{-\frac{1}{2}}+\frac{z_{0}^{3}z^{\frac{3}{2}}}{3\delta^{3}\sqrt{2}}z^{-\frac{1}{2}} +\mathcal{O}\big(z^{2}\big) \right\}\nonumber\\ &= \frac{1}{H_{0}z_{0}^{2|\lambda|-1}}\left\{\frac{z_{0}^{2|\lambda|}}{2}\left[\frac{1}{|\lambda|}+\mathcal{O}(z)\right]\right\}+\frac{\delta}{H_{0}}\sqrt{\frac{\pi}{2}}z^{-\frac{1}{2}} - \frac{z_{0}}{H_{0}}+\mathcal{O}(z)\nonumber\\ &= \frac{\delta}{H_{0}}\sqrt{\frac{\pi}{2}}z^{-\frac{1}{2}} + \frac{z_{0}}{H_{0}}\left(\frac{1}{2|\lambda|}-1\right) + \mathcal{O}(z).\nonumber \end{align}

Finally, we refer to [Reference Godsill and Kndap22, Theorem 1] for a lower bound on the Jaeger integral for $|\lambda|\leq \frac{1}{2}$ :

\begin{equation*} J(z) = \int_{0}^{\infty}\frac{1}{y|H_{|\lambda|}(y)|^{2}}\exp \Big({-}\frac{zy^{2}}{2\delta^{2}}\Big)dy \geq \frac{\pi}{2}\int_{0}^{\infty}\exp \Big({-}\frac{zy^{2}}{2\delta^{2}}\Big)dy = \frac{\delta\pi}{2}\sqrt{\frac{\pi}{2}}z^{-\frac{1}{2}}. \end{equation*}

When $|\lambda| > \frac{1}{2}$ , $y|H_{|\lambda|}(y)|^{2}$ is non-increasing rather than non-decreasing, and the relevant bounds become

(A.1) \begin{align} \int_{0}^{\infty}\frac{1}{y|H_{|\lambda|}(y)|^{2}}\exp \Big({-}\frac{zy^{2}}{2\delta^{2}}\Big)dy \leq \frac{\pi}{2}\int_{0}^{\infty}\exp \Big({-}\frac{zy^{2}}{2\delta^{2}}\Big)dy &= \frac{\delta\pi}{2}\sqrt{\frac{\pi}{2}}z^{-\frac{1}{2}}. \end{align}

From [Reference Godsill and Kndap22, Theorem 3], a suitable lower bound on $\left[{z|H_{|\lambda|}(z)|^{2}}\right]^{-1}$ for $|\lambda| \geq \frac{1}{2}$ is given by

\begin{equation*} \frac{1}{z|H_{|\lambda|}(z)|^{2}} \geq \frac{1}{B(z)}, \end{equation*}

where B(z) is defined as above, from which we deduce that for small $z \in [0, \epsilon]$ and $|\lambda| \leq \frac{1}{2}$ ,

\begin{equation*} \delta \!\left(\frac{\pi}{2}\right)^{\frac{3}{2}}z^{-\frac{1}{2}} \leq J(z) \leq \frac{\delta}{H_{0}}\left(\frac{\pi}{2}\right)^{\frac{1}{2}}z^{-\frac{1}{2}} + \frac{z_{0}}{H_{0}}\left(\frac{1}{2|\lambda|}-1\right) +\mathcal{O}(z). \end{equation*}

Similarly, if $|\lambda|\geq \frac{1}{2}$ ,

\begin{equation*} \frac{\delta}{H_{0}}\left(\frac{\pi}{2}\right)^{\frac{1}{2}}z^{-\frac{1}{2}} + \frac{z_{0}}{H_{0}}\left(\frac{1}{2|\lambda|}-1\right) +\mathcal{O}(z) \leq J(z) \leq \delta \!\left(\frac{\pi}{2}\right)^{\frac{3}{2}}z^{-\frac{1}{2}}. \end{equation*}

Appendix B. Derivation of SDE moments

Recall the definition of $\boldsymbol{R}_{\epsilon}(t)$ as

\begin{align*} \boldsymbol{R}_{\epsilon}(t) = \int_{0}^{t}\boldsymbol{f}_{t}(u)dX^{\epsilon}_{u}. \end{align*}

Since $(X_{t}^{\epsilon})$ is a semi-martingale, it is clear that $\tilde{X}_{t}^{\epsilon} = X_{t}^{\epsilon} - t\mu_{W}M_{Z_{\epsilon}}^{(1)}$ is a martingale, and

\begin{align*} \boldsymbol{R}_{\epsilon}(t) = \mu_{W}M_{Z_{\epsilon}}^{(1)}\int_{0}^{t}\boldsymbol{f}_{t}(u)du + \int_{0}^{t}\boldsymbol{f}_{t}(u)d\tilde{X}^{\epsilon}_{u}. \end{align*}

Then, clearly, we have

\begin{equation*} \mathbb{E}\!\left[\boldsymbol{R}_{\epsilon}(t)\right] = \mu_{W}M_{Z_{\epsilon}}^{(1)}\int_{0}^{t}\boldsymbol{f}_{t}(u)du, \end{equation*}

and the Itô isometry yields

\begin{align*} \mathbb{Var}\!\left[\boldsymbol{R}_{\epsilon}(t)\right] = \mathbb{E}\!\left[\left(\int_{0}^{t}\boldsymbol{f}_{t}(u)d\tilde{X}^{\epsilon}_{u}\right)\left(\int_{0}^{t}\boldsymbol{f}_{t}(u)d\tilde{X}^{\epsilon}_{u}\right)^{T}\right] = \mathbb{E}\!\left[\int_{0}^{t}\boldsymbol{f}_{t}(u)\boldsymbol{f}_{t}(u)^{T}d\big[\tilde{X}^{\epsilon}\big]_{u}\right], \end{align*}

where $\big[\tilde{X}^{\epsilon}\big]_{t}$ is the quadratic variation of the compensated Lévy process, with expectation [Reference Cont and Tankov12]

\begin{align*} \mathbb{E}\!\left[d\!\left[\tilde{X}^{\epsilon}\right]_{t}\right]= \mathbb{E}\!\left[\int_{\mathbb{R} \backslash \{0\}}x^{2}{N}(dt,dx)\right] = \left[\mu_{W}^{2}M_{Z_{\epsilon}}^{(2)}+\sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)}\right]dt. \end{align*}

Finally, the corresponding expression for the variance of $\boldsymbol{R}_{\epsilon}(t)$ is also easily obtained as

\begin{equation*} \mathbb{Var}\!\left[\boldsymbol{R}_{\epsilon}(t)\right] = \left[\mu_{W}^{2}M_{Z_{\epsilon}}^{(2)}+\sigma_{W}^{2}M_{Z_{\epsilon}}^{(1)}\right]\int_{0}^{t}\boldsymbol{f}_{t}(u)\boldsymbol{f}_{t}(u)^{T}du. \end{equation*}
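The mean and variance formulas above can be sanity-checked by Monte Carlo on a toy finite-activity analogue of the residual: a compound Poisson process with rate $\lambda$ and jump sizes uniform on $(0,\epsilon]$ , for which $M^{(1)}_{Z_{\epsilon}} = \lambda\epsilon/2$ and $M^{(2)}_{Z_{\epsilon}} = \lambda\epsilon^{2}/3$ . This toy measure and all parameter values are illustrative, not the infinite-activity examples of the paper.

```python
import math
import random

def poisson(rate, rng):
    """Knuth's Poisson sampler; adequate for moderate rates."""
    l, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p < l:
            return k
        k += 1

def mc_residual_moments(nsim, lam, eps, mu_w, sigma_w, a, h, t, seed=0):
    """Monte Carlo estimate of E[R] and Var[R] for the toy residual:
    compound Poisson jumps Z ~ U(0, eps], f_t(u) = exp(a(t-u)) h."""
    rng = random.Random(seed)
    vals = []
    for _ in range(nsim):
        r = 0.0
        for _ in range(poisson(lam * t, rng)):
            z, v = eps * rng.random(), t * rng.random()
            x = mu_w * z + sigma_w * math.sqrt(z) * rng.gauss(0.0, 1.0)
            r += x * math.exp(a * (t - v)) * h
        vals.append(r)
    m = sum(vals) / nsim
    return m, sum((r - m) ** 2 for r in vals) / (nsim - 1)

lam, eps, mu_w, sigma_w, a, h, t = 50.0, 0.1, 1.0, 0.5, -0.5, 1.0, 1.0
M1, M2 = lam * eps / 2.0, lam * eps ** 2 / 3.0
int_f = h * (math.exp(a * t) - 1.0) / a
int_f2 = h * h * (math.exp(2.0 * a * t) - 1.0) / (2.0 * a)
mean_th = mu_w * M1 * int_f
var_th = (mu_w ** 2 * M2 + sigma_w ** 2 * M1) * int_f2
mean_mc, var_mc = mc_residual_moments(20000, lam, eps, mu_w, sigma_w, a, h, t)
print(mean_th, mean_mc, var_th, var_mc)
```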

Appendix C. Centring for NVM processes

The following lemma is probably known, but as we could not easily locate a specific reference, we provide a proof for completeness. Recall the convergent generalised shot-noise representation of a Lévy process from Section 2.4:

\begin{equation*} X(t) = \sum_{i=1}^{\infty}\left[H(Z_i, U_{i})\mathbb{1}(V_{i} \leq t) -tb_{i}\right], \ 0 \leq t \leq T. \end{equation*}

Lemma C.1. For an NVM Lévy process (X(t)), the following generalised shot-noise representation converges a.s.:

\begin{equation*} X(t) = \sum_{i=1}^{\infty}H(Z_i, U_{i})\mathbb{1}(V_{i} \leq t), \quad {0\leq t\leq T}. \end{equation*}

Proof. Recall the form of the Lévy measure of NVM processes from (2.6). An NVM process need not be compensated, and hence we may take $b_i=0$ for all i in the shot-noise representation, provided

(C.1) \begin{equation} \int_{-\infty}^{\infty}\left(1\wedge|x|\right)Q_{X}(dx) = \int_{-\infty}^{\infty}\left(1\wedge|x|\right)\int_{0}^{\infty}\mathcal{N}\!\left(x;\mu_{W}z,\sigma_{W}^{2}z\right)Q_{Z}(dz)dx < \infty. \end{equation}

Since by the Lévy measure definition $\int_{-\infty}^{\infty}\left(1\wedge x^{2}\right)Q_{X}(dx)$ is finite, we must also have that $\int_{\left\{|x|>1\right\}}Q_{X}(dx)$ is finite. So, concentrating on the interval $|x|\leq1$ , we have

(C.2) \begin{align} I &\,:\!=\, \int_{|x|\leq1}|x|\int_{0}^{\infty}\mathcal{N}\!\left(x;\,\mu_{W}z,\sigma_{W}^{2}z\right)Q_{Z}(dz)dx\nonumber\\ &=\int_{|x|\leq1}\frac{|x|}{\sqrt{2\pi\sigma_{W}^{2}}}\int_{0}^{\infty}z^{-\frac{1}{2}}\exp\!\left[-\frac{1}{2\sigma_{W}^{2}z}\left(x-\mu_{W}z\right)^{2}\right]Q_{Z}(dz)dx\nonumber\\ &= \int_{|x|\leq1}\frac{|x|}{\sqrt{2\pi\sigma_{W}^{2}}}\exp\!\left(\frac{x\mu_{W}}{\sigma_{W}^{2}}\right)\int_{0}^{\infty}z^{-\frac{1}{2}}\exp\!\left({-}\frac{x^{2}}{2\sigma_{W}^{2}}z^{-1}-\frac{\mu_{W}^{2}}{2\sigma_{W}^{2}}z\right)Q_{Z}(dz)dx\nonumber \\ &\leq \int_{|x|\leq1}\frac{|x|}{\sqrt{2\pi\sigma_{W}^{2}}}\exp\!\left(\frac{\mu_{W}}{\sigma_{W}^{2}}\right)\int_{0}^{\infty}z^{-\frac{1}{2}}\exp\!\left({-}\frac{x^{2}}{2\sigma_{W}^{2}}z^{-1}\right)Q_{Z}(dz)dx. \end{align}

Note that the unimodal function $z^{-\frac{3}{2}}\exp\big\{-x^2/\big(2z\sigma_{W}^{2}\big)\big\}$ achieves its maximum at $z=\frac{x^{2}}{3\sigma_{W}^{2}}$ . Hence, the inner integrand may be bounded, for all $z\in (0,1]$ , by

\begin{align*} z^{-\frac{1}{2}}\exp\!\left({-}\frac{x^{2}}{2\sigma_{W}^{2}}z^{-1}\right)\leq \begin{cases} z \frac{|x|^{-3}}{(\sqrt{3}\sigma_{W})^3}e^{-3/2},& |x|>3\sigma_W^2,\\ z\exp\big({-}\frac{x^{2}}{3\sigma_{W}^{2}}\big),&|x|\leq{3\sigma_{W}^{2}}, \end{cases} \end{align*}

with the $|x|\leq{3\sigma_{W}^{2}}$ case corresponding to the supremum lying to the right of $z=1$ . Therefore, the inner integral in (C.2) over $z\in(0,1)$ can be bounded, using (2.3), by

\begin{align*}\int_{0}^{1}z^{-\frac{1}{2}}\exp\!\left({-}\frac{x^{2}}{2\sigma_{W}^{2}}z^{-1}\right)Q_{Z}(dz)\leq \begin{cases}C\frac{|x|^{-3}}{(\sqrt{3}\sigma_{W})^3}e^{-3/2},& |x|>3\sigma_W^2,\\C\exp\big({-}\frac{x^{2}}{3\sigma_{W}^{2}}\big),&|x|\leq{3\sigma_{W}^{2}},\end{cases}\end{align*}

where $C=\int_0^1zQ_Z(dz)<\infty$ is a constant that does not depend on x or $\epsilon$ . Moreover, using (2.3) again, we obtain

\begin{align*} \int_{1}^{\infty}z^{-\frac{1}{2}}\exp\!\left({-}\frac{x^{2}}{2\sigma_{W}^{2}}z^{-1}\right)Q_{Z}(dz)\leq \int_{1}^{\infty}Q_{Z}(dz)=C^{\prime}<\infty.\end{align*}

Combining the above bounds yields

\begin{align*}I \;\leq\; \frac{1}{\sqrt{2\pi\sigma_{W}^{2}}}\exp\!\left(\frac{\mu_{W}}{\sigma_{W}^{2}}\right) & \left[\int_{\left\{|x|\leq (3\sigma_W^2\wedge 1)\right\}}{|x|}C\exp\bigg({-}\frac{x^{2}}{3\sigma_{W}^{2}}\bigg)dx\right.\\ & \left.\;\;+\int_{\left\{(3\sigma_W^2\wedge 1)< |x|\leq 1\right\}}{|x|}C\frac{|x|^{-3}}{\big\{\sqrt{3}\sigma_{W}\big\}^3}e^{-3/2}dx\right.\\ & \left.\;\;+\int_{\left\{|x|\geq 1\right\}}{|x|}C^{\prime}dx\right]<\infty, \end{align*}

establishing (C.1) and confirming that compensation is not required for NVM Lévy processes.

Given that I is finite, the result of the lemma will follow from [Reference Rosinski39, Theorem 4.1], once we establish that, with

\begin{equation*}A(s)\,:\!=\,\int_0^s \int_{|x|\leq 1}x\sigma(r;\,dx)dr=\int_0^s \int_{|x|\leq 1}x{\mathcal{N}}(x;\,h(r)\mu_W,h(r)\sigma_W^2)dxdr, \quad s>0,\end{equation*}

the limit $a\,:\!=\,\lim_{s\to\infty} A(s)$ exists and is finite. In the definition of A(s), the term $\sigma(\cdot;\,\cdot)$ denotes the kernel, $\sigma(h(r);\,F)={\mathbb P}(H(h(r),U)\in F)$ for all measurable F, which in the present setting is a collection of Gaussian measures. Since $r=Q_Z([z,+\infty ))$ , we have $dr=-Q_Z(dz)$ , and hence

\begin{align*} A(s)=\int_0^s \int_{|x|\leq 1}x{\mathcal{N}}\!\left(x;\,h(r)\mu_W,h(r)\sigma_W^2\right)dxdr=\int_{h(s)}^\infty \int_{|x|\leq 1}x{\mathcal{N}}\!\left(x;\,z\mu_W,z\sigma_W^2\right)dxQ_Z(dz).\end{align*}

Since we already showed that

\begin{align*} \int_{|x|\leq1}|x|\int_{0}^{\infty}\mathcal{N}\!\left(x;\,\mu_{W}z,\sigma_{W}^{2}z\right)Q_{Z}(dz)dx=I<\infty,\end{align*}

we can exchange the order of integration, and since $h(s)\to 0$ as $s\to\infty$ by definition, it follows that the limit $a\,:\!=\,\lim_{s\to\infty} A(s)$ exists. Finally, we have

\begin{align*} \left|\lim_{s\rightarrow \infty}A(s)\right|&=\left|\int_{|x|\leq 1}x\int_{0}^{\infty}\mathcal{N}\!\left(x;\,\mu_{W}z,\sigma_{W}^{2}z\right)Q_{Z}(dz)dx\right|\\&\leq \int_{|x|\leq1}|x|\int_{0}^{\infty}\mathcal{N}\!\left(x;\,\mu_{W}z,\sigma_{W}^{2}z\right)Q_{Z}(dz)dx <\infty,\end{align*}

showing that a is finite and concluding the proof.

Appendix D. Moments for example processes

Recalling the discussion at the end of Section 8 in connection with the ‘Lévy state space model’ [Reference Godsill, Riabiz and Kontoyiannis23], we provide here expressions for $M_{Z_{\epsilon}}^{(1)}$ and $M_{Z_{\epsilon}}^{(2)}$ for our example processes.

For the NG process in Section 6.1, we have

\begin{align*} M_{Z_{\epsilon}}^{(1)} = \int_{0}^{\epsilon}z\nu z^{-1}\exp\!\left({-}\frac{1}{2}\gamma^{2}z\right)dz = \frac{2\nu}{\gamma^{2}}\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) \end{align*}

and

\begin{align*} M_{Z_{\epsilon}}^{(2)} = \int_{0}^{\epsilon}z^{2}\nu z^{-1}\exp\!\left({-}\frac{1}{2}\gamma^{2}z\right)dz =\frac{4\nu}{\gamma^{4}}\gamma\!\left(2, \frac{1}{2}\gamma^{2}\epsilon\right). \end{align*}
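Since the first argument of the incomplete gamma function is an integer here, the elementary forms $\gamma(1,x) = 1 - e^{-x}$ and $\gamma(2,x) = 1 - (1+x)e^{-x}$ apply, so the NG moments can be evaluated without special-function libraries. The sketch below checks them against direct quadrature (illustrative $\nu$ , $\gamma$ values):

```python
import math

def ng_moments(eps, nu, gam):
    """Closed-form truncated NG moments, using the elementary forms
    gamma(1, x) = 1 - exp(-x) and gamma(2, x) = 1 - (1 + x) exp(-x)."""
    x = 0.5 * gam ** 2 * eps
    M1 = (2.0 * nu / gam ** 2) * (1.0 - math.exp(-x))
    M2 = (4.0 * nu / gam ** 4) * (1.0 - (1.0 + x) * math.exp(-x))
    return M1, M2

def quad_moment(n, eps, nu, gam, grid=200000):
    """Midpoint quadrature of int_0^eps z^n nu z^(-1) exp(-gam^2 z/2) dz."""
    dz, total = eps / grid, 0.0
    for k in range(grid):
        z = (k + 0.5) * dz
        total += z ** (n - 1) * nu * math.exp(-0.5 * gam ** 2 * z) * dz
    return total

eps, nu, gam = 0.1, 2.0, 1.4
M1, M2 = ng_moments(eps, nu, gam)
print(M1, M2)
```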

Similarly, for the NTS process in Section 6.2,

\begin{align*} M_{Z_{\epsilon}}^{(1)} = \int_{0}^{\epsilon}zAz^{-1-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}z\right)dz = A \gamma^{\frac{\kappa-1}{\kappa}}2^{1-\kappa}\gamma\!\left(1-\kappa,\frac{1}{2}\epsilon\gamma^{\frac{1}{\kappa}}\right) \end{align*}

and

\begin{align*} M_{Z_{\epsilon}}^{(2)} = \int_{0}^{\epsilon}z^{2}Az^{-1-\kappa}\exp\!\left({-}\frac{1}{2}\gamma^{\frac{1}{\kappa}}z\right)dz =A\gamma^{\frac{\kappa-2}{\kappa}}2^{2-\kappa}\gamma\!\left(2-\kappa,\frac{1}{2}\epsilon\gamma^{\frac{1}{\kappa}}\right). \end{align*}
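Here $1-\kappa$ and $2-\kappa$ are non-integer, so evaluating the NTS moments requires the lower incomplete gamma function at non-integer first argument; for the small arguments arising at small truncation levels, the series $\gamma(s,x) = x^{s}\sum_{n\geq0}({-}x)^{n}/(n!(s+n))$ converges rapidly. A sketch for $M^{(1)}_{Z_{\epsilon}}$ , checked against quadrature (illustrative $A$ , $\kappa$ , $\gamma$ values):

```python
import math

def lower_gamma_series(s, x, n_terms=40):
    """gamma(s, x) = x^s sum_{n>=0} (-x)^n / (n! (s + n)), truncated."""
    return x ** s * sum((-x) ** n / (math.factorial(n) * (s + n))
                        for n in range(n_terms))

def nts_M1(eps, A, kappa, gam):
    """M^(1) = A gam^((kappa-1)/kappa) 2^(1-kappa)
              * gamma(1 - kappa, eps gam^(1/kappa) / 2)."""
    x = 0.5 * eps * gam ** (1.0 / kappa)
    return (A * gam ** ((kappa - 1.0) / kappa) * 2.0 ** (1.0 - kappa)
            * lower_gamma_series(1.0 - kappa, x))

def quad_M1(eps, A, kappa, gam, grid=200000):
    """Quadrature of int_0^eps A z^(-kappa) e^(-gam^(1/kappa) z / 2) dz
    on a log-spaced grid (the integrand behaves like z^(-kappa) at 0)."""
    lo, total = eps * 1e-14, 0.0
    for k in range(grid):
        a = lo * (eps / lo) ** (k / grid)
        b = lo * (eps / lo) ** ((k + 1) / grid)
        z = 0.5 * (a + b)
        total += (A * z ** (-kappa)
                  * math.exp(-0.5 * gam ** (1.0 / kappa) * z) * (b - a))
    return total

eps, A, kappa, gam = 0.1, 1.0, 0.5, 1.5
val_series = nts_M1(eps, A, kappa, gam)
val_quad = quad_M1(eps, A, kappa, gam)
print(val_series, val_quad)
```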

The intractability of the Jaeger integral prohibits the derivation of an analytical expression for the moments of the GH process. However, for sufficiently small truncation levels $\epsilon$ , the use of asymptotic moment expansions provides useful approximations. For now, we restrict our analysis to the parameter range $|\lambda| \leq \frac{1}{2}$ ; the range $|\lambda| \geq \frac{1}{2}$ yields similar results.

To obtain a lower bound on the expected value of the subordinator jumps, $M_{Z_{\epsilon}}^{(1)}$ , we use the bound $\frac{1}{z|H_{|\lambda|}(z)|^{2}}\geq \frac{\pi}{2}$ :

\begin{align*} M_{Z_{\epsilon}}^{(1)} &= \int_{0}^{\epsilon}z\frac{e^{-\frac{\gamma^{2}z}{2}}}{z}\left[\max(0,\lambda) + \frac{2}{\pi^{2}}\int_{0}^{\infty}\frac{1}{y\!\left|H_{|\lambda|}(y)\right|^{2}}e^{-\frac{zy^{2}}{2\delta^{2}}}dy \right]dz\\ &\geq \max(0,\lambda)\frac{2}{\gamma^{2}}\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{1}{\pi}\int_{0}^{\epsilon}e^{-\frac{\gamma^{2}}{2}z}\int_{0}^{\infty}e^{-\frac{zy^{2}}{2\delta^{2}}}dydz\\ &= \frac{2\max(0,\lambda)}{\gamma^{2}}\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{\delta}{\gamma}{\textrm{erf}}\!\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right). \end{align*}

Using the expansion for the erf function in Section 6.1 and the series expansion of the exponential function yields the lower bound

\begin{align} M_{Z_{\epsilon}}^{(1)} &\geq \frac{2\max(0,\lambda)}{\gamma^{2}}\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{\delta}{\gamma}{\textrm{erf}}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right) \nonumber\\ &=\frac{2\max(0,\lambda)}{\gamma^{2}}\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{\delta}{\gamma}\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}{\frac{({-}1)^{n}}{n!(2n+1)}\left(\frac{\gamma\sqrt{\epsilon}}{\sqrt{2}}\right)^{2n+1}},\nonumber \end{align}

which, for $\epsilon\to 0$ , is equal to

(D.1) \begin{equation} \frac{\delta\sqrt{2}}{\sqrt{\pi}}\sqrt{\epsilon}+ \max(0,\lambda)\epsilon + \mathcal{O}\!\left(\epsilon^{\frac{3}{2}}\right). \end{equation}

A corresponding upper bound can be derived using the bound in Appendix A. We have

\begin{align*} M_{Z_{\epsilon}}^{(1)} &=\max(0,\lambda)\int_{0}^{\epsilon}e^{-\frac{\gamma^{2}z}{2}}dz + \frac{2}{\pi^{2}}\int_{0}^{\epsilon}e^{-\frac{\gamma^{2}z}{2}}\int_{0}^{\infty}\frac{1}{y\!\left|H_{|\lambda|}(y)\right|^{2}}e^{-\frac{zy^{2}}{2\delta^{2}}}dydz\nonumber\\ &\leq \frac{2}{\gamma^{2}}\max(0,\lambda)\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{2}{\pi^{2}}\int_{0}^{\epsilon}e^{-\frac{z\gamma^{2}}{2}}\left[\frac{\sqrt{\pi}\delta}{H_{0}\sqrt{2}}z^{-\frac{1}{2}} + \frac{z_{0}}{H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\right]dz\nonumber\\ &= \frac{2}{\gamma^{2}}\max(0,\lambda)\gamma\!\left(1, \frac{1}{2}\gamma^{2}\epsilon\right)+ \frac{\sqrt{2}\delta}{H_{0}\pi\sqrt{\pi}}\left[\frac{\sqrt{2\pi}}{\gamma}{\textrm{erf}}\left(\frac{\sqrt{\epsilon}\gamma}{\sqrt{2}}\right)\right]\\ & + \frac{4z_{0}}{\pi^{2}\gamma^{2}H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\left(1-e^{-\frac{\epsilon\gamma^{2}}{2}}\right), \nonumber \end{align*}

where, as $\epsilon\to 0$ , the last expression is equal to

(D.2) \begin{equation} \frac{2\delta\sqrt{2}}{H_{0}\pi\sqrt{\pi}}\sqrt{\epsilon} + \left[\max(0,\lambda)+\frac{2z_{0}}{\pi^{2}H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\right]\epsilon + \mathcal{O}\!\left(\epsilon^{\frac{3}{2}}\right). \end{equation}

Equations (D.1)–(D.2) imply that, for small $\epsilon$, $M_{Z_\epsilon}^{(1)}$ is approximately bounded below and above by

\begin{equation*} \frac{\delta\sqrt{2}}{\sqrt{\pi}}\sqrt{\epsilon} \quad \mbox{and}\quad \frac{2\delta\sqrt{2}}{H_{0}\pi\sqrt{\pi}}\sqrt{\epsilon},\quad \mbox{respectively}, \end{equation*}

and we can therefore conclude that $M_{Z_{\epsilon}}^{(1)} \sim \sqrt{\epsilon}$ as $\epsilon\to 0$ , for $\delta \neq 0$ and $|\lambda| \leq \frac{1}{2}$ .
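The $\sqrt{\epsilon}$ rate can also be illustrated by direct quadrature of $M_{Z_\epsilon}^{(1)}$ under the GIG Lévy density appearing in the first display above. The sketch below is ours (parameter values and function names are illustrative, not from the paper); the inner integral is evaluated after the substitution $y = \delta u/\sqrt{z}$ for numerical stability, and the check is that quadrupling $\epsilon$ roughly doubles $M_{Z_\epsilon}^{(1)}$.

```python
import numpy as np
from scipy.special import hankel1
from scipy.integrate import quad

def inner(z, nu, delta):
    # \int_0^infty exp(-z y^2/(2 delta^2)) / (y |H_nu(y)|^2) dy, computed after
    # substituting y = delta * u / sqrt(z); the Gaussian factor then confines
    # the integrand to roughly u in (0, 12).
    def f(u):
        y = delta * u / np.sqrt(z)
        return np.exp(-0.5 * u**2) * (delta / np.sqrt(z)) / (y * np.abs(hankel1(nu, y))**2)
    val, _ = quad(f, 0.0, 12.0, limit=200)
    return val

def M1(eps, lam, gamma_, delta):
    # First moment of the truncated density:
    # \int_0^eps exp(-gamma^2 z/2) [max(0, lam) + (2/pi^2) * inner(z)] dz.
    nu = abs(lam)
    def f(z):
        return np.exp(-0.5 * gamma_**2 * z) * (max(0.0, lam) + (2.0 / np.pi**2) * inner(z, nu, delta))
    val, _ = quad(f, 0.0, eps, limit=100)
    return val

lam, gamma_, delta = -0.4, 1.0, 1.0   # illustrative GIG parameters with |lam| <= 1/2
ratio = M1(4e-6, lam, gamma_, delta) / M1(1e-6, lam, gamma_, delta)
print(ratio)  # close to 2 = sqrt(4), consistent with M^{(1)} ~ sqrt(eps)
```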

To characterise the behaviour of $M_{Z_{\epsilon}}^{(2)}$ , we again use the bound $\frac{1}{y|H_{|\lambda|}(y)|^{2}}\geq \frac{\pi}{2}$ , which gives

\begin{align} M_{Z_{\epsilon}}^{(2)} &= \max(0,\lambda)\int_{0}^{\epsilon}ze^{-\frac{\gamma^{2}}{2}z}dz+\frac{2}{\pi^{2}}\int_{0}^{\epsilon}ze^{-\frac{\gamma^{2}}{2}z}\int_{0}^{\infty}\frac{1}{y\!\left|H_{|\lambda|}(y)\right|^{2}}e^{-\frac{zy^{2}}{2\delta^{2}}}dydz \nonumber\\ &\geq \frac{4}{\gamma^{4}}\max(0,\lambda)\gamma\!\left(2, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{2\delta}{\gamma^{3}\sqrt{\pi}}\gamma\!\left(\frac{3}{2}, \frac{1}{2}\gamma^{2}\epsilon\right),\nonumber \end{align}

where, as $\epsilon\to 0$ , the last expression above is

(D.3) \begin{equation} \frac{\delta}{\sqrt{2\pi}}\left(\frac{2}{3}\epsilon\sqrt{\epsilon}\right)+\frac{1}{2}\max(0,\lambda)\epsilon^{2} + \mathcal{O}\big(\epsilon^{\frac{5}{2}}\big) = \frac{\sqrt{2}\delta}{3\sqrt{\pi}}\epsilon\sqrt{\epsilon} + \frac{1}{2}\max(0,\lambda)\epsilon^{2} + \mathcal{O}\big(\epsilon^{\frac{5}{2}}\big). \end{equation}
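As with (D.1), the closed-form lower bound here can be checked against its leading terms; the sketch below (parameter values and function names are ours) evaluates $\gamma(s,x)$ via the regularised `scipy.special.gammainc`.

```python
import numpy as np
from scipy.special import gammainc, gamma as Gamma

def lower_bound_M2(eps, lam, gamma_, delta):
    # Closed-form lower bound on M^{(2)}_{Z_eps}:
    # (4 max(0, lam)/gamma^4) * gamma(2, x) + (2 delta/(gamma^3 sqrt(pi))) * gamma(3/2, x),
    # with x = gamma^2 eps / 2 and gamma(s, x) = gammainc(s, x) * Gamma(s).
    x = 0.5 * gamma_**2 * eps
    return (4.0 * max(0.0, lam) / gamma_**4) * gammainc(2.0, x) * Gamma(2.0) \
           + (2.0 * delta / (gamma_**3 * np.sqrt(np.pi))) * gammainc(1.5, x) * Gamma(1.5)

def expansion_M2(eps, lam, delta):
    # Leading terms of (D.3): sqrt(2) delta eps^{3/2}/(3 sqrt(pi)) + max(0, lam) eps^2 / 2.
    return np.sqrt(2.0) * delta / (3.0 * np.sqrt(np.pi)) * eps**1.5 + 0.5 * max(0.0, lam) * eps**2

lam, gamma_, delta = 0.3, 1.5, 1.0   # illustrative values
for eps in (1e-2, 1e-4):
    lb, ex = lower_bound_M2(eps, lam, gamma_, delta), expansion_M2(eps, lam, delta)
    print(eps, lb, ex)  # agreement improves as eps shrinks
```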

For the corresponding upper bound we similarly have

\begin{align} M_{Z_{\epsilon}}^{(2)} &\leq\max(0,\lambda)\int_{0}^{\epsilon}ze^{-\frac{\gamma^{2}}{2}z}dz+ \frac{2}{\pi^{2}}\int_{0}^{\epsilon}ze^{-z\frac{\gamma^{2}}{2}}\left[\frac{\sqrt{\pi}\delta}{H_{0}\sqrt{2}}z^{-\frac{1}{2}} + \frac{z_{0}}{H_{0}}\left(\frac{1}{2|\lambda|}-1\right) \right]dz\nonumber\\ &= \frac{4}{\gamma^{4}}\left[\max(0,\lambda)+\frac{2z_{0}}{\pi^{2}H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\right]\gamma\!\left(2, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{\sqrt{2}\delta}{\pi\sqrt{\pi}H_{0}}\int_{0}^{\epsilon}z^{\frac{1}{2}}e^{-z\frac{\gamma^{2}}{2}}dz\nonumber\\ &=\frac{4}{\gamma^{4}}\left[\max(0,\lambda)+\frac{2z_{0}}{\pi^{2}H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\right]\gamma\!\left(2, \frac{1}{2}\gamma^{2}\epsilon\right) + \frac{4\delta}{\gamma^{3}\pi\sqrt{\pi}H_{0}}\gamma\!\left(\frac{3}{2}, \frac{1}{2}\gamma^{2}\epsilon\right),\nonumber \end{align}

where, as $\epsilon\to 0$, the last expression above is equal to

(D.4) \begin{equation} \frac{2\sqrt{2}\delta}{3\pi\sqrt{\pi}H_{0}}\epsilon\sqrt{\epsilon} +\frac{1}{2}\left[\max(0,\lambda)+\frac{2z_{0}}{\pi^{2}H_{0}}\left(\frac{1}{2|\lambda|}-1\right)\right]\epsilon^{2} + \mathcal{O}\big(\epsilon^{\frac{5}{2}}\big). \end{equation}

From (D.3) and (D.4) we have that, for small $\epsilon$, $M_{Z_\epsilon}^{(2)}$ is approximately bounded below and above by

\begin{equation*} \frac{\sqrt{2}\delta}{3\sqrt{\pi}}\epsilon\sqrt{\epsilon} \quad \mbox{and} \quad \frac{2\sqrt{2}\delta}{3\pi\sqrt{\pi}H_{0}}\epsilon\sqrt{\epsilon}, \quad \mbox{respectively}, \end{equation*}

and therefore $M_{Z_{\epsilon}}^{(2)} \sim \epsilon\sqrt{\epsilon}$ as $\epsilon\to 0$, for all $\delta \neq 0$ and $\left|\lambda\right|\leq \frac{1}{2}$.

Finally, in the case $\left|\lambda\right|\geq \frac{1}{2}$, the upper and lower bounds on $\frac{1}{y|H_{|\lambda|}(y)|^{2}}$ are reversed, and hence so are the bounds in (D.1), (D.2), (D.3), and (D.4), so that again $M_{Z_{\epsilon}}^{(1)} \sim \sqrt{\epsilon}$ and $M_{Z_{\epsilon}}^{(2)} \sim \epsilon\sqrt{\epsilon}$ as $\epsilon\to 0$, for all $\delta \neq 0$.
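The reversal of the bounds at $|\lambda| = \frac{1}{2}$ can be seen numerically: $y|H_{\nu}(y)|^{2}$ tends to $2/\pi$ as $y\to\infty$, approaching the limit from below for $\nu < \frac{1}{2}$ and from above for $\nu > \frac{1}{2}$, consistent with the Hankel-function bounds used above. A quick check, with the grid and the orders $\nu$ chosen for illustration:

```python
import numpy as np
from scipy.special import hankel1

def y_H2(nu, y):
    # y * |H_nu(y)|^2, which tends to 2/pi as y -> infinity
    return y * np.abs(hankel1(nu, y))**2

ys = np.logspace(-3, 2, 400)
below = bool(np.all(y_H2(0.3, ys) <= 2.0 / np.pi))   # nu < 1/2: stays below 2/pi
above = bool(np.all(y_H2(0.8, ys) >= 2.0 / np.pi))   # nu > 1/2: stays above 2/pi
print(below, above)
```

At $\nu = \frac{1}{2}$ the two regimes meet: $|H_{1/2}(y)|^{2} = \frac{2}{\pi y}$ exactly, so $y|H_{1/2}(y)|^{2} \equiv 2/\pi$.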

Acknowledgements

We wish to thank the anonymous reviewers for their careful reading of our original submission and their useful comments. In particular, we thank Reviewer I for suggesting the direct verification of requirement (III) of Pollard [36, Theorem V.19] in Theorem 4.1. We would also like to thank Professor Nicolas Fournier for helpful discussions regarding Theorem 8.1.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

The authors declare no competing interests arising during the preparation or publication of this article.

Data

The code used for all the simulations in this paper can be found at https://github.com/marcostapiac/PyLevy.

References

Alspach, D. and Sorenson, H. (1972). Nonlinear Bayesian estimation using Gaussian sum approximations. IEEE Trans. Automatic Control 17, 439–448.
Asmussen, S. and Rosinski, J. (2001). Approximations of small jumps of Lévy processes with a view towards simulation. J. Appl. Prob. 38, 482–493.
Barndorff-Nielsen, O. and Shephard, N. (2001). Modelling by Lévy processes for financial econometrics. In Lévy Processes: Theory and Applications, eds O. Barndorff-Nielsen, S. Resnick and T. Mikosch, Birkhäuser, Boston, pp. 283–318.
Barndorff-Nielsen, O. E. and Shephard, N. (2001). Normal modified stable processes. Theory Prob. Math. Statist. 65, 1–19.
Barndorff-Nielsen, O. E. and Shephard, N. (2012). Basics of Lévy processes. Working paper. Available at http://ora.ox.ac.uk/objects/uuid:8787765a-1d95-45cd-97d8-930fe8816d97.
Bateman, H. and Erdélyi, A. (2006). Higher Transcendental Functions, Vol. 2. Dover, New York.
Black, F. and Scholes, M. (1973). The pricing of options and corporate liabilities. J. Political Econom. 81, 637–654.
Bondesson, L. (1982). On simulation from infinitely divisible distributions. Adv. Appl. Prob. 14, 855–869.
Box, G. and Muller, M. (1958). A note on the generation of random normal deviates. Ann. Math. Statist. 29, 610–611.
Cappé, O., Godsill, S. and Moulines, E. (2007). An overview of existing methods and recent advances in sequential Monte Carlo. Proc. IEEE 95, 899–924.
Carpentier, A., Duval, C. and Mariucci, E. (2021). Total variation distance for discretely observed Lévy processes: a Gaussian approximation of the small jumps. Ann. Inst. H. Poincaré Prob. Statist. 57, 901–939.
Cont, R. and Tankov, P. (2004). Financial Modelling with Jump Processes. CRC Press, New York.
Deligiannidis, G., Maurer, S. and Tretyakov, M. (2021). Random walk algorithm for the Dirichlet problem for parabolic integro-differential equation. BIT Numer. Math. 61, 1223–1269.
Dia, E. (2013). Error bounds for small jumps of Lévy processes. Adv. Appl. Prob. 45, 86–105.
Eberlein, E. (2001). Application of generalized hyperbolic Lévy motions to finance. In Lévy Processes: Theory and Applications, eds O. Barndorff-Nielsen, S. Resnick and T. Mikosch, Birkhäuser, Boston, pp. 319–336.
Eberlein, E. (2009). Jump-type Lévy processes. In Handbook of Financial Time Series, Springer, Berlin, Heidelberg, pp. 439–455.
El Adlouni, S., Chebana, F. and Bobée, B. (2010). Generalized extreme value versus Halphen system: exploratory study. J. Hydrolog. Eng. 15, 79–89.
Ferguson, T. and Klass, M. (1972). A representation of independent increment processes without Gaussian components. Ann. Math. Statist. 43, 1634–1643.
Fournier, N. (2011). Simulation and approximation of Lévy-driven stochastic differential equations. ESAIM Prob. Statist. 15, 233–248.
Freitas, P. (2018). Sharp bounds for the modulus and phase of Hankel functions with applications to Jaeger integrals. Math. Comput. 87, 289–308.
Gan, R., Ahmad, B. and Godsill, S. (2021). Lévy state-space models for tracking and intent prediction of highly maneuverable objects. IEEE Trans. Aerospace Electron. Systems 57, 2021–2038.
Godsill, S. and Kındap, Y. (2022). Point process simulation of generalised inverse Gaussian processes and estimation of the Jaeger integral. Statist. Comput. 32, article no. 13.
Godsill, S., Riabiz, M. and Kontoyiannis, I. (2019). The Lévy state space model. In 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 487–494.
Christensen, H. L., Murphy, J. and Godsill, S. J. (2012). Forecasting high-frequency futures returns using online Langevin dynamics. IEEE J. Sel. Topics Signal Process. 6, 366–380.
Iyengar, S. and Liao, Q. (1997). Modeling neural activity using the generalized inverse Gaussian distribution. Biol. Cybernet. 77, 289–295.
Kallenberg, O. (1997). Foundations of Modern Probability. Springer, Cham.
Kalman, R. (1960). A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 82, 35–45.
Kantas, N. et al. (2015). On particle methods for parameter estimation in state-space models. Statist. Sci. 30, 328–351.
Khintchine, A. (1937). Zur Theorie der unbeschränkt teilbaren Verteilungsgesetze. Mat. Sb. 2, 79–119.
Kohatsu-Higa, A. and Tankov, P. (2010). Jump-adapted discretization schemes for Lévy-driven SDEs. Stoch. Process. Appl. 120, 2258–2285.
Küchler, U. and Tappe, S. (2013). Tempered stable distributions and processes. Stoch. Process. Appl. 123, 4256–4293.
Kummer, E. (1837). De integralibus quibusdam definitis et seriebus infinitis. J. reine angew. Math. 17, 228–242.
Lemke, T. and Godsill, S. (2015). Inference for models with asymmetric $\alpha$-stable noise processes. In Unobserved Components and Time Series Econometrics, Oxford University Press, pp. 190–217.
Madan, D., Carr, P. and Chang, E. (1998). The variance gamma process and option pricing. Rev. Finance 2, 79–105.
Landis, M. J., Schraiber, J. and Liang, M. (2013). Phylogenetic analysis using Lévy processes: finding jumps in the evolution of continuous traits. Systematic Biol. 62, 193–204.
Pollard, D. (1984). Convergence of Stochastic Processes. Springer, New York.
Riabiz, M., Ardeshiri, T., Kontoyiannis, I. and Godsill, S. (2020). Non-asymptotic Gaussian approximation for inference with stable noise. IEEE Trans. Inf. Theory 66, 4966–4991.
Rosinski, J. (1991). On a class of infinitely divisible processes represented as mixtures of Gaussian processes. In Stable Processes and Related Topics: A Selection of Papers from the Mathematical Sciences Institute Workshop, eds S. Cambanis, G. Samorodnitsky and M. Taqqu, Birkhäuser, Boston, pp. 27–41.
Rosinski, J. (2001). Series representations of Lévy processes from the perspective of point processes. In Lévy Processes: Theory and Applications, eds O. E. Barndorff-Nielsen, S. I. Resnick and T. Mikosch, Birkhäuser, Boston, pp. 401–415.
Rosinski, J. (2007). Tempering stable processes. Stoch. Process. Appl. 117, 677–707.
Russo, T. et al. (2009). Lévy processes and stochastic von Bertalanffy models of growth, with application to fish population analysis. J. Theoret. Biol. 258, 521–529.
Samorodnitsky, G. and Taqqu, M. (2017). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Routledge, New York.
Udoye, A. and Ekhaguere, G. (2022). Sensitivity analysis of interest rate derivatives in a variance gamma markets. Palestine J. Math. 11, 159–176.
Bally, V. and Qin, Y. (2022). Total variation distance between a jump-equation and its Gaussian approximation. Stoch. Partial Differential Equat. 10, 1211–1260.
Watson, G. (1944). A Treatise on the Theory of Bessel Functions, 2nd edn. Cambridge University Press.
Winkel, M. (2010). MS3b/MScMCF: Lévy processes and finance. Unpublished manuscript. Available at www.stats.ox.ac.uk/winkel/ms3b10.pdf.
Wolpert, R. and Ickstadt, K. (1998). Simulation of Lévy random fields. In Practical Nonparametric and Semiparametric Bayesian Statistics, eds D. Dey, P. Müller and D. Sinha, Springer, New York, pp. 227–242.
Woodard, D., Wolpert, R. and O’Connell, M. (2010). Spatial inference of nitrate concentrates in groundwater. J. Agric. Biol. Environm. Statist. 15, 209–227.
Figure 1. Left: ten sample paths from a truncated gamma process. Right: histogram of $N=10^5$ process values at $t=1$. Both are generated with $\epsilon = 10^{-10}$. The solid line is the true density of the original process at time $t=1$.

Figure 2. Left: ten sample paths from a truncated tempered stable process. Right: Q–Q plot of $N=10^5$ truncated process values at $t=1$ versus $N=10^5$ samples from the true distribution of the process at $t=1$. Both are generated with $\epsilon = 10^{-10}$.

Figure 3. Left: ten sample paths from a truncated GIG process. Right: Q–Q plot of $N=10^4$ truncated process values at $t=1$ versus $N=10^4$ samples from the true distribution of the process at $t=1$. Both are generated with $\epsilon = 10^{-6}$.

Figure 4. Histogram of $M=10^5$ NG residual path values at $t=1$. The smooth curve represents the standard normal density.

Figure 5. Q–Q plot of $M=10^4$ NG residual path values at $t=1$.

Figure 6. Histogram of $M = 10^5$ NTS residual path values at $t=1$. The smooth curve represents the standard normal density.

Figure 7. Q–Q plot of $M = 10^4$ NTS residual path values at $t=1$.

Figure 8. Plot of the finite-$\epsilon$ bound in (6.1) and the first term in the asymptotic bound (6.2) for the approximation error $E_\epsilon$ in Lemma 6.1.

Figure 9. Histogram of $M = 10^5$ GH residual path values at $t=1$. The smooth curve represents the standard normal density.

Figure 10. Q–Q plot of $M = 10^4$ GH residual path values at $t=1$.

Figure 11. Plot of the finite-$\epsilon$ bound in (6.3) and the first term in the asymptotic bound (6.4) for the approximation error $E_\epsilon$ in Lemma 6.2.