1. Introduction
Motivated by measuring risks under model uncertainty, [Reference Peng31–Reference Peng33, Reference Peng36] introduced the notion of a sublinear expectation space, called a G-expectation space. The G-expectation theory has been widely used to evaluate random outcomes, not using a single probability measure, but using the supremum over a family of possibly mutually singular probability measures. One of the fundamental results in this theory is the robust central limit theorem introduced in [Reference Peng34, Reference Peng36]. The corresponding convergence rate was an open problem until recently. The first convergence rate was established in [Reference Fang, Peng, Shao and Song14, Reference Song37] using Stein’s method and later in [Reference Krylov28] using a stochastic control method under different model assumptions. More recently, [Reference Huang and Liang20] studied the convergence rate of a more general central limit theorem via a monotone approximation scheme for the G-equation.
On the other hand, nonlinear Lévy processes have been studied in [Reference Hu and Peng19, Reference Neufeld and Nutz29]. For $\alpha\in(1,2)$ , they considered a nonlinear $\alpha$ -stable Lévy process $(X_{t})_{t\geq0}$ defined on a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ , whose local characteristics are described by a set of Lévy triplets $\Theta=\{(0,0,F_{k_{\pm}})\colon k_{\pm}\in K_{\pm}\}$ , where $K_{\pm}\subset(\lambda_{1},\lambda_{2})$ for some $\lambda_{1},\lambda_{2}\geq0$ , and $F_{k_{\pm}}(\textrm{d} z)$ is the $\alpha$ -stable Lévy measure
Such a nonlinear $\alpha$ -stable Lévy process can be characterized via a fully nonlinear partial integro-differential equation (PIDE). For any $\phi \in C_{\textrm{b,Lip}}(\mathbb{R})$ , [Reference Neufeld and Nutz29] proved the representation result $u(t,x)\;:\!=\;\hat{\mathbb{E}}[\phi(x+X_{t})]$ , $(t,x)\in \lbrack 0,T]\times \mathbb{R}$ , where u is the unique viscosity solution of the fully nonlinear PIDE
with $\delta_{z}u(t,x)\;:\!=\;u(t,x+z)-u(t,x)-D_{x}u(t,x)z$ . In contrast to the fully nonlinear PIDEs studied in the partial differential equation (PDE) literature, (1) is driven by a family of $\alpha$ -stable Lévy measures rather than a single Lévy measure. Moreover, since $F_{k_{\pm}}(\textrm{d} z)$ possesses a singularity at the origin, the integral term degenerates and (1) is a degenerate equation.
The corresponding generalized central limit theorem for $\alpha$-stable random variables under sublinear expectation was established in [Reference Bayraktar and Munk6]. For this, let $(\xi_{i})_{i=1}^{\infty}$ be a sequence of independent and identically distributed (i.i.d.) $\mathbb{R}$-valued random variables on a sublinear expectation space $(\Omega,\mathcal{H},\tilde{\mathbb{E}})$. After proper normalization, [Reference Bayraktar and Munk6] showed that
for any $\phi\in C_{\textrm{b,Lip}}(\mathbb{R})$ . We refer to the above convergence result as the $\alpha$ -stable central limit theorem under sublinear expectation.
Noting that $\hat{\mathbb{E}}[\phi(X_{1})]=u(1,0)$, where u is the viscosity solution of (1), in this work we study the rate of convergence for the $\alpha$-stable central limit theorem under sublinear expectation via the numerical analysis method for the nonlinear PIDE (1). To do this, we first construct a sublinear expectation space $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ and introduce a random variable $\xi$ on this space. For given $T>0$ and $\Delta \in(0,1)$, using the random variable $\xi$ under $\tilde{\mathbb{E}}$ as input, we define a discrete scheme $u_{\Delta}\colon[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$ to approximate u by
Taking $T=1$ and $\Delta={1}/{n}$ , we can recursively apply the above scheme to obtain
In this way, the convergence rate of the $\alpha$ -stable central limit theorem is transformed into the convergence rate of the discrete scheme (2) for approximating the nonlinear PIDE (1).
The basic framework for convergence of numerical schemes to viscosity solutions of Hamilton–Jacobi–Bellman equations was established in [Reference Barles and Souganidis5], which showed that any monotone, stable, and consistent approximation scheme converges to the correct solution, provided that there exists a comparison principle for the limiting equation. The corresponding convergence rate was first obtained with the introduction of the shaking coefficients technique to construct a sequence of smooth subsolutions/supersolutions in [Reference Krylov25–Reference Krylov27]. This technique was further developed to general monotone approximation schemes (see [Reference Barles and Jakobsen2–Reference Barles and Jakobsen4] and references therein).
The design and analysis of numerical schemes for nonlinear PIDEs is a relatively new area of research. For nonlinear degenerate PIDEs driven by a family of $\alpha$-stable Lévy measures, there are no general results giving error bounds for numerical schemes. Most of the existing results in the PDE literature deal only with a single Lévy measure and the associated finite difference methods, e.g. [Reference Biswas, Chowdhury and Jakobsen7–Reference Biswas, Jakobsen and Karlsen9, Reference Jakobsen, Karlsen and La Chioma22]. One exception is [Reference Coclite, Reichmann and Risebro12], which treats a nonlinear PIDE driven by a set of tempered $\alpha$-stable Lévy measures for $\alpha \in(0,1)$ using the finite difference method.
To derive the error bounds for the discrete scheme (2), the key step is to interchange the roles of the discrete scheme and the original equation when the approximate solution has enough regularity. The classical regularity estimates of the approximate solution depend on the finite variance of random variables. Since $\xi$ has infinite variance, the method developed in [Reference Krylov28] cannot be applied to $u_{\Delta}$ . To overcome this difficulty, by introducing a truncated discrete scheme $u_{\Delta,N}$ related to a truncated random variable $\xi^{N}$ with finite variance, we construct a new type of regularity estimate of $u_{\Delta,N}$ that plays a pivotal role in establishing the space and time regularity properties for $u_{\Delta}$ . With the help of a precise estimate of the truncation $\tilde{\mathbb{E}}[|\xi-\xi^{N}|]$ , a novel estimate for $|u_{\Delta}-u_{\Delta,N}|$ is obtained. By choosing a proper N, we then establish the regularity estimates for $u_{\Delta}$ . Together with the concavity of (1) and (2), and the regularity estimates of their solutions, we are able to interchange their roles and thus derive the error bounds for the discrete scheme. To the best of our knowledge, these are the first error bounds for numerical schemes for fully nonlinear PIDEs associated with a family of $\alpha$ -stable Lévy measures, which in turn provide a nontrivial convergence rate result for the $\alpha$ -stable central limit theorem under sublinear expectation.
On the other hand, the classical probability literature mainly deals with $\Theta$ as a singleton, so that $(X_{t})_{t\geq0}$ becomes a classical Lévy process with triplet $\Theta$, and $X_{1}$ is an $\alpha$-stable random variable. The corresponding convergence rate of the classical $\alpha$-stable central limit theorem (with $\Theta$ a singleton) has been studied in the Kolmogorov distance (see, e.g., [Reference Davydov and Nagaev13, Reference Hall15–Reference Häusler and Luschgy17, Reference Ibragimov and Linnik21, Reference Juozulynas and Paulauskas24]) and in the Wasserstein-1 distance or the smooth Wasserstein distance (see, e.g., [Reference Arras, Mijoule, Poly and Swan1, Reference Chen, Goldstein and Shao10, Reference Chen and Xu11, Reference Jin, Li and Lu23, Reference Nourdin and Peccati30, Reference Xu38]). The first approach is based on characteristic functions, which do not exist in the sublinear framework, while the second relies on Stein's method, which also fails in the sublinear setting.
The rest of the paper is organized as follows. In Section 2, we review some necessary results about sublinear expectation and $\alpha$ -stable Lévy processes. In Section 3, we list the assumptions and our main results, the convergence rate of $\alpha$ -stable random variables under sublinear expectation. We present two examples to illustrate our results in Section 4. Finally, by using the monotone scheme method, the proof of our main result is given in Section 5.
2. Preliminaries
In this section, we recall some basic results of sublinear expectation and $\alpha$ -stable Lévy processes that are needed in the sequel. For more details, we refer the reader to [Reference Bayraktar and Munk6, Reference Neufeld and Nutz29, Reference Peng32, Reference Peng36] and the references therein.
We start with some notation. Let $C_\textrm{Lip}(\mathbb{R}^{n})$ be the space of Lipschitz functions on $\mathbb{R}^{n}$, and $C_\textrm{b,Lip}(\mathbb{R}^{n})$ be the space of bounded Lipschitz functions on $\mathbb{R}^{n}$. For any subset $Q\subset \lbrack0,T]\times \mathbb{R}$ and any bounded function $\omega$ on Q, we define the norm $|\omega|_{0}\;:\!=\;\sup_{(t,x)\in Q}|\omega(t,x)|$. We also use the following spaces: $C_\textrm{b}(Q)$ and $C_\textrm{b}^{\infty}(Q)$, denoting, respectively, the space of bounded continuous functions on Q and the space of bounded continuous functions on Q with bounded derivatives of any order. For the rest of this paper we take a nonnegative function $\zeta \in C^{\infty}(\mathbb{R}^{2})$ with unit integral and support in $\{(t,x)\colon-1<t<0,\,|x|<1\}$ and, for $\varepsilon \in(0,1)$, let $\zeta_{\varepsilon}(t,x)=\varepsilon^{-3}\zeta(t/\varepsilon^{2},x/\varepsilon)$.
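The parabolic scaling in $\zeta_{\varepsilon}$ is mass-preserving: substituting $t=\varepsilon^{2}s$ and $x=\varepsilon y$ gives $\int\zeta_{\varepsilon}=\varepsilon^{-3}\cdot\varepsilon^{2}\cdot\varepsilon\int\zeta=1$, while the support shrinks to $\{-\varepsilon^{2}<t<0,\,|x|<\varepsilon\}$. A quick numerical sanity check (not part of the paper's argument), using an illustrative bump function for $\zeta$; any nonnegative $C^{\infty}$ function with the stated support works:

```python
import math

def bump(u):
    # one-dimensional C^infinity bump supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

def zeta_unnorm(t, x):
    # supported in {-1 < t < 0, |x| < 1}; 2t + 1 maps (-1, 0) onto (-1, 1)
    return bump(2.0 * t + 1.0) * bump(x)

def integral_2d(f, t0, t1, x0, x1, n=200):
    # midpoint rule on an n x n grid
    ht, hx = (t1 - t0) / n, (x1 - x0) / n
    return sum(f(t0 + (i + 0.5) * ht, x0 + (j + 0.5) * hx)
               for i in range(n) for j in range(n)) * ht * hx

Z = integral_2d(zeta_unnorm, -1.0, 0.0, -1.0, 1.0)  # normalizing constant

def zeta_eps(t, x, eps):
    # parabolic rescaling: eps^{-3} * zeta(t / eps^2, x / eps)
    return zeta_unnorm(t / eps ** 2, x / eps) / (Z * eps ** 3)

for eps in (0.5, 0.25):
    mass = integral_2d(lambda t, x: zeta_eps(t, x, eps),
                       -eps ** 2, 0.0, -eps, eps)
    assert abs(mass - 1.0) < 1e-9  # unit mass is preserved under the scaling
```

The midpoint grid on the scaled support maps exactly onto the grid of the unscaled integral, so the check reproduces the change-of-variables argument up to floating-point rounding.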
2.1. Sublinear expectation
Let $\mathcal{H}$ be a linear space of real-valued functions defined on a set $\Omega$ such that if $X_{1},\ldots,X_{n}\in\mathcal{H}$ , then $\varphi(X_{1},\ldots,X_{n})\in \mathcal{H}$ for each $\varphi \in C_{\textrm{Lip}}(\mathbb{R}^{n})$ .
Definition 1. A functional $\hat{\mathbb{E}}\colon\mathcal{H}\rightarrow\mathbb{R}$ is called a sublinear expectation if, for all $X,Y \in \mathcal{H}$ , it satisfies the following properties:
(i) Monotonicity: if $X\geq Y$, then $\hat{\mathbb{E}}[X] \geq \hat{\mathbb{E}}[Y]$.
(ii) Constant preservation: $\hat{\mathbb{E}}[c]=c$ for any $c\in \mathbb{R}$.
(iii) Subadditivity: $\hat{\mathbb{E}}[X+Y] \leq \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$.
(iv) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda\hat{\mathbb{E}}[X]$ for each $\lambda>0$.
The triplet $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sublinear expectation space. From the definition of the sublinear expectation $\hat{\mathbb{E}}$, the following results can be easily obtained.
Proposition 1. For $X,Y \in \mathcal{H}$ ,
(i) if $\hat{\mathbb{E}}[X] = -\hat{\mathbb{E}}[-X]$, then $\hat{\mathbb{E}}[X+Y] = \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$;
(ii) $|\hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[Y]| \leq \hat{\mathbb{E}}[\vert X-Y\vert]$;
(iii) $\hat{\mathbb{E}}[\vert XY\vert] \leq (\hat{\mathbb{E}}[\vert X\vert^{p}])^{1/p} \cdot (\hat{\mathbb{E}}[\vert Y\vert^{q}])^{1/q}$ for $1< p,q<\infty$ with ${1}/{p}+{1}/{q}=1$.
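Properties (i)–(iv) and Proposition 1 are easy to verify numerically on a toy model: a sublinear expectation generated by maximizing linear expectations over a small family of probability measures on a finite sample space. The two measures below are illustrative choices, not taken from the paper.

```python
OMEGA = [0, 1, 2]                 # a three-point sample space
MEASURES = [                      # the "uncertainty set" of probability measures
    [0.5, 0.3, 0.2],
    [0.2, 0.2, 0.6],
]

def E_hat(X):
    """Sublinear expectation: supremum of linear expectations over MEASURES."""
    return max(sum(p * X[w] for w, p in zip(OMEGA, P)) for P in MEASURES)

X = [1.0, -2.0, 3.0]              # random variables as value tables on OMEGA
Y = [0.5, 1.0, -1.0]

# Definition 1(iii), subadditivity:
assert E_hat([x + y for x, y in zip(X, Y)]) <= E_hat(X) + E_hat(Y) + 1e-12
# Definition 1(iv), positive homogeneity:
assert abs(E_hat([2.5 * x for x in X]) - 2.5 * E_hat(X)) < 1e-12
# Proposition 1(ii):
assert abs(E_hat(X) - E_hat(Y)) <= E_hat([abs(x - y) for x, y in zip(X, Y)]) + 1e-12
```

Note that $\hat{\mathbb{E}}$ is genuinely nonlinear here: the maximizing measure depends on the random variable being evaluated.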
Definition 2. Let $X_{1}$ and $X_{2}$ be two n-dimensional random vectors defined respectively in sublinear expectation spaces $(\Omega_{1},\mathcal{H}_{1},\hat{\mathbb{E}}_{1})$ and $(\Omega_{2},\mathcal{H}_{2},\hat{\mathbb{E}}_{2})$ . They are called identically distributed, denoted by $X_{1}\overset{\textrm{d}}{=}X_{2}$ , if $\hat{\mathbb{E}}_{1}[\varphi(X_{1})] = \hat{\mathbb{E}}_{2}[\varphi(X_{2})]$ for all $\varphi \in C_\textrm{Lip}(\mathbb{R}^{n})$ .
Definition 3. In a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, a random vector $Y=(Y_{1},\ldots,Y_{n}) \in \mathcal{H}^{n}$ is said to be independent of another random vector $X=(X_{1},\ldots,X_{m})\in\mathcal{H}^{m}$ under $\hat{\mathbb{E}}[\!\cdot\!]$, denoted by $Y\perp X$, if, for every test function $\varphi \in C_\textrm{Lip}(\mathbb{R}^{m}\times \mathbb{R}^{n})$, $\hat{\mathbb{E}}[\varphi(X,Y)] = \hat{\mathbb{E}}[\hat{\mathbb{E}}[\varphi(x,Y)]_{x=X}]$. $\bar{X}=(\bar{X}_{1},\ldots,\bar{X}_{m})\in \mathcal{H}^{m}$ is said to be an independent copy of X if $\bar{X}\overset{\textrm{d}}{=}X$ and $\bar{X}\perp X$.
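The nested formula makes independence order-dependent in general, a well-known feature of sublinear expectation that contrasts with the classical case. A finite-support sketch (with illustrative two-point laws, not taken from the paper): X has a single symmetric law, Y carries variance uncertainty, and evaluating $\varphi(x,y)=xy^{2}$ with "Y independent of X" versus the roles reversed gives different values.

```python
# X: a single symmetric two-point law (no uncertainty).
DIST_X = [[(-1.0, 0.5), (1.0, 0.5)]]
# Y: variance uncertainty, two symmetric laws with E[Y^2] = 1 or 4.
DIST_Y = [[(-1.0, 0.5), (1.0, 0.5)], [(-2.0, 0.5), (2.0, 0.5)]]

def sublinear(dists, f):
    """sup over the family of measures of the linear expectation of f."""
    return max(sum(p * f(z) for z, p in d) for d in dists)

def phi(x, y):
    return x * y * y

# "Y independent of X": inner expectation over Y, outer over X.
val_Y_indep_X = sublinear(DIST_X, lambda x: sublinear(DIST_Y, lambda y: phi(x, y)))
# Roles reversed ("X independent of Y"):
val_X_indep_Y = sublinear(DIST_Y, lambda y: sublinear(DIST_X, lambda x: phi(x, y)))

print(val_Y_indep_X, val_X_indep_Y)  # prints: 1.5 0.0
```

In the first order the inner supremum can pick a different law for Y depending on the sign of x, which is why the two values disagree.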
More details concerning general sublinear expectation spaces can be found in [Reference Peng33, Reference Peng36] and references therein.
2.2. $\alpha$ -stable Lévy process
Definition 4. Let $\alpha\in(0,2]$. A random variable X on a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is said to be (strictly) $\alpha$-stable if, for all $a,b\geq0$, $aX+bY\overset{\textrm{d}}{=}(a^{\alpha}+b^{\alpha})^{1/\alpha}X$, where Y is an independent copy of X.
Remark 1. For $\alpha=1$ , X is the maximal random variable discussed in [Reference Hu and Li18, Reference Peng34, Reference Peng36]. When $\alpha=2$ , X becomes the G-normal random variable introduced in [Reference Peng35, Reference Peng36]. In this paper, we shall focus on the case of $\alpha\in(1,2)$ and consider X for a nonlinear $\alpha$ -stable Lévy process $(X_{t})_{t\geq0}$ in the framework of [Reference Neufeld and Nutz29].
Let $\alpha \in(1,2)$ , $K_{\pm}$ be a bounded measurable subset of $\mathbb{R}_{+}$ , and $F_{k_{\pm}}$ be the $\alpha$ -stable Lévy measure
for all $k_{-},k_{+}\in K_{\pm}$, and denote by $\Theta\;:\!=\;\{(0,0,F_{k_{\pm}})\colon k_{\pm}\in K_{\pm}\}$ the set of Lévy triplets. From [Reference Neufeld and Nutz29, Theorem 2.1], we can define a nonlinear $\alpha$-stable Lévy process $(X_{t})_{t\geq0}$ with respect to a sublinear expectation $\hat{\mathbb{E}}[\!\cdot\!] = \sup_{P\in\mathfrak{B}_{\Theta}}E^{P}[\!\cdot\!]$, where $E^{P}$ is the usual expectation under the probability measure P, and $\mathfrak{B}_{\Theta}$ is the set of all probability measures under which the canonical process is a semimartingale with $\Theta$-valued differential characteristics. This implies the following:
• $(X_{t})_{t\geq0}$ is a real-valued càdlàg process and $X_{0}=0$.
• $(X_{t})_{t\geq0}$ has stationary increments, i.e. $X_{t}-X_{s}$ and $X_{t-s}$ are identically distributed for all $0\leq s\leq t$.
• $(X_{t})_{t\geq0}$ has independent increments, i.e. $X_{t}-X_{s}$ is independent of $(X_{s_{1}},\ldots,X_{s_{n}})$ for each $n\in\mathbb{N}$ and $0\leq s_{1}\leq \cdots \leq s_{n}\leq s$.
We now present some basic lemmas for the $\alpha$ -stable Lévy process $(X_{t})_{t\geq0}$ . We refer to [Reference Bayraktar and Munk6, Lemmas 2.6–2.9] and [Reference Neufeld and Nutz29, Lemmas 5.1–5.3] for the details of the proofs.
Lemma 1. $\hat{\mathbb{E}}[|X_{1}|]<\infty$ .
Lemma 2. For all $\lambda>0$ and $t\geq0$ , $X_{\lambda t}$ and $\lambda^{1/\alpha}X_{t}$ are identically distributed.
Lemma 3. Suppose that $\phi\in C_\textrm{b,Lip}(\mathbb{R})$ . Then, for any $(t,x)\in[0,T]\times\mathbb{R}$ , $u(t,x)=\hat{\mathbb{E}}[\phi(x+X_{t})]$ is the unique viscosity solution of the fully nonlinear PIDE
with $\delta_{z}u(t,x)\;:\!=\;u(t,x+z)-u(t,x)-D_{x}u(t,x)z$ . Moreover, for any $0\leq s\leq t\leq T$ , $u(t,x)=\hat{\mathbb{E}}[u(t-s,x+X_{s})]$ .
Lemma 4. Suppose that $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ . Then the function u is uniformly bounded by $|\phi|_{0}$ and jointly continuous. More precisely, for any $t,s\in[0,T]$ and $x,y\in\mathbb{R}$ ,
where $C_{\phi,\mathcal{K}}$ is a constant depending only on the Lipschitz constant of $\phi$ and
3. Main results
First, we construct a sublinear expectation space and introduce random variables on it. For each $k_{\pm}\in K_{\pm}\subset(\lambda_{1},\lambda_{2})$ for some $\lambda_{1},\lambda_{2}\geq0$ , let $W_{k_{\pm}}$ be a classical mean-zero random variable with a cumulative distribution function (CDF)
for some functions $\beta_{1,k_{\pm}}\colon(\!-\infty,0] \rightarrow \mathbb{R}$ and $\beta_{2,k_{\pm}}\colon[0,\infty)\rightarrow \mathbb{R}$ such that $\lim_{z\rightarrow-\infty}\beta_{1,k_{\pm}}(z)=\lim_{z\rightarrow \infty}\beta_{2,k_{\pm}}(z)=0$ . Define a sublinear expectation $\tilde{\mathbb{E}}$ on $C_\textrm{Lip}(\mathbb{R})$ by
for all $\varphi \in C_\textrm{Lip}(\mathbb{R})$. Clearly, $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ is a sublinear expectation space. Let $\xi$ be a random variable on this space given by $\xi(z)=z$ for all $z\in \mathbb{R}$. Since $W_{k_{\pm}}$ has mean zero, this yields $\tilde{\mathbb{E}}[\xi]=\tilde{\mathbb{E}}[-\xi]=0$.
We need the following assumptions, which are motivated by [Reference Bayraktar and Munk6, Example 4.2].
Assumption 1. For each $k_{\pm}\in K_{\pm}$ , $\beta_{1,k_{\pm}}$ and $\beta_{2,k_{\pm}}$ are continuously differentiable functions in (4) satisfying $\int_{\mathbb{R}}z\,\textrm{d} F_{W_{k_{\pm}}}(z)=0$ .
Assumption 2. There exists a constant $M>0$ such that, for any $k_{\pm}\in K_{\pm}$ , the following quantities are less than M:
Assumption 3. There exists a constant $q>0$ such that, for any $k_{\pm}\in K_{\pm}$ and $\Delta \in(0,1)$ , the following quantities are less than $C\Delta^{q}$ :
where $C>0$ is a constant.
Remark 2. Note that, under Assumption 1 alone, the terms in Assumption 2 are finite and the terms in Assumption 3 tend to zero as $\Delta\rightarrow0$ for each fixed $k_{\pm}$. In other words, the content of Assumptions 2 and 3 is the uniformity in $k_{\pm}$ of these bounds and the existence of a minimal convergence rate.
Remark 3. By (4), we can write $\beta_{1,k_{\pm}}$ and $\beta_{2,k_{\pm}}$ as
Under Assumption 1, it can be checked that, for any $k_{\pm}\in K_{\pm}$, the following quantities are uniformly bounded (without loss of generality, we take the uniform bound to be the same constant M):
Remark 4. Under Assumptions 1 and 2, it is easy to check that
where $\{P_{k_{\pm}}, k_{\pm}\in K_{\pm}\}$ is the set of probability measures related to uncertainty distributions $\{F_{W_{k\pm}},k_{\pm}\in K_{\pm}\}$ . It follows that
Similarly,
Let $(\xi_{i})_{i=1}^{\infty}$ be a sequence of i.i.d. $\mathbb{R}$ -valued random variables defined on $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ in the sense that $\xi_{1}=\xi$ , $\xi_{i+1}\overset{\textrm{d}}{=}\xi_{i}$ , and $\xi_{i+1}\perp(\xi_{1},\xi_{2},\ldots,\xi_{i})$ for each $i\in \mathbb{N}$ ; we write
Now we state our first main result.
Theorem 1. Suppose that Assumptions 1–3 hold. Let $(\bar{S}_{n})_{n=1}^{\infty}$ be a sequence as defined in (5), and $(X_{t})_{t\geq0}$ be a nonlinear $\alpha$ -stable Lévy process with the characteristic set $\Theta$ . Then, for any $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ , $\big|\tilde{\mathbb{E}}[\phi(\bar{S}_{n})] - \hat{\mathbb{E}}[\phi(X_{1})]\big| \leq C_{0}n^{-\Gamma(\alpha,q)}$ , where
with $q>0$ given in Assumption 3, and $C_{0}$ is a constant depending on the Lipschitz constant of $\phi$ , which is given in Theorem 2.
Remark 5. The classical $\alpha$ -stable central limit theorem (see, e.g., [Reference Ibragimov and Linnik21, Theorem 2.6.7]) states that for a classical mean-zero random variable $\xi_{1}$ , the sequence $\bar{S}_{n}$ converges in law to $X_{1}$ as $n\rightarrow \infty$ if and only if the CDF of $\xi$ has the form given in (4), where $(X_{t})_{t\geq0}$ is a classical Lévy process with triplet $(0,0,F_{k_{\pm}})$ . In the framework of sublinear expectation, sufficient conditions for the $\alpha$ -stable central limit theorem are given in [Reference Bayraktar and Munk6], which showed that, for a mean-zero random variable $\xi_{1}$ under the sublinear expectation $\tilde{\mathbb{E}}$ defined above, $\bar{S}_{n}$ converges in law to $X_{1}$ as $n\rightarrow\infty$ , where $(X_{t})_{t\geq0}$ is a nonlinear Lévy process with triplet set $\Theta$ . Here, Theorem 1 further provides an explicit convergence rate of the limit theorem in [Reference Bayraktar and Munk6], which can be seen as a special $\alpha$ -stable central limit theorem under the sublinear expectation.
Remark 6. Assumptions 1–3 are sufficient conditions for [Reference Bayraktar and Munk6, Theorem 3.1]. Indeed, by [Reference Bayraktar and Munk6, Proposition 2.10], we know that for any $0<h<1$ , $u\in C_\textrm{b}^{1,2}([h,1+h]\times\mathbb{R})$ . Under Assumptions 1–3, by using II from (19) we get, for any $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ and $0<h<1$ ,
uniformly on $[0,1]\times \mathbb{R}$ as $n\rightarrow \infty$ , where v is the unique viscosity solution of
In addition, the necessary conditions for the $\alpha$ -stable central limit theorem under sublinear expectation are still unknown.
4. Two examples
In this section we give two examples to illustrate our results.
Example 1. Let $(\xi_{i})_{i=1}^{\infty}$ be a sequence of i.i.d. $\mathbb{R}$ -valued random variables defined on $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ with CDF (4) satisfying $\beta_{1,k_{\pm}}(z)=0$ for $z\leq-1$ and $\beta_{2,k_{\pm}}(z)=0$ for $z\geq1$ with $\lambda_{2}<{\alpha}/{2}$ . The exact expressions for $\beta_{1,k_{\pm}}(z)$ and $\beta_{2,k_{\pm}}(z)$ for $0<|z|<1$ are not specified here, but we require $\beta_{1,k_{\pm}}(z)$ and $\beta_{2,k_{\pm}}(z)$ to satisfy Assumption 1. It is clear that Assumption 2 holds. In addition, for each $k_{\pm}\in K_{\pm}$ and $\Delta\in(0,1)$ ,
where $c\;:\!=\;\sup_{z\in(0,1)}|\beta_{2,k_{\pm}}(z)|<\infty$ , and similarly for the negative half-line. This indicates that Assumption 3 holds with $q=({2-\alpha})/{\alpha}$ . According to Theorem 1, we get the convergence rate $\big|\tilde{\mathbb{E}}[\phi(\bar{S}_{n})]-\hat{\mathbb{E}}[\phi(X_{1})]\big|\leq C_{0}n^{-\Gamma(\alpha)}$ , where
Example 2. Let $(\xi_{i})_{i=1}^{\infty}$ be a sequence of i.i.d. $\mathbb{R}$-valued random variables defined on $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ with CDF (4) satisfying $\beta_{1,k_{\pm}}(z)=a_{1}|z|^{\alpha-\beta}$ for $z\leq-1$ and $\beta_{2,k_{\pm}}(z)= a_{2}z^{\alpha-\beta}$ for $z\geq1$, with $\beta>\alpha$ and suitable constants $a_{1}, a_{2}$. The exact expressions for $\beta_{1,k_{\pm}}(z)$ and $\beta_{2,k_{\pm}}(z)$ for $0<|z|<1$ are not specified here, but we require that $\beta_{1,k_{\pm}}(z)$ and $\beta_{2,k_{\pm}}(z)$ satisfy Assumption 1. For simplicity, we only check the integral along the positive half-line; the negative half-line case is similar. Observe that
which shows that Assumption 2 holds. Also, it can be verified that, for each $k_{\pm}\in K_{\pm}$ and $\Delta \in(0,1)$ ,
where $c=\sup_{z\in(0,1)}|\beta_{2,k_{\pm}}(z)|<\infty$ . We further distinguish three cases based on the value of $\beta$ .
If $\beta=2$ ,
where $C=({c}/({2-\alpha}))+a_{2}$ , for any small $\varepsilon>0$ .
If $\alpha<\beta<2$ ,
where
If $\beta>2$ , it follows that
where
Then, Assumption 3 holds with
for any small $\varepsilon>0$ . From Theorem 1, we immediately obtain $\big|\tilde{\mathbb{E}}[\phi(\bar{S}_{n})]-\hat{\mathbb{E}}[\phi(X_{1})]\big|\leq C_{0}n^{-\Gamma(\alpha,\beta)}$ , where
with $\varepsilon>0$ .
5. Proof of Theorem 1: Monotone scheme method
In this section we introduce numerical analysis tools for nonlinear partial differential equations to prove Theorem 1. Noting that $\hat{\mathbb{E}}[\phi(X_{1})]=u(1,0)$, where u is the viscosity solution of (3), we propose a discrete scheme to approximate u by merely using the random variable $\xi$ under $\tilde{\mathbb{E}}$ as input. For given $T>0$ and $\Delta \in(0,1)$, define $u_{\Delta}\colon[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$ recursively by
From the above recursive process, we can see that, for each $x\in \mathbb{R}$ and $n\in \mathbb{N}$ such that $n\Delta \leq T$ , $u_{\Delta}(\cdot,x)$ is a constant on the interval $[n\Delta,(n+1)\Delta \wedge T)$ , that is, $u_{\Delta}(t,x)=u_{\Delta}(n\Delta,x)$ for all $t\in \lbrack n\Delta,(n+1)\Delta \wedge T)$ .
By induction (see [Reference Huang and Liang20, Theorem 2.1]), we can derive that, for all $n\in \mathbb{N}$ such that $n\Delta \leq T$ and $x\in \mathbb{R}$ ,
In particular, taking $T=1$ and $\Delta={1}/{n}$, we have $u_{\Delta}(1,0)=\tilde{\mathbb{E}}[\phi(\bar{S}_{n})]$, and Theorem 1 follows from the following result.
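The identity $u_{\Delta}(1,0)=\tilde{\mathbb{E}}[\phi(\bar{S}_{n})]$ can be illustrated on a toy model. The sketch below assumes a one-step recursion of the form $u_{\Delta}(t,x)=\tilde{\mathbb{E}}[u_{\Delta}(t-\Delta,x+\Delta^{1/\alpha}\xi)]$ (consistent with the scaling above; the displayed scheme (6) gives the exact form) and replaces $\xi$ by an illustrative pair of discrete mean-zero laws. Iterating the recursion coincides with the supremum over history-dependent (adapted) measure selections, which is exactly how the nested independence of Definition 3 evaluates $\tilde{\mathbb{E}}[\phi(\bar{S}_{n})]$; a supremum over measure sequences fixed in advance can only be smaller.

```python
import math
from itertools import product

ALPHA = 1.5
N = 3                                   # number of steps; Delta = 1/N
SCALE = (1.0 / N) ** (1.0 / ALPHA)      # Delta^{1/alpha}

# Two illustrative mean-zero two-point laws for xi (distribution uncertainty).
MEASURES = [[(-1.0, 0.5), (1.0, 0.5)], [(-3.0, 0.5), (3.0, 0.5)]]

def phi(x):
    return min(abs(x), 1.0)             # a bounded Lipschitz test function

def u(k, x):
    """k remaining steps of the recursion u <- sup_m E_m[u(x + SCALE * xi)]."""
    if k == 0:
        return phi(x)
    return max(sum(p * u(k - 1, x + SCALE * z) for z, p in m) for m in MEASURES)

def static_sup(x):
    """Benchmark: sup over measure sequences fixed in advance (non-adaptive)."""
    best = float("-inf")
    for seq in product(MEASURES, repeat=N):        # one measure per step
        val = sum(
            math.prod(p for _, p in path) * phi(x + SCALE * sum(z for z, _ in path))
            for path in product(*seq)              # all outcome paths
        )
        best = max(best, val)
    return best

value = u(N, 0.0)                       # the sublinear value of phi(S_bar_n)
assert static_sup(0.0) <= value + 1e-12 # adaptive sup dominates the static one
assert value <= 1.0 + 1e-12             # bounded by |phi|_0
```

The dominance of the adaptive value reflects the mutually singular measures underlying $\tilde{\mathbb{E}}$: the worst-case law of each increment may depend on the path so far.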
Theorem 2. Suppose that Assumptions 1–3 hold, and $\phi\in C_\textrm{b,Lip}(\mathbb{R})$ . Then, for any $(t,x) \in \lbrack0,T]\times\mathbb{R}$ , $\vert u(t,x) - u_{\Delta}(t,x)\vert \leq C_{0}\Delta^{\Gamma(\alpha,q)}$ , where the Berry–Esseen constant $C_{0}=L_{0}\vee U_{0}$ , with $L_{0}$ and $U_{0}$ given explicitly in Lemmas 11 and 12, respectively, and
5.1. Regularity estimates
To prove Theorem 2, we first need to establish the space and time regularity properties of $u_{\Delta}$ , which are crucial for proving the convergence of $u_{\Delta}$ to u and determining its convergence rate. Before showing our regularity estimates of $u_{\Delta}$ , we set
Theorem 3. Suppose that Assumptions 1 and 3 hold, and $\phi\in C_\textrm{b,Lip}(\mathbb{R})$ . Then:
(i) for any $t\in\lbrack0,T]$ and $x,y\in \mathbb{R}$, $\vert u_{\Delta}(t,x) - u_{\Delta}(t,y)\vert \leq C_{\phi}|x-y|$;
(ii) for any $t,s\in\lbrack0,T]$ and $x\in \mathbb{R}$, $\vert u_{\Delta}(t,x) - u_{\Delta}(s,x)\vert \leq C_{\phi}I_{\Delta}(|t-s|^{1/2}+\Delta^{1/2})$;
where $C_{\phi}$ is the Lipschitz constant of $\phi$, and $I_{\Delta}=\sqrt{I_{1,\Delta}}+2I_{2,\Delta}$ with $I_{\Delta}<\infty$.
Notice that $\tilde{\mathbb{E}}[\xi^{2}]=\infty$, so the classical method developed in [Reference Krylov28] fails. To prove Theorem 3, for fixed $N>0$, we define $\xi^{N}\;:\!=\;\xi\mathbf{1}_{\{|\xi|\leq N\}}$ and introduce the truncated scheme $u_{\Delta,N}\colon[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$ recursively by
We get the following estimates.
Lemma 5. For each fixed $N>0$ , $\tilde{\mathbb{E}}[|\xi^{N}|^{2}]=N^{2-\alpha}I_{1,N}$ , where
Proof. Using Fubini’s theorem,
By changing variables, it is straightforward to check that
which immediately implies the result.
Lemma 6. For each fixed $N>0$ , $\tilde{\mathbb{E}}[|\xi-\xi^{N}|]=N^{1-\alpha}I_{2,N}$ , where
Proof. Notice that
Observe by Fubini’s theorem that
By changing variables, we immediately conclude the proof.
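The scalings in Lemmas 5 and 6 can be checked against a classical stand-in with the same tail index: for a one-sided Pareto variable with $P(\xi>z)=z^{-\alpha}$ for $z\geq1$ (an illustrative choice, not the distribution (4) of the paper), the truncated second moment $E[\xi^{2}\mathbf{1}_{\{\xi\leq N\}}]=\frac{\alpha}{2-\alpha}(N^{2-\alpha}-1)$ grows like $N^{2-\alpha}$, while the truncation error $E[\xi\mathbf{1}_{\{\xi>N\}}]=\frac{\alpha}{\alpha-1}N^{1-\alpha}$ decays like $N^{1-\alpha}$.

```python
ALPHA = 1.5

def density(z):
    # density of the Pareto law P(xi > z) = z^{-ALPHA}, z >= 1
    return ALPHA * z ** (-ALPHA - 1.0)

def integrate(f, a, b, n):
    # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for N in (10.0, 40.0):
    # E[xi^2 ; xi <= N] versus its closed form, of order N^{2 - alpha}
    num2 = integrate(lambda z: z * z * density(z), 1.0, N, 200_000)
    exact2 = ALPHA / (2.0 - ALPHA) * (N ** (2.0 - ALPHA) - 1.0)
    assert abs(num2 - exact2) < 1e-2 * exact2

    # E[xi ; xi > N] versus its closed form, of order N^{1 - alpha}
    num1 = integrate(lambda z: z * density(z), N, 1e6, 400_000)
    exact1 = ALPHA / (ALPHA - 1.0) * N ** (1.0 - ALPHA)
    assert abs(num1 - exact1) < 2e-2 * exact1
```

Both moments are finite for every fixed N, which is precisely what the truncation buys: $\xi^{N}$ has finite variance even though $\xi$ does not.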
Lemma 7. Suppose that $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ . Then:
(i) for any $k\in\mathbb{N}$ such that $k\Delta \leq T$ and $x,y\in \mathbb{R}$, $\vert u_{\Delta,N}(k\Delta,x)-u_{\Delta,N}(k\Delta,y)\vert \leq C_{\phi}|x-y|$;
(ii) for any $k\in \mathbb{N}$ such that $k\Delta \leq T$ and $x\in \mathbb{R}$,
\begin{align*} \vert u_{\Delta,N}(k\Delta,x)-u_{\Delta,N}(0,x)\vert & \leq C_{\phi}\big((I_{1,N})^{{1}/{2}}N^{({2-\alpha})/{2}}\Delta^{({2-\alpha})/{2\alpha}} \\[5pt] & \qquad\quad + I_{2,N}N^{1-\alpha}\Delta^{({1-\alpha})/{\alpha}}\big)(k\Delta)^{{1}/{2}}; \end{align*}
where $C_{\phi}$ is the Lipschitz constant of $\phi$ , and $I_{1,N}$ , $I_{2,N}$ are given in Lemmas 5 and 6, respectively.
Proof. Assertion (i) is proved by induction using (7). Clearly, the estimate holds for $k=0$. Suppose now that the assertion holds for some $k\in\mathbb{N}$ with $k\Delta\leq T$. Then, using Proposition 1, we have
By the principle of induction the assertion is true for all $k\in \mathbb{N}$ with $k\Delta \leq T$ .
Now we establish the time regularity for $u_{\Delta,N}$ in (ii). Note that Young's inequality implies that $ab\leq \frac{1}{2}(a^{2}+b^{2})$ for any $a,b>0$. For any $\varepsilon>0$, applying this with $a=|x-y|$ and $b={1}/{\varepsilon}$, it follows from (i) that
where $A=({\varepsilon}/{2})C_{\phi}$ and $B=({1}/{2\varepsilon})C_{\phi}$ .
We claim that, for any $k\in \mathbb{N}$ such that $k\Delta \leq T$ and $x,y\in \mathbb{R}$ ,
where $M_{N}^{2}=\tilde{\mathbb{E}}[|\xi^{N}|^{2}]$ and $D_{N}=\tilde{\mathbb{E}}[|\xi-\xi^{N}|]$ . Indeed, (10) obviously holds for $k=0$ . Assume that for some $k\in \mathbb{N}$ the assertion (10) holds. Notice that
Then, for any $k_{\pm}\in K_{\pm}$ ,
Seeing that $E_{P_{k_{\pm}}}\big[\xi^{N}-E_{P_{k_{\pm}}}[\xi^{N}]\big]=0$ and
we can deduce that
Also, since $E_{P_{k_{\pm}}}[\xi]=0$ , it follows from (i) that
Combining (11)–(14), we obtain
which shows that (10) also holds for $k+1$ . By the principle of induction our claim is true for all $k\in \mathbb{N}$ such that $k\Delta \leq T$ and $x,y\in \mathbb{R}$ . By taking $y=x$ in (10) we have, for any $\varepsilon>0$ ,
By minimizing the right-hand side with respect to $\varepsilon$, we obtain
Similarly, we also have
Combining with Lemmas 5 and 6, we obtain our desired result (ii).
Lemma 8. Suppose that $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ and $N>0$ is fixed. Then, for any $k\in \mathbb{N}$ such that $k\Delta \leq T$ and $x\in \mathbb{R}$ ,
where $C_{\phi}$ is the Lipschitz constant of $\phi$ and $I_{2,N}$ is given in Lemma 6.
Proof. Let $(\xi_{i})_{i\geq1}$ be a sequence of random variables on $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ such that $\xi_{1}=\xi$ , $\xi_{i+1}\overset{\textrm{d}}{=}\xi_{i}$ , and $\xi_{i+1}\perp(\xi_{1},\xi_{2},\ldots,\xi_{i})$ for each $i\in \mathbb{N}$ , and let $\xi_{i}^{N}=\xi_{i}\wedge N\vee(\!-\!N)$ for each $i\in \mathbb{N}$ . In view of (6) and (7), by using the induction method of [Reference Huang and Liang20, Theorem 2.1] we have, for any $k\in \mathbb{N}$ such that $k\Delta \leq T$ and $x\in \mathbb{R}$ ,
Then, it follows from the Lipschitz condition of $\phi$ and Lemma 6 that
Now we start to prove the regularity results of $u_{\Delta}$ .
Proof of Theorem 3. The space regularity of $u_{\Delta}$ can be proved by induction using (6). We only focus on the time regularity of $u_{\Delta}$ and divide its proof into three steps.
Step 1. Consider the special case $\vert u_{\Delta}(k\Delta,\cdot)-u_{\Delta}(0,\cdot)\vert $ for any $k\in \mathbb{N}$ such that $k\Delta \leq T$ . Noting that $u_{\Delta,N}(0,x)=u_{\Delta}(0,x)=\phi(x)$ , we have
In view of Lemmas 7 and 8, by choosing $N=\Delta^{-{1}/{\alpha}}$ we obtain
where
In addition, by Assumption 1, it is easy to see that $I_{1,\Delta}$ and $I_{2,\Delta}$ remain bounded as $\Delta \rightarrow0$.
Step 2. Let us turn to the case $\vert u_{\Delta}(k\Delta,\cdot)-u_{\Delta}(l\Delta,\cdot)\vert$ for any $k,l\in \mathbb{N}$ such that $(k\vee l)\Delta \leq T$ . Without loss of generality, we assume that $k\geq l$ . Let $(\xi_{i})_{i=1}^{\infty}$ be a sequence of random variables on $(\mathbb{R},C_\textrm{Lip}(\mathbb{R}),\tilde{\mathbb{E}})$ such that $\xi_{1}=\xi$ , $\xi_{i+1}\overset{\textrm{d}}{=}\xi_{i}$ , and $\xi_{i+1}\perp(\xi_{1},\xi_{2},\ldots,\xi_{i})$ for each $i\in\mathbb{N}$ . By using induction (6) and the estimate (15), it is easy to obtain that, for any $k\geq l$ and $x\in \mathbb{R}$ ,
Step 3. In general, for $s,t\in \lbrack0,T]$ , let $\delta_{s},\delta_{t}\in \lbrack0,\Delta)$ such that $s-\delta_{s}$ and $t-\delta_{t}$ are in the grid points $\{k\Delta\colon k\in \mathbb{N}\}$ . Then, from (16),
We can similarly prove that
and this yields (ii).
5.2. The monotone approximation scheme
In this section we first rewrite the recursive approximation (6) as a monotone scheme, and then derive its consistency error estimates and comparison result.
For $\Delta \in(0,1)$ , based on (6), we introduce the monotone approximation scheme as
where $S\colon(0,1)\times\mathbb{R}\times\mathbb{R}\times C_\textrm{b}(\mathbb{R})\rightarrow \mathbb{R}$ is defined by
For a function f defined on $[0,T]\times \mathbb{R}$ , introduce its norm $|f|_{0}\;:\!=\;\sup_{[0,T]\times \mathbb{R}}|f(t,x)|$ . We now give some key properties of the approximation scheme (17).
Proposition 2. Suppose that $S(\Delta,x,p,v)$ is as given in (18). Then the following properties hold:
(i) Monotonicity. For any $c_{1},c_{2}\in \mathbb{R}$ and any function $u\in C_\textrm{b}(\mathbb{R})$ with $u\leq v$,
\[ S(\Delta,x,p+c_{1},u+c_{2})\geq S(\Delta,x,p,v)+\frac{c_{1}-c_{2}}{\Delta}. \]
(ii) Concavity. For any $\lambda \in[0,1]$, $p_{1},p_{2}\in \mathbb{R}$, and $v_{1},v_{2}\in C_\textrm{b}(\mathbb{R})$, $S(\Delta,x,p,v)$ is concave in (p,v), i.e.
\begin{align*} S(\Delta,x,\lambda p_{1}&+(1-\lambda)p_{2},\lambda v_{1}(\cdot)+(1-\lambda)v_{2}(\cdot)) \geq \lambda S(\Delta,x,p_{1},v_{1}(\cdot))\\[5pt] &+(1-\lambda)S(\Delta,x,p_{2},v_{2}(\cdot)). \end{align*}
(iii) Consistency. For any $\omega \in C_\textrm{b}^{\infty}([\Delta,T]\times \mathbb{R})$ ,
\begin{align*} & \bigg\vert\partial_{t}\omega(t,x) - \sup_{k_{\pm}\in K_{\pm}}\bigg\{\int_{\mathbb{R}}\delta_{z}\omega(t,x)F_{k_{\pm}}(\textrm{d} z)\bigg\} - S(\Delta,x,\omega(t,x),\omega(t-\Delta,\cdot))\bigg\vert \\[5pt] &\leq \big(1+\tilde{\mathbb{E}}[|\xi|]\big) \big(|\partial_{t}^{2}\omega|_{0}\Delta + |\partial_{t}D_{x}\omega|_{0}\Delta^{{1}/{\alpha}}\big) + R^{0}|D_{x}^{2}\omega|_{0}\Delta^{({2-\alpha})/{\alpha}}\\[5pt] &\quad + |D_{x}^{2}\omega|_{0}R_{\Delta}^{1} + \vert D_{x}\omega\vert_{0}R_{\Delta}^{2}, \end{align*}where\begin{align*} R^{0} & = \sup_{k_{\pm}\in K_{\pm}}\bigg\{|\beta_{1,k_{\pm}}(\!-\!1)| + |\beta_{2,k_{\pm}}(1)| \\[5pt] & \qquad\qquad\quad + \int_{0}^{1}\big[|\alpha\beta_{1,k_{\pm}}(\!-z)+\beta_{1,k_{\pm}}^{\prime}(\!-z)z| + |\alpha\beta_{2,k_{\pm}}(z) - \beta_{2,k_{\pm}}^{\prime}(z)z|\big]z^{1-\alpha}\,\textrm{d} z\bigg\}, \\[5pt] R_{\Delta}^{1} & = 5\sup_{k_{\pm}\in K_{\pm}}\bigg\{\int_{0}^{1}\big[|\beta_{1,k_{\pm}}(\!-\!\Delta^{-{1}/{\alpha}}z)| + |\beta_{2,k_{\pm}}(\Delta^{-{1}/{\alpha}}z)|\big]z^{1-\alpha}\,\textrm{d} z\bigg\}, \\[5pt] R_{\Delta}^{2} & = 4\sup_{k_{\pm}\in K_{\pm}}\bigg\{|\beta_{1,k_{\pm}}(\!-\!\Delta^{-{1}/{\alpha}})| + |\beta_{2,k_{\pm}}(\Delta^{-{1}/{\alpha}})| \\[5pt] & \qquad\qquad\quad + \int_{1}^{\infty}\big[|\beta_{1,k_{\pm}}(\!-\!\Delta^{-{1}/{\alpha}}z)| + |\beta_{2,k_{\pm}}(\Delta^{-{1}/{\alpha}}z)|\big]z^{-\alpha}\,\textrm{d} z\bigg\}. \end{align*}
Proof. Parts (i) and (ii) are immediate, so we only prove (iii). To this end, we split the consistency error into two parts. Specifically, for $(t,x)\in[\Delta,T]\times \mathbb{R}$ ,
Applying Taylor’s expansion (twice) yields
Since $\tilde{\mathbb{E}}[\xi]=\tilde{\mathbb{E}}[-\xi]=0$ , (20) and the mean value theorem give
For II, by changing variables we get
We only consider the integral above along the positive half-line; the integral along the negative half-line is similar. For simplicity, we set
Using integration by parts, for any $k_{\pm}\in K_{\pm}$ ,
where we have used the fact that, for $\theta \in(0,1)$ ,
Notice that, for any $k_{\pm}\in K_{\pm}$ ,
By means of integration by parts and the mean value theorem, we obtain
by using the fact that, for $\theta \in(0,1)$ , $|\delta_{z}\omega(t,x)|=\frac{1}{2}|D_{x}^{2}\omega(t,x+\theta z)z^{2}|\leq|D_{x}^{2}\omega|_{0}z^{2}$ ; similarly,
In the same way, we can also obtain
Together with $J_{1}$ , $J_{2}$ , and $J_{3}$ , we conclude that
The desired conclusion follows from this and (21).
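The two algebraic properties in Proposition 2(i)–(ii) can be sanity-checked numerically. The sketch below is not the paper's construction: it replaces $\tilde{\mathbb{E}}$ by a toy sublinear expectation (a maximum over three discrete laws on a grid) and assumes the one-step form $S(\Delta,x,p,v)=(p-\tilde{\mathbb{E}}[v(x+\Delta^{1/\alpha}\xi)])/\Delta$, which is our reading of (18); the laws, test functions, and all constants are illustrative.

```python
import numpy as np

alpha = 1.5
zs = np.linspace(-5.0, 5.0, 201)                   # support points for xi
laws = [np.exp(-np.abs(zs) / s) for s in (0.8, 1.0, 1.2)]
laws = [w / w.sum() for w in laws]                 # normalize each discrete law

def subE(f):
    """Toy sublinear expectation: supremum of linear expectations over the family."""
    return max(float(np.dot(w, f(zs))) for w in laws)

def S(dt, x, p, v):
    """Assumed one-step scheme operator (our reading of (18))."""
    return (p - subE(lambda z: v(x + dt ** (1.0 / alpha) * z))) / dt

dt, x, p = 0.1, 0.3, 0.7
v = lambda y: np.cos(y)
u = lambda y: np.cos(y) - 0.2                      # u <= v pointwise
c1, c2 = 0.5, 0.1

# Monotonicity, Proposition 2(i)
lhs = S(dt, x, p + c1, lambda y: u(y) + c2)
rhs = S(dt, x, p, v) + (c1 - c2) / dt
assert lhs >= rhs - 1e-12

# Concavity in (p, v), Proposition 2(ii)
lam, p1, p2 = 0.4, 0.2, 0.9
v1, v2 = (lambda y: np.sin(y)), (lambda y: 1.0 / (1.0 + y ** 2))
mix = S(dt, x, lam * p1 + (1 - lam) * p2,
        lambda y: lam * v1(y) + (1 - lam) * v2(y))
assert mix >= lam * S(dt, x, p1, v1) + (1 - lam) * S(dt, x, p2, v2) - 1e-12
```

Both assertions hold for the same structural reasons as in the proof: the maximum over linear expectations is monotone and shifts by constants, and a maximum of sums is at most the sum of maxima.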
From Proposition 2(i) we can derive the following comparison result for the scheme (17), which is used throughout this paper.
Lemma 9. Suppose that $\underline{v},\bar{v}\in C_\textrm{b}([0,T]\times \mathbb{R})$ satisfy
where $h_{1},h_{2}\in C_\textrm{b}((\Delta,T]\times \mathbb{R})$ . Then
Proof. The basic idea of the proof comes from [Reference Barles and Jakobsen4, Lemma 3.2]; for the reader’s convenience, we give a sketch of the proof.
We first note that it suffices to prove the lemma in the case $\underline{v} \leq \bar{v}$ in $[0,\Delta]\times \mathbb{R}$ , $h_{1}\leq h_{2}$ in $(\Delta,T]\times \mathbb{R}$ . The general case follows from this by observing that, by the monotonicity property in Proposition 2(i),
satisfies
for $(t,x)\in(\Delta,T]\times \mathbb{R}$ , and $\underline{v}\leq \omega$ in $[0,\Delta]\times \mathbb{R}$ .
For $c\geq0$ , let $\psi_{c}(t)\;:\!=\;ct$ and $g(c)\;:\!=\;\sup_{(t,x)\in \lbrack0,T]\times \mathbb{R}}\{\underline{v}-\bar{v}-\psi_{c}\}$ . It suffices to prove that $g(0)\leq0$ , and we argue by contradiction, assuming $g(0)>0$ . By the continuity of g, we can find some $c>0$ such that $g(c)>0$ . For such a c, take a sequence $\{(t_{n},x_{n})\}_{n\geq1}\subset [0,T]\times\mathbb{R}$ such that, as $n\rightarrow \infty$ ,
Since $\underline{v}-\bar{v}-\psi_{c}\leq0$ in $[0,\Delta]\times \mathbb{R}$ and $g(c)>0$ , we assert that $t_{n}>\Delta$ for sufficiently large n. For such n, by applying Proposition 2(i) (twice) we can deduce that
Since $h_{1}\leq h_{2}$ in $(\Delta,T]\times \mathbb{R}$ , this yields that $c-\delta_{n}\Delta^{-1}\leq0$ . By letting $n\rightarrow \infty$ , we obtain $c\leq0$ , which is a contradiction.
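To spell out the inequality chain, here is a sketch of the standard computation, assuming that the hypotheses of the lemma read $S(\Delta,x,\underline{v}(t,x),\underline{v}(t-\Delta,\cdot))\leq h_{1}(t,x)$ and $S(\Delta,x,\bar{v}(t,x),\bar{v}(t-\Delta,\cdot))\geq h_{2}(t,x)$ on $(\Delta,T]\times\mathbb{R}$ , and writing $\delta_{n}\;:\!=\;g(c)-(\underline{v}-\bar{v}-\psi_{c})(t_{n},x_{n})\rightarrow0$ :

```latex
\begin{align*}
h_{1}(t_{n},x_{n})
 &\geq S\big(\Delta,x_{n},\underline{v}(t_{n},x_{n}),\underline{v}(t_{n}-\Delta,\cdot)\big)\\
 &\geq S\big(\Delta,x_{n},\bar{v}(t_{n},x_{n}),\bar{v}(t_{n}-\Delta,\cdot)\big)
   + c - \delta_{n}\Delta^{-1}
 \;\geq\; h_{2}(t_{n},x_{n}) + c - \delta_{n}\Delta^{-1},
\end{align*}
```

where the middle step applies Proposition 2(i) with $c_{1}=\psi_{c}(t_{n})+(\underline{v}-\bar{v}-\psi_{c})(t_{n},x_{n})$ and $c_{2}=\psi_{c}(t_{n}-\Delta)+g(c)$ , so that $c_{1}-c_{2}=c\Delta-\delta_{n}$ , together with $\underline{v}(t_{n}-\Delta,\cdot)\leq\bar{v}(t_{n}-\Delta,\cdot)+c_{2}$ .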
5.3. Convergence rate of the monotone approximation scheme
In this subsection we prove the convergence rate of the monotone approximation scheme $u_{\Delta}$ in Theorem 2. The convergence of the approximate solution $u_{\Delta}$ to the viscosity solution u follows from a nonlocal extension of the Barles–Souganidis half-relaxed limits method [Reference Barles and Souganidis5].
We start from the first time interval $[0,\Delta]\times \mathbb{R}$ .
Lemma 10. Suppose that $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ . Then, for $(t,x)\in \lbrack0,\Delta]\times \mathbb{R}$ ,
where $C_{\phi}$ is the Lipschitz constant of $\phi$ , $M_{\xi}^{1}\;:\!=\;\tilde{\mathbb{E}}[|\xi|]$ and $M_{X}^{1}\;:\!=\;\hat{\mathbb{E}}[|X_{1}|]$ .
Proof. Clearly, (22) holds for $(t,x)\in \lbrack0,\Delta)\times \mathbb{R}$ , since $u(0,x)=u_{\Delta}(t,x)=\phi(x)$ for $(t,x)\in \lbrack0,\Delta)\times \mathbb{R}$ . For $t=\Delta$ , from Lemma 3 and (6), we obtain
which implies the desired result.
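For orientation, the computation behind the two bounds can be sketched as follows, assuming (as we read Lemma 3) the scaling bound $\hat{\mathbb{E}}[|X_{t}|]=t^{1/\alpha}M_{X}^{1}$ and the one-step recursion (6):

```latex
\begin{align*}
|u(t,x)-\phi(x)| &= \big|\hat{\mathbb{E}}[\phi(x+X_{t})]-\phi(x)\big|
  \leq C_{\phi}\,\hat{\mathbb{E}}[|X_{t}|]
  = C_{\phi}M_{X}^{1}\,t^{1/\alpha}
  \leq C_{\phi}M_{X}^{1}\,\Delta^{1/\alpha},\\
|u_{\Delta}(\Delta,x)-\phi(x)| &= \big|\tilde{\mathbb{E}}[\phi(x+\Delta^{1/\alpha}\xi)]-\phi(x)\big|
  \leq C_{\phi}M_{\xi}^{1}\,\Delta^{1/\alpha},
\end{align*}
```

so that $|u(t,x)-u_{\Delta}(t,x)|\leq C_{\phi}(M_{X}^{1}+M_{\xi}^{1})\Delta^{1/\alpha}$ on $[0,\Delta]\times\mathbb{R}$ , consistent with (22).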
5.3.1. Lower bound for the approximation scheme error
In order to obtain the lower bound for the approximation scheme, we follow Krylov’s regularization results [Reference Krylov25–Reference Krylov27] (see also [Reference Barles and Jakobsen2, Reference Barles and Jakobsen3] for analogous results under PDE arguments). For $\varepsilon \in(0,1)$ , we first extend the solution u of (3) to the domain $[0,T+\varepsilon^{2}]\times \mathbb{R}$ , still denoted by u. For $(t,x)\in \lbrack0,T]\times \mathbb{R}$ , we define the mollification of u by
In view of Lemma 4, the standard properties of mollifiers indicate that
where $M_{\zeta}\;:\!=\;\max_{k+l\geq1}\int_{-1<t<0}\int_{|x|<1}|\partial_{t}^{l}D_{x}^{k}\zeta(t,x)|\,\textrm{d} x\,\textrm{d} t < \infty$ .
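For the reader's convenience, we record the standard parabolic mollification and the estimates one expects from it. This is only a sketch: the scaling of $\zeta_{\varepsilon}$ and the exponents below are the usual Krylov ones, under the assumption that Lemma 4 gives u Lipschitz in x and $\tfrac12$ -Hölder in t.

```latex
u^{\varepsilon}(t,x)=\int_{-\varepsilon^{2}<\tau<0}\int_{|e|<\varepsilon}
  u(t-\tau,x-e)\,\zeta_{\varepsilon}(\tau,e)\,\textrm{d} e\,\textrm{d}\tau,
\qquad
\zeta_{\varepsilon}(\tau,e)=\frac{1}{\varepsilon^{3}}\,
  \zeta\!\Big(\frac{\tau}{\varepsilon^{2}},\frac{e}{\varepsilon}\Big),
```

with the expected estimates $|u-u^{\varepsilon}|_{0}\leq C\varepsilon$ and $|\partial_{t}^{l}D_{x}^{k}u^{\varepsilon}|_{0}\leq CM_{\zeta}\,\varepsilon^{1-2l-k}$ for $k+l\geq1$ . The shift $\tau\in(-\varepsilon^{2},0)$ is the reason u must first be extended to $[0,T+\varepsilon^{2}]\times\mathbb{R}$ .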
We obtain the following lower bound.
Lemma 11. Suppose that Assumptions 1–3 hold, and $\phi \in C_\textrm{b,Lip}(\mathbb{R})$ . Then, for $(t,x)\in \lbrack0,T]\times \mathbb{R}$ , $u_{\Delta}(t,x)\leq u(t,x)+L_{0}\Delta^{\Gamma(\alpha,q)}$ , where
and $L_{0}$ is a constant depending on $C_{\phi}$ , $C_{\phi,\mathcal{K}}$ , $M_{X}^{1}$ , $M_{\xi}^{1}$ , $M_{\zeta}$ , and M, and is given in (26).
Proof. Step 1. Notice that $u(t-\tau,x-e)$ is a viscosity solution of (3) in $[0,T]\times \mathbb{R}$ for any $(\tau,e)\in(\!-\varepsilon^{2},0)\times B(0,\varepsilon)$ . Multiplying it by $\zeta_{\varepsilon}(\tau,e)$ and integrating it with respect to $(\tau,e)$ , from the concavity of (3) with respect to the nonlocal term we can derive that $u^{\varepsilon}(t,x)$ is a supersolution of (3) in $(0,T]\times\mathbb{R}$ , i.e. for $(t,x)\in(0,T]\times \mathbb{R}$ ,
Step 2. Since $u^{\varepsilon}\in C_\textrm{b}^{\infty}([0,T]\times \mathbb{R})$ , combining the consistency property in Proposition 2(iii) with (23) and (24), we can deduce that
Applying the comparison principle in Lemma 9 to $u_{\Delta}$ and $u^{\varepsilon}$ , by (17) and (25), we have, for $(t,x)\in[0,T]\times\mathbb{R}$ ,
Step 3. In view of the previous equation and Lemma 10, we obtain
Assumptions 1–3 indicate that $R^{0}\leq4M$ , $R_{\Delta}^{1}\leq10C\Delta^{q}$ , and $R_{\Delta}^{2}\leq16C\Delta^{q}$ . When $\alpha\in\big(1,\frac{4}{3}\big]$ and $q\in \big[\frac{1}{2},\infty\big)$ , by choosing $\varepsilon=\Delta^{{1}/{4}}$ we have $u_{\Delta}-u\leq L_{0}\Delta^{{1}/{4}}$ , where
when $\alpha \in\big(1,\frac{4}{3}\big]$ and $q\in \big[0,\frac{1}{2})$ , by choosing $\varepsilon=\Delta^{{q}/{2}}$ , we have $u_{\Delta}-u\leq L_{0}\Delta^{{q}/{2}}$ ; when $\alpha\in\big(\frac{4}{3},2\big)$ and $q\in \lbrack({2-\alpha})/{\alpha},\infty)$ , by letting $\varepsilon=\Delta^{({2-\alpha})/{2\alpha}}$ we get $u_{\Delta}-u\leq L_{0}\Delta^{({2-\alpha})/{2\alpha}}$ ; when $\alpha\in\big(\frac{4}{3},2\big)$ and $q\in(0,({2-\alpha})/{\alpha})$ , by letting $\varepsilon=\Delta^{{q}/{2}}$ we get $u_{\Delta}-u\leq L_{0}\Delta^{{q}/{2}}$ . To sum up, we conclude that $u_{\Delta}-u\leq L_{0}\Delta^{\Gamma(\alpha,q)}$ , where
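Schematically, these choices of $\varepsilon$ come from balancing the mollification error against the consistency error. The following sketch uses the mollifier bounds of the form $|\partial_{t}^{l}D_{x}^{k}u^{\varepsilon}|_{0}\lesssim\varepsilon^{1-2l-k}$ ; constants and lower-order terms are suppressed:

```latex
u_{\Delta}-u \;\lesssim\;
 \underbrace{\varepsilon}_{|u-u^{\varepsilon}|_{0}}
 + \underbrace{\Delta\,\varepsilon^{-3}}_{|\partial_{t}^{2}u^{\varepsilon}|_{0}\Delta}
 + \underbrace{\Delta^{1/\alpha}\varepsilon^{-2}}_{|\partial_{t}D_{x}u^{\varepsilon}|_{0}\Delta^{1/\alpha}}
 + \underbrace{\Delta^{(2-\alpha)/\alpha}\varepsilon^{-1}}_{R^{0}\text{-term}}
 + \underbrace{\Delta^{q}\varepsilon^{-1}}_{R_{\Delta}^{1}\text{-term}}.
```

Balancing $\varepsilon$ against whichever of $\Delta\varepsilon^{-3}$ , $\Delta^{(2-\alpha)/\alpha}\varepsilon^{-1}$ , $\Delta^{q}\varepsilon^{-1}$ dominates gives $\varepsilon=\Delta^{1/4}$ , $\Delta^{(2-\alpha)/(2\alpha)}$ , $\Delta^{q/2}$ , respectively; in all four regimes this matches the choices above, and one can check that $\Gamma(\alpha,q)=\min\big\{\tfrac14,\tfrac{2-\alpha}{2\alpha},\tfrac q2\big\}$ .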
This leads to the desired result.
5.3.2. Upper bound for the approximation scheme error
To obtain an upper bound for the approximation scheme error, we are not able to construct approximate smooth subsolutions of (3) due to the concavity of (3). Instead, we interchange the roles of the PIDE (3) and the approximation scheme (17). For $\varepsilon\in(0,1)$ , we extend (17) to the domain $[0,T+\varepsilon^{2}]\times \mathbb{R}$ , still denoting its solution by $u_{\Delta}$ . For $(t,x)\in\lbrack0,T]\times \mathbb{R}$ , we define the mollification of $u_{\Delta}$ by
In view of Theorem 3, the standard properties of mollifiers indicate that
We obtain the following upper bound.
Lemma 12. Suppose that Assumptions 1–3 hold and $\phi\in C_\textrm{b,Lip}(\mathbb{R})$ . Then, for $(t,x)\in \lbrack0,T]\times \mathbb{R}$ , $u(t,x)\leq u_{\Delta}(t,x)+U_{0}\Delta^{\Gamma(\alpha,q)}$ , where
and $U_{0}$ is a constant depending on $C_{\phi}$ , $C_{\phi,\mathcal{K}}$ , $M_{X}^{1}$ , $M_{\xi}^{1}$ , $M_{\zeta}$ , M, and $I_{\Delta}$ , and is given in (29).
Proof. Step 1. Note that for any $(t,x)\in[\Delta,T]\times \mathbb{R}$ and $(\tau,e)\in(\!-\varepsilon^{2},0)\times B(0,\varepsilon)$ ,
Multiplying the above equality by $\zeta_{\varepsilon}(\tau,e)$ and integrating with respect to $(\tau,e)$ , from the concavity of the approximation scheme (17), we have, for $(t,x)\in(\Delta,T]\times\mathbb{R}$ ,
Step 2. Since $u_{\Delta}^{\varepsilon}\in C_{b}^{\infty}([0,T]\times\mathbb{R})$ , by substituting $u_{\Delta}^{\varepsilon}$ into the consistency property in Proposition 2(iii), together with (27) and (28), we can compute that
where $C(\varepsilon,\Delta)$ is defined in (25). Then, the function
is a supersolution of (3) in $(\Delta,T]\times \mathbb{R}$ with initial condition $\bar{v}(\Delta,x)=u_{\Delta}^{\varepsilon}(\Delta,x)$ . In addition,
is a viscosity solution of (3) in $(\Delta,T]\times \mathbb{R}$ . From (27) and Lemma 10, we can further obtain
By means of the comparison principle for PIDE (3) (see [Reference Neufeld and Nutz29, Proposition 5.5]), we conclude that $\underline{v}(t,x)\leq \bar{v}(t,x)$ in $[\Delta,T]\times \mathbb{R}$ , which implies, for $(t,x)\in \lbrack\Delta,T]\times \mathbb{R}$ ,
Step 3. Using the previous equation and (27), we have
Under Assumptions 1–3, we have $I_{\Delta}<\infty$ , $R^{0}\leq4M$ , $R_{\Delta}^{1}\leq10C\Delta^{q}$ , and $R_{\Delta}^{2}\leq16C\Delta^{q}$ . In the same way as for Lemma 11, by minimizing with respect to $\varepsilon$ we can derive that, for $(t,x)\in \lbrack \Delta,T]\times\mathbb{R}$ , $u-u_{\Delta}\leq U_{0}\Delta^{\Gamma(\alpha,q)}$ , where
and
Combining this and Lemma 10, we obtain the desired result.
Funding information
Hu’s research is supported by the National Natural Science Foundation of China (Nos. 12326603, 11671231) and the National Key R&D Program of China (No. 2018YFA0703900). Jiang’s research is supported by the Natural Science Foundation of Shandong Province (No. ZR2023QA090). Liang’s research is supported by the National Natural Science Foundation of China (No. 12171169) and the Laboratory of Mathematics for Nonlinear Science, Fudan University.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.