1. Introduction
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space equipped with a right-continuous, increasing family of $\sigma$-algebras $\{\mathcal{F}_t,\,t\geq 0\}$, and let $\{W_t\}_{t\geq 0}$ be a standard Brownian motion defined on $(\Omega,\mathcal{F},\mathbb{P})$. In this paper we consider a reflected linear stochastic differential equation with a large signal,
where the initial value $X_0=x_0>0$, $\varepsilon\in(0,1]$, $\theta\in\mathbb{R}$ is unknown, and $L=\{L_t,\,t\geq 0\}$ is the minimal non-decreasing, non-negative process which makes the solution of (1) satisfy $X_t\geq 0$ for all $t\geq 0$. The process L increases only when X hits the boundary zero, so that
It can easily be proved (see e.g. Harrison [9] and Whitt [22]) that the process L has the following explicit expression:
In applications to financial engineering, queueing systems, storage models, etc., the reflecting barrier is usually taken to be zero, principally because of physical restrictions on the state processes: inventory levels, stock prices, and interest rates, for example, should take non-negative values. We refer to [2], [3], [8], [9], [14], [17], [18], [19], [20], [21], and [22] for more details on reflected stochastic differential equations (RSDEs) and their wide applications.
In practice, however, the drift parameter in RSDEs is seldom known, and parametric inference is one of the effective methods for dealing with this problem. For statistical inference in RSDEs driven by Brownian motion, a popular approach is maximum likelihood estimation based on the Girsanov density (see e.g. [23], [24], and [26]). For example, Bo et al. [4] established the maximum likelihood estimator (MLE) for stationary reflected Ornstein–Uhlenbeck processes (OUPs) and studied its strong consistency and asymptotic normality. Jiang and Yang [12] considered asymptotic properties of the MLE of the parameter occurring in ergodic reflected Ornstein–Uhlenbeck processes (ROUPs) with a one-sided barrier. Zang and Zhu [26] investigated the strong consistency and limiting distribution of the MLE for reflected OUPs in both the stationary and non-stationary cases. The trajectory fitting estimator (TFE) was introduced by Kutoyants [15] as a numerically attractive alternative to the well-investigated MLE. Recently, Zang and Zhang [25] used trajectory fitting estimation to investigate the asymptotic behaviour of the estimator for non-stationary reflected OUPs, including strong consistency and the asymptotic distribution; they also showed that the TFE for ergodic reflected OUPs is not strongly consistent.
On the other hand, trajectory fitting estimation for stochastic processes without reflection has drawn increasing attention (see e.g. [5], [6], [7], [15], and [16]). For instance, Abi-ayad and Mourid [1] discussed the strong consistency and Gaussian limit distribution of the TFE for non-recurrent diffusion processes. Jiang and Xie [11] studied the asymptotic behaviour of the TFE in stationary OUPs with linear drift.
Motivated by the aforementioned works, in this paper we extend the work of Zang and Zhang [25] and study the consistency and asymptotic distributions of the TFE for RSDE (1) based on continuous observation of $X=\{X_t,\, 0\leq t\leq T \}$. To obtain our estimators, we divide RSDE (1) by $\varepsilon^{{{1}/{2}}}$ and change the time variable to $t_\varepsilon=t\varepsilon^{-1}$, so that $t_\varepsilon\in[0,T_\varepsilon]$ with $T_\varepsilon=T\varepsilon^{-1}$. By the scaling properties of Brownian motion, there exists another standard Brownian motion $\{\widetilde{W}_t\}_{t\geq 0}$ on the enlarged probability space such that $\widetilde{W}_t\stackrel{d}{=}\varepsilon^{-{{1}/{2}}}W_{\varepsilon t}$. Denote $Y_{t_\varepsilon}=X_{t_\varepsilon\varepsilon}\varepsilon^{-{{1}/{2}}}$. Then, for the reflected stochastic process (1), we have
where $\widetilde{L}_{t_\varepsilon}=\varepsilon^{-{{1}/{2}}}L_{\varepsilon t_\varepsilon}$. It follows from (2) that
Let
RSDE (3) can be written as
The TFE of $\theta$ is defined as the minimizer of
It can easily be seen that the minimum is attained when $\theta$ is given by
By simple calculations, we have
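The construction above can be illustrated numerically. The sketch below (Python; an illustration, not part of the paper) simulates the rescaled reflected process under the assumption that, after the time change, $Y$ solves $\mathrm{d}Y_t=\theta Y_t\,\mathrm{d}t+\mathrm{d}\widetilde{W}_t+\mathrm{d}\widetilde{L}_t$ with $Y_0=x_0\varepsilon^{-1/2}$, and computes a least-squares trajectory fitting estimate of the generic form $\widehat{\theta}=\int_0^{T_\varepsilon}(Y_t-Y_0)A_t\,{\mathrm{d}} t\big/\int_0^{T_\varepsilon}A_t^2\,{\mathrm{d}} t$ with $A_t=\int_0^t Y_s\,{\mathrm{d}} s$. Since the displays (1)–(5) are not reproduced here, both the model form and the estimator form are stated as assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reflected(theta, y0, T, n, rng):
    """Euler scheme for dY = theta*Y dt + dW + dL, reflected at zero:
    at each step the minimal push keeping the path non-negative is added,
    mimicking the regulator L of the Skorokhod problem."""
    dt = T / n
    y = np.empty(n + 1)
    y[0] = y0
    regulator = 0.0  # accumulated push, a discrete stand-in for L
    for i in range(n):
        z = y[i] + theta * y[i] * dt + rng.normal(0.0, np.sqrt(dt))
        push = max(0.0, -z)  # minimal push keeping the path >= 0
        y[i + 1] = z + push
        regulator += push
    return y, regulator

def tfe(y, T):
    """Least-squares trajectory-fitting estimate (assumed generic form)."""
    n = len(y) - 1
    dt = T / n
    a = np.concatenate(([0.0], np.cumsum(y[:-1]) * dt))  # A_t = int_0^t Y ds
    return np.sum((y - y[0]) * a) / np.sum(a * a)

# y0 = x0 * eps^{-1/2} is large for small eps; with theta > 0 the path
# stays away from the barrier and the estimate lands near theta.
theta, y0, T_eps = 0.5, 100.0, 10.0
path, _ = simulate_reflected(theta, y0, T_eps, 10000, rng)
print(round(tfe(path, T_eps), 3))  # close to theta = 0.5
```

Note that the reflection never activates here: with $y_0$ of order $\varepsilon^{-1/2}$ and $\theta>0$, the path stays far from zero, which is consistent with the role of the term $x_0\varepsilon^{-1/2}$ in the analysis below.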
2. Consistency of the TFE $\widehat{\theta}_\varepsilon$
In this section we discuss the consistency of the TFE $\widehat{\theta}_\varepsilon$ in the non-ergodic and ergodic cases. We use ‘ $\rightarrow_p$ ’ to denote convergence in probability, ‘ $\Rightarrow$ ’ to denote convergence in distribution, and ‘ $\stackrel{d}{=}$ ’ for equality in distribution.
We first introduce two important lemmas.
Lemma 2.1. (Dietz and Kutoyants [6].) If $\varphi_T$ is a probability measure defined on $[0,\infty)$ such that $\varphi_T([0,T])=1$ and $\varphi_T([0,K])\rightarrow0$ as $T\rightarrow\infty$ for each $K>0$ , then \begin{align*}\lim_{T\rightarrow\infty}\int_0^\infty f_t\,\varphi_T({\mathrm{d}} t)=f_\infty\end{align*}
for every bounded and measurable function $f\colon [0,\infty)\rightarrow\mathbb{R}$ for which the limit $f_\infty\;:\!=\; \lim_{t\rightarrow\infty}f_t$ exists.
Lemma 2.2. (Karatzas and Shreve [13].) Let $z\geq 0$ be a given number and let $y(\!\cdot\!)=\{y(t);\; 0 \leq t < \infty\}$ be a continuous function with $y(0)= 0$ . There exists a unique continuous function $k(\!\cdot\!)=\{k(t);\; 0 \leq t < \infty\}$ such that
(i) $x(t)\;:\!=\; z+y(t)+k(t)\geq 0, 0\leq t<\infty$ ,
(ii) $k(0)=0$ , $k(\!\cdot\!)$ is non-decreasing,
(iii) $k(\!\cdot\!)$ is flat off $\{t\geq 0;\; x(t) = 0\}$ , that is,
\begin{align*} \int_0^\infty I_{\{x(s)> 0\}} \,{\mathrm{d}} k(s)=0. \end{align*}
Then the function $k(\!\cdot\!)$ is given by \begin{align*} k(t)=\max\Bigl[0,\;\max_{0\leq s\leq t}\{-(z+y(s))\}\Bigr],\quad 0\leq t<\infty. \end{align*}
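The construction in Lemma 2.2 can be checked numerically: the sketch below (Python; an illustration, not part of the paper) implements the running-maximum regulator $k(t)=\max\bigl[0,\max_{0\leq s\leq t}\{-(z+y(s))\}\bigr]$ and verifies properties (i)–(iii) on a sample path.

```python
import numpy as np

def skorokhod_k(z, y):
    """Discrete Skorokhod regulator for samples y[0..n] of y(.), y[0] = 0:
    k(t) = max[0, max_{0<=s<=t} (-(z + y(s)))]."""
    return np.maximum(0.0, np.maximum.accumulate(-(z + y)))

# Check properties (i)-(iii) on a sample path that does reach the barrier.
t = np.linspace(0.0, 1.0, 2001)
z = 0.2
y = np.sin(7 * t) - 1.5 * t          # continuous, y(0) = 0
k = skorokhod_k(z, y)
x = z + y + k

assert np.all(x >= -1e-12)                     # (i)  x(t) >= 0
assert k[0] == 0.0 and np.all(np.diff(k) >= 0) # (ii) k(0) = 0, k nondecreasing
incr = np.diff(k) > 0                          # (iii) k is flat off {x = 0}:
assert np.all(np.abs(x[1:][incr]) < 1e-8)      #      k grows only where x = 0
```

Property (iii) holds exactly here: whenever the running maximum is refreshed at a grid point, $k$ equals $-(z+y)$ there, so $x=0$ at that point.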
Theorem 2.1.
(a) Under $\theta>0$ , we have
(6) \begin{align}\lim_{\varepsilon\rightarrow 0}(\widehat{\theta}_\varepsilon-\theta)=0\quad \textit{a.s.}\end{align}
(b) Under $\theta=0$ , we have
(7) \begin{align}\widehat{\theta}_\varepsilon-\theta\rightarrow_p0\quad\textit{as}\; \text{$ \varepsilon\rightarrow 0$.}\end{align}
(c) Under $\theta<0$ , we have
(8) \begin{align}\lim_{\varepsilon\rightarrow 0}\widehat{\theta}_\varepsilon=0\quad \textit{a.s.,}\end{align} that is, the TFE $\widehat{\theta}_\varepsilon$ is not strongly consistent.
Proof. (a) (i) If $x_0> 0$ , it is easy to see that
Because the process $\widetilde{L}=\{\widetilde{L}_{t_\varepsilon}\}_{t_\varepsilon\geq 0}$ increases only when $Y=\{Y_{t_\varepsilon}\}_{t_\varepsilon\geq 0}$ hits the boundary zero, $\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s} \,{\mathrm{d}} \widetilde{L}_s$ is a continuous non-decreasing process for which
and increases only when
It follows from Lemma 2.2 that
For
by time change for a continuous martingale, there exists another standard Brownian motion $\{\widehat{W}_{t}\}_{t\geq 0}$ on the enlarged probability space such that $M_{t}\stackrel{d}{=}\widehat{W}_{\frac{1-{\mathrm{e}}^{-2\theta t}}{2\theta}}$ . It follows from (10) that
Then, under $\theta>0$ and $x_0>0$ , by the fact that $\widehat{W}_s$ , $s\in[0,{{1}/{(2\theta)}}]$ is almost surely finite, we have
In view of the definition of the quadratic variation of a continuous local martingale, we find that
and
is a continuous local martingale. Writing this as
it follows from the convergence theorem of non-negative semi-martingales that
and hence
Applying (9), (12), and (13), we obtain
Combining Lemma 2.1 with (14) yields
By (4) and the self-similarity property of Brownian motion, we get
where $\{\breve{W}_u,\,u\geq 0\}$ is another standard Wiener process on the enlarged probability space. Using (16) and the fact that $\breve{W}_u$ , $u\in[0,1]$ , is almost surely finite, we see that
Obviously,
By the strong law of large numbers, we obtain
This implies that
By (15), (17), (18), and Lemma 2.1, we have
This completes the proof of case (i).
(ii) If $x_0=0$ , by Theorem 2.1 of Zang and Zhang [23], it follows that (6) holds. This completes the proof of Theorem 2.1(a).
(b) Under $\theta=0$ , we have $Y_{T_\varepsilon}=x_0\varepsilon^{-{{1}/{2}}}+\widetilde{W}_{T_{\varepsilon}}+\widetilde{L}_{T_{\varepsilon}}$ . Then
and
By the scaling properties of Brownian motion and Lemma 2.2, it follows that
where $\{\widehat{W}_\nu,\,\nu\geq 0\}$ is another standard Wiener process on the enlarged probability space. By the continuous mapping theorem, we have
where
Combining (5) with (20) implies that (7) holds. This completes the proof of part (b).
(c) We consider the case $\theta<0$ . We first introduce the reflected OUP
It is easy to see that
Similarly to the discussion of (10), we see that
By (9), (10), (22), and (23), we have
For the linear system (21), according to the mean ergodic theorem (see Hu et al. [10]), we have
Combining (24) with (25) yields
It follows from Zang and Zhang [25] that
Note that
Then we have
By (26), (27), Lemma 2.1, and the strong law of large numbers, we have
and
By (5), (28), and (29), we can conclude that (8) holds. This completes the proof.
3. Asymptotic distribution of the TFE $\widehat{\theta}_\varepsilon$
In this section we investigate the asymptotic distribution of the TFE $\widehat{\theta}_\varepsilon$ .
Theorem 3.1. Let $\{\widehat{W}_u,\,u\geq 0\}$ be another standard Wiener process on the enlarged probability space.
(a) Assume $\theta>0$ .
(i) If $x_0> 0$ , then
(30) \begin{align}(\widehat{\theta}_\varepsilon-\theta)\,{\mathrm{e}}^{\theta T_\varepsilon}\Rightarrow \dfrac{2\theta}{x_0}T^{{{1}/{2}}}N\quad \textit{as}\; \text{$ \varepsilon\rightarrow 0$,}\end{align} where N is a random variable with the standard normal distribution.
(ii) If $x_0=0$ , then
(31) \begin{align}\dfrac{{\mathrm{e}}^{\theta T_\varepsilon}}{\sqrt{T_\varepsilon}}(\widehat{\theta}_\varepsilon-\theta)\Rightarrow \dfrac{2\theta N}{\bigl|\widehat{W}_{{{1}/{(2\theta)}}}+\widehat{L}_{{{1}/{(2\theta)}}}\bigr|}\quad \textit{as}\; \text{$ \varepsilon\rightarrow 0$,}\end{align} where \begin{align*} \widehat{L}_{{{1}/{(2\theta)}}}=\max\Bigl[ 0,\max_{0\leq u\leq {{1}/{(2\theta)}}}(\!-\!\widehat{W}_u)\Bigr],\end{align*} and N is a standard normal random variable which is independent of $\widehat{W}_{{{1}/{(2\theta)}}}$ and $\widehat{L}_{{{1}/{(2\theta)}}}$ .
(b) If $\theta =0$ , then
(32) \begin{align}\varepsilon^{-1}\widehat{\theta}_\varepsilon\Rightarrow \dfrac{1}{T}\dfrac{\int_0^{1}\bigl(\widehat{W}_{s}+\widehat{L}_{s}\bigr)\int_0^{s}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{u}+\widehat{L}_{u}\bigr)\,{\mathrm{d}} u \,{\mathrm{d}} s}{\int_0^{1}\bigl(\int_0^{t}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{s}+\widehat{L}_{s}\bigr)\,{\mathrm{d}} s\bigr)^2\,{\mathrm{d}} t}\quad \textit{as}\; \text{$ \varepsilon\rightarrow 0$,}\end{align} where \begin{equation*} \widehat{L}_s=\max\Bigl\{0,\max_{0\leq u\leq s}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{u}\bigr)\Bigr\}.\end{equation*}
Proof. (a) (i) If $x_0> 0$ , we have
We shall study the asymptotic behaviour of $I_i(\varepsilon)$ , $i=1,\ldots,4$ . By (15) and Lemma 2.1, we can see that
It follows that
Now we consider $I_2(\varepsilon)$ . Using (15) and Lemma 2.1 again, we have
For the second factor in $I_2(\varepsilon)$ , we find that
It is easy to see that the random variable
has a normal distribution $N\bigl(0,1-T_\varepsilon^{-{{1}/{2}}}\bigr)$ , which converges weakly to a standard normal random variable N as $\varepsilon\rightarrow0$ . By the strong law of large numbers, we see that
Hence we have
Combining (35) with (36) gives
Next, we show that $I_3(\varepsilon)\rightarrow 0$ in probability as $\varepsilon\rightarrow0$ . Note that
which converges to zero in probability as $\varepsilon\rightarrow 0$ . In fact, using Markov’s inequality and Fubini’s theorem, we find that for given $\delta>0$ ,
which tends to zero as $\varepsilon\rightarrow0$ . Finally, we consider $I_4(\varepsilon)$ . Combining the self-similarity property of Brownian motion with (4) yields
Note that for any given $u\in(0,1]$ we can choose a positive number $\epsilon>0$ such that $\epsilon<u$ . Then we have
By (15), we have
By (40), (41), and the fact that $\widehat{W}_{u},u\in(0,1]$ is almost surely finite, we have
If $u=0$ , we find that
Hence we see that
Applying (39) and (43), we obtain
By (15) and (44) as well as Lemma 2.1, we have
Therefore, by (33), (34), (37), (38), and (45), we conclude that (30) holds. This completes the proof of case (i).
(ii) Under $x_0=0$ , by Theorem 2.2 of Zang and Zhang [23], it follows that (31) holds. This completes the proof of case (ii).
(b) Combining (5) with (20) implies that (32) holds. This completes the proof.
4. Discussion
In this section we discuss the properties of the TFE $\widehat{\theta}_\varepsilon$ and MLE
separately in terms of the range of $\theta$ , i.e. $\theta>0$ , $\theta=0$ , and $\theta<0$ . Comparing with the results of Zhang and Shu [27], we obtain the following claims.
(a) Under $\theta>0$ , both estimators are consistent. In addition, the following hold.
(i) If $x_0> 0$ , then
(46) \begin{align}\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}(\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta)\Rightarrow N\biggl(0,\dfrac{2\theta}{x_0^2}\biggr)\quad \text{as $ \varepsilon\rightarrow 0$.}\end{align}
(ii) If $x_0=0$ , then
(47) \begin{align}{\mathrm{e}}^{\theta T_\varepsilon}(\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta)\Rightarrow \dfrac{\sqrt{2\theta}N}{\bigl|\widehat{W}_{{{1}/{(2\theta)}}}+\widehat{L}_{{{1}/{(2\theta)}}}\bigr|}\quad \text{as $ \varepsilon\rightarrow 0$,}\end{align} where \begin{align*} \widehat{L}_{{{1}/{(2\theta)}}}=\max\Bigl[ 0,\max_{0\leq u\leq {{1}/{(2\theta)}}}(\!-\!\widehat{W}_u)\Bigr],\end{align*} and N is a standard normal random variable which is independent of $\widehat{W}_{{{1}/{(2\theta)}}}$ and $\widehat{L}_{{{1}/{(2\theta)}}}$ . For both estimators, the order of convergence depends heavily on the true value of the parameter. It can also be seen that the MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ converges in distribution at a higher order than the TFE $\widehat{\theta}_\varepsilon$ .
-
-
(b) Under $\theta=0$ , it is easy to see that both estimators are consistent. The MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ has the limiting distribution
\begin{align*}\varepsilon^{-1}\widehat{\theta}_\varepsilon^{\textrm{MLE}}\Rightarrow\dfrac{1}{T}\dfrac{\int_0^1\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_u+\widehat{L}_u\bigr)\,{\mathrm{d}} \widehat{W}_u}{x_0^2T^{-1}+\int_0^1\bigl(\widehat{W}_u+\widehat{L}_u\bigr)^2\,{\mathrm{d}} u+2x_0T^{-{{1}/{2}}}\int_0^1\bigl(\widehat{W}_u+\widehat{L}_u\bigr)\,{\mathrm{d}} u}\quad \text{as $ \varepsilon\rightarrow 0$,}\end{align*} where \begin{align*} \widehat{L}_u=\max\Bigl\{0,\max_{0\leq r\leq u}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{r}\bigr)\Bigr\}. \end{align*} Neither limiting distribution is normal or a mixture of normals. Further, both estimators have the same order of convergence in this case.
(c) Under $\theta<0$ , the MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ of $\theta$ is strongly consistent, but the TFE $\widehat{\theta}_\varepsilon$ is not. The MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ has the limiting distribution
\begin{align*}\dfrac{\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta}{\varepsilon^{{{1}/{2}}}}\Rightarrow N\biggl(0,-\dfrac{2\theta}{x_0^2+T}\biggr)\quad \text{as $ \varepsilon\rightarrow 0$.} \end{align*}
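The rate comparison in claim (a)(i) can be made explicit by reading (30) and (46) side by side:

```latex
\begin{align*}
\widehat{\theta}_\varepsilon-\theta
  &= O_p\bigl({\mathrm{e}}^{-\theta T_\varepsilon}\bigr)
  && \text{by (30)},\\
\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta
  &= O_p\bigl(\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}\bigr)
  && \text{by (46)},
\end{align*}
```

so, for $x_0>0$ , the MLE error is smaller than the TFE error by the factor $\varepsilon^{{{1}/{2}}}$ , which quantifies the higher-order convergence noted in claim (a).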
Acknowledgements
The authors thank the anonymous referee for their constructive comments and suggestions, which improved the quality of the paper significantly.
Funding information
This work was supported by the National Natural Science Foundation of China (grants 12101004, 62073071, and 12271003), the Natural Science Research Project of Anhui Educational Committee (grants 2023AH030021 and 2023AH010011), and the Research Start-up Foundation for Introducing Talent of Anhui Polytechnic University (grant 2020YQQ064).
Competing interests
The authors declare that no competing interests arose during the preparation or publication of this article.