1. Introduction
The study of last exit times has received much attention in recent years in several areas of applied probability, e.g. risk theory, finance, and reliability. Consider the Cramér–Lundberg process, a process consisting of a deterministic drift plus a compound Poisson process with only negative jumps (see Figure 1), which is typically used to model the capital of an insurance company. Of particular interest is the moment of ruin, $\tau_0^-$, defined as the first time the process becomes negative. If the insurance company has sufficient funds to endure negative capital for a considerable amount of time, another quantity of interest is the last time, $g_{e_{\theta}}$, that the process is below zero before an exponential time $e_{\theta}$. In a more general setting, we can consider a spectrally negative Lévy process instead of the classical risk process. Several works, for example [Reference Baurdoux1] and [Reference Chiu and Yin9], have studied the Laplace transform of the last time before an exponential time that a spectrally negative Lévy process is below some given level.
Last passage times are also becoming increasingly important in financial modelling, as shown in [Reference Madan, Roynette and Yor18] and [Reference Madan, Roynette and Yor19], where the authors show that the prices of European put and call options, modelled by non-negative continuous martingales that vanish at infinity, can be expressed in terms of the probability distributions of certain last passage times.
Another application of last passage times is in degradation models. The authors of [Reference Paroissin and Rabehasaina20] propose a spectrally positive Lévy process to model the ageing of a device, namely a subordinator perturbed by an independent Brownian motion. A motivation for this model is that the Brownian motion can capture small repairs of the device, while the jumps represent major deterioration. In the literature, the failure time of a device is defined as the first hitting time of a critical level b. An alternative approach is to consider instead the last time that the process is below the level b: since the paths of this process are not necessarily monotone, the process may return below the level b after first exceeding it.
The aim of this work is to predict the last time a spectrally negative Lévy process is below zero before an independent exponential time, where 'to predict' means to find a stopping time that is closest (in the $L^1$ sense) to this random time. This is an example of an optimal prediction problem, a class of problems that has been widely investigated in recent years. For example, [Reference Graversen, Peskir and Shiryaev13] predicted the value of the ultimate maximum of a Brownian motion in a finite-horizon setting, whereas [Reference Shiryaev23] focused on the last time a (driftless) Brownian motion attains its ultimate maximum and showed that this is equivalent to predicting the last zero of the process in that setting. The latter work was generalised by [Reference Du Toit, Peskir and Shiryaev10] to a linear Brownian motion. The paper [Reference Bernyk, Dalang and Peskir4] studied the time at which a stable spectrally negative Lévy process attains its ultimate supremum in a finite time horizon; this was later generalised by [Reference Baurdoux and van Schaik3] to any Lévy process in an infinite time horizon. Investigations of the time of the ultimate minimum and of the last zero of a transient diffusion process were carried out in [Reference Glover, Hulley and Peskir12] and [Reference Glover and Hulley11], respectively, within a subclass of functions.
In [Reference Baurdoux and Pedraza2] the last zero of a spectrally negative Lévy process in an infinite-horizon setting is predicted. It is shown that an optimal stopping time minimising the $L^1$-distance to the last zero of a spectrally negative Lévy process with drift is the first time the Lévy process crosses above a fixed level $a^*\geq 0$ (characterised in terms of the cumulative distribution function of the overall infimum of the process). As in the Canadisation of American-type options (see e.g. [Reference Carr8]), given the memoryless property of the exponential distribution, one would expect the generalisation of the aforementioned problem to an exponential time horizon to reduce to an infinite-horizon optimal prediction problem, and hence to have a time-independent solution. However, it turns out that this is not the case. Indeed, we show the existence of a continuous, non-increasing, and non-negative boundary such that an optimal stopping time is given by the first passage time, before the median of the exponential time, above this curve. The proof relies on solving an equivalent (finite-horizon) optimal stopping problem that depends on time and the process itself. Moreover, based on the ideas of [Reference Du Toit, Peskir and Shiryaev10], we characterise the boundary and the value function as the unique solutions of a system of nonlinear integral equations. Such a system can be thought of as a generalisation of the free boundary equation (see e.g. [Reference Peskir and Shiryaev21, Section 14]) allowing for the presence of jumps. We consider two examples where numerical calculations are implemented to find the optimal boundary.
This paper is organised as follows. In Section 2 we introduce some important notation regarding Lévy processes, and we outline some known fluctuation identities that will be useful later. We then formulate the optimal prediction problem and prove that it is equivalent to an optimal stopping problem whose solution is given in Theorem 2.1. Section 3 is dedicated to the solution of the optimal stopping problem. The main result of this paper is stated in Theorem 3.1, and its proof is detailed in Section 3.1. The last section makes use of Theorem 3.1 to find numerical solutions of the optimal stopping problem for the case of a Brownian motion with drift and a compound Poisson process perturbed by a Brownian motion.
2. Prerequisites and formulation of the problem
We start this section by introducing some important notation, and we give an overview of some fluctuation identities for spectrally negative Lévy processes. Readers can refer to [Reference Bertoin5], [Reference Sato22], or [Reference Kyprianou15] for more details about Lévy processes.
A Lévy process $X=\{X_t,t\geq 0 \}$ is an almost surely (a.s.) càdlàg process that has independent and stationary increments such that ${\mathbb{P}}(X_0=0)=1$. Every Lévy process is also a strong Markov process. For $x\in {\mathbb{R}}$, denote by ${\mathbb{P}}_x$ the law of X started at the point x; that is, ${\mathbb{E}}_x(\cdot)={\mathbb{E}}(\cdot|X_0=x)$. Because of the spatial homogeneity of Lévy processes, the law of X under ${\mathbb{P}}_x$ is the same as that of $X+x$ under ${\mathbb{P}}$.
Let X be a spectrally negative Lévy process, that is, a Lévy process starting from 0 with only negative jumps and non-monotone paths, defined on a filtered probability space $(\Omega,{\mathcal{F}}, \mathbb{F}, {\mathbb{P}})$ where $\mathbb{F}=\{{\mathcal{F}}_t,t\geq 0 \}$ is the filtration generated by X which is naturally enlarged (see [6, Definition 1.3.38]). We suppose that X has Lévy triplet $(\mu,\sigma, \Pi)$ where $\mu \in {\mathbb{R}}$ , $\sigma\geq 0$ , and $\Pi$ is a measure (Lévy measure) concentrated on $({-}\infty,0)$ satisfying $\int_{({-}\infty,0)} \big(1\wedge x^2\big)\Pi({{d}} x)<\infty$ .
Let $\psi$ be the Laplace exponent of X, defined as
$\psi(\beta)\,:\!=\,\log {\mathbb{E}}\big(e^{\beta X_1}\big), \qquad \beta\geq 0.$
Then $\psi$ is finite on ${\mathbb{R}}_+$, and it is strictly convex and infinitely differentiable with $\psi(0)=0$ and $\psi(\infty)=\infty$. From the Lévy–Khintchine formula, we know that $\psi$ takes the form
$\psi(\beta)=\mu \beta+\frac{\sigma^2}{2}\beta^2+\int_{({-}\infty,0)} \big(e^{\beta x}-1-\beta x {\mathbb{I}}_{\{x>-1\}}\big)\Pi({{d}} x)$
for all $\beta\geq 0$. Denote by $\tau_a^+$ the first time the process X is above the level $a \in {\mathbb{R}}$, i.e.,
$\tau_a^+=\inf\{t>0\,:\, X_t>a \}.$
Then it can be shown that, for any $a\geq 0$, its Laplace transform is given by
${\mathbb{E}}\big(e^{-q\tau_a^+}{\mathbb{I}}_{\{\tau_a^+<\infty \}}\big)=e^{-\Phi(q)a}, \qquad (2.1)$
where $\Phi$ corresponds to the right inverse of $\psi$, which is defined by
$\Phi(q)=\sup\{\beta\geq 0\,:\, \psi(\beta)=q \}$
for any $q\geq 0$ .
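As an illustration (an addition of ours, not part of the original text), the following Python sketch computes the right inverse $\Phi(q)$ numerically for the Brownian-motion-with-drift exponent that reappears in Section 4.1; the bracket-expansion step relies only on the convexity of $\psi$ and $\psi(\infty)=\infty$, the parameter values are illustrative, and SciPy's brentq root finder is assumed to be available.

```python
import numpy as np
from scipy.optimize import brentq

mu, sigma = 2.0, 1.0          # illustrative parameters; psi(beta) = mu*beta + sigma^2*beta^2/2

def psi(beta):
    return mu*beta + 0.5*sigma**2*beta**2

def Phi(q):
    """Right inverse of psi: the largest beta >= 0 with psi(beta) = q."""
    if q == 0.0:
        return max(0.0, -2.0*mu/sigma**2)    # largest root of psi(beta) = 0
    hi = 1.0
    while psi(hi) < q:                       # psi is convex with psi(inf) = inf, so this terminates
        hi *= 2.0
    return brentq(lambda b: psi(b) - q, 0.0, hi)

q = 0.5
print(Phi(q), (-mu + np.sqrt(mu**2 + 2*q*sigma**2))/sigma**2)   # numeric root vs closed form
```

For this particular $\psi$ the closed form $\Phi(q)=\big({-}\mu+\sqrt{\mu^2+2q\sigma^2}\big)/\sigma^2$ is printed alongside as a check.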
Now we introduce the scale functions. This family of functions is the key to the derivation of fluctuation identities for spectrally negative Lévy processes. The notation used is mainly based on [Reference Kuznetsov, Kyprianou and Rivero14] and [Reference Kyprianou15] (see Chapter 8). For $q\geq 0$, the function $W^{(q)}$ is such that $W^{(q)}=0$ for $x<0$ and $W^{(q)}$ is characterised on $[0,\infty)$ as a strictly increasing and continuous function whose Laplace transform satisfies
$\int_0^{\infty} e^{-\beta x} W^{(q)}(x)\,{{d}} x=\frac{1}{\psi(\beta)-q}, \qquad \beta>\Phi(q).$
We further define the function $Z^{(q)}$ by
$Z^{(q)}(x)=1+q\int_0^x W^{(q)}(y)\,{{d}} y, \qquad x\in {\mathbb{R}}.$
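The scale functions are rarely explicit, but for the Brownian-motion-with-drift example of Section 4.1 they are. The sketch below (ours, for illustration only) evaluates the closed-form $W^{(q)}$ that is standard for this example, builds $Z^{(q)}$ from its definition, and verifies the Laplace-transform characterisation numerically; SciPy's quad is assumed, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, q = 2.0, 1.0, 0.5                 # illustrative parameters
disc = np.sqrt(mu**2 + 2*q*sigma**2)
bp, bm = (-mu + disc)/sigma**2, (-mu - disc)/sigma**2   # the two roots of psi(beta) = q

def W(x):
    """Closed-form W^{(q)} for X_t = mu*t + sigma*B_t (equal to 0 for x < 0)."""
    return (np.exp(bp*x) - np.exp(bm*x))/disc if x >= 0 else 0.0

def Z(x):
    """Z^{(q)}(x) = 1 + q * integral_0^x W^{(q)}(y) dy."""
    return 1.0 + q*quad(W, 0.0, x)[0] if x >= 0 else 1.0

beta = 2.0*bp                                # any beta > Phi(q) = bp will do
lhs = quad(lambda x: np.exp(-beta*x)*W(x), 0.0, np.inf)[0]
rhs = 1.0/(mu*beta + 0.5*sigma**2*beta**2 - q)
print(lhs, rhs, Z(1.0))                      # lhs and rhs should agree
```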
Denote by $\tau_0^-$ the first passage time of X into the set $({-}\infty,0)$, that is,
$\tau_0^-=\inf\{t>0\,:\, X_t<0 \}.$
It turns out that the Laplace transform of $\tau_0^-$ can be written in terms of the scale functions. Specifically,
${\mathbb{E}}_x\big(e^{-q\tau_0^-}{\mathbb{I}}_{\{\tau_0^-<\infty \}}\big)=Z^{(q)}(x)-\frac{q}{\Phi(q)}W^{(q)}(x) \qquad (2.2)$
for all $q\geq 0$ and $x\in {\mathbb{R}}$ (for $q=0$, the ratio $q/\Phi(q)$ is understood in the limiting sense). It can be shown that the paths of X are of finite variation if and only if
$\sigma=0 \quad \text{and} \quad \int_{({-}1,0)} |x|\,\Pi({{d}} x)<\infty.$
In this case, we may write
$\psi(\beta)=\delta \beta-\int_{({-}\infty,0)}\big(1-e^{\beta x}\big)\Pi({{d}} x),$
where
$\delta=\mu-\int_{({-}1,0)} x\,\Pi({{d}} x). \qquad (2.3)$
Note that monotone processes are excluded from the definition of spectrally negative Lévy processes, so necessarily $\delta>0$ when X is of finite variation. The value of $W^{(q)}$ at zero depends on the path variation of X. In the case where X is of infinite variation we have that $W^{(q)}(0)=0$; otherwise,
$W^{(q)}(0)=\frac{1}{\delta}.$
For any $a \in {\mathbb{R}}$ and $q\geq 0$, the q-potential measure of X killed upon entering the set $[a,\infty)$ is absolutely continuous with respect to the Lebesgue measure. A density is given for all $x,y\leq a$ by
$\int_0^{\infty} e^{-qt}\,{\mathbb{P}}_x\big(X_t\in {{d}} y,\ t<\tau_a^+\big)\,{{d}} t=\Big[e^{-\Phi(q)(a-x)}W^{(q)}(a-y)-W^{(q)}(x-y)\Big]\,{{d}} y. \qquad (2.5)$
Let $g_{\theta}$ be the last passage time below zero before an exponential time, i.e.,
$g_{\theta}\,:\!=\,\sup\{0\leq t\leq {e_{\theta}}\,:\, X_t\leq 0 \}, \qquad (2.6)$
where ${e_{\theta}}$ is an exponential random variable with parameter $\theta \geq 0$ . Here we use the convention that an exponential random variable with parameter 0 is taken to be infinite with probability 1. In the case of $\theta=0$ , we simply write $g=g_0$ .
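To make the random time $g_{\theta}$ concrete, here is a small Monte Carlo sketch (ours, not from the paper) that samples $g_{\theta}$ for a Brownian motion with drift on an Euler grid; the discretisation step, the parameter values, and the sample size are illustrative, and the grid approximation of the last zero is crude but serviceable.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, theta, dt = 2.0, 1.0, np.log(2)/10, 1e-3   # illustrative parameters

def g_theta_sample():
    """One draw of g_theta on an Euler grid: the last grid time t <= e_theta with X_t <= 0."""
    T = rng.exponential(1/theta)                       # e_theta has mean 1/theta
    n = max(int(T/dt), 1)
    X = np.concatenate(([0.0], np.cumsum(mu*dt + sigma*np.sqrt(dt)*rng.standard_normal(n))))
    return np.nonzero(X <= 0)[0][-1]*dt                # X_0 = 0, so the set is never empty

print(np.mean([g_theta_sample() for _ in range(5000)]))   # crude estimate of E(g_theta)
```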
Note that $g_{\theta} \leq {e_{\theta}}<\infty$ ${\mathbb{P}}$ -a.s. for all $\theta > 0$ . However, in the case where $\theta=0$ , g could be infinite. Therefore, we assume that $\theta>0$ throughout this paper. Moreover, we have that $g_{\theta}$ has finite moments for all $ \theta>0$ .
Remark 2.1. Since X is a spectrally negative Lévy process, the case of a compound Poisson process is excluded, and hence the only way X can exit the set $({-}\infty,0]$ is by creeping upwards. This tells us that $X_{g_{\theta}-}=X_{g_{\theta}}=0$ on the event $\{g_{ \theta}<{e_{\theta}}\}$ and that $g_{\theta}=\sup\{0\leq t \leq {e_{\theta}}\,:\, X_t<0\}$ holds ${\mathbb{P}}$-a.s.
Clearly, up to any time $t\geq 0$ the value of $g_{\theta}$ is unknown (unless X is trivial), and it is only with the realisation of the whole process that we know that the last passage time below 0 has occurred. However, this is often too late: typically, at any time $t\geq 0$, we would like to know how close we are to the time $g_{\theta}$ so that we can take some action based on this information. We search for a stopping time $\tau_*$ of X that is as 'close' as possible to $g_{\theta}$. Consider the optimal prediction problem
$\inf_{\tau \in \mathcal{T}} {\mathbb{E}}|g_{\theta}-\tau|, \qquad (2.7)$
where $\mathcal{T}$ is the set of all stopping times.
Note that the random time $g_{\theta}$ is not an $\mathbb{F}$-stopping time, so it is not immediately obvious how to solve the optimal prediction problem using the theory of optimal stopping. Hence, in order to find the solution we solve an equivalent optimal stopping problem. In the next lemma we establish an equivalence between the optimal prediction problem (2.7) and an optimal stopping problem. This equivalence is mainly based on the work of [Reference Urusov24].
Lemma 2.1. Suppose that $\{X_t, t\geq 0\}$ is a spectrally negative Lévy process. Let $g_{\theta}$ be the last time that X is below the level zero before an exponential time ${e_{\theta}}$ with $\theta > 0$, as defined in (2.6). For any $\tau \in \mathcal{ T}$ we have
${\mathbb{E}}|g_{\theta}-\tau|={\mathbb{E}}(g_{\theta})+{\mathbb{E}}\bigg(\int_0^{\tau} G^{(\theta)}(s,X_s)\,{{d}} s\bigg),$
where the function $G^{(\theta)}$ is given by
$G^{(\theta)}(s,x)=1+2e^{-\theta s}\Big(\frac{\theta}{\Phi(\theta)}W^{(\theta)}(x)-Z^{(\theta)}(x)\Big)$
for all $s\geq 0$ and $x\in {\mathbb{R}}$. Then the stopping time that minimises (2.7) is the same one that minimises the optimal stopping problem
$V_*\,:\!=\,\inf_{\tau\in \mathcal{T}} {\mathbb{E}}\bigg(\int_0^{\tau} G^{(\theta)}(s,X_s)\,{{d}} s\bigg). \qquad (2.8)$
In particular,
$\inf_{\tau\in \mathcal{T}} {\mathbb{E}}|g_{\theta}-\tau|={\mathbb{E}}(g_{\theta})+V_*.$
Proof. Fix any stopping time $\tau \in \mathcal{T}$ . We have that
From Fubini’s theorem and the tower property of conditional expectations, we obtain
Note that on the event $\{{e_{\theta}}\leq s\}$ we have $g_{\theta}\leq s$, so that
On the other hand, on the event $\{{e_{\theta}} >s\}$, as a consequence of Remark 2.1, the event $\{g_{\theta}\leq s\}$ coincides with $\{ X_u \geq 0 \text{ for all } u\in [s,{e_{\theta}}]\}$ (up to a ${\mathbb{P}}$-null set). Hence we get that for all $s\geq 0$,
where $\underline{X}_t=\inf_{0\leq s\leq t} X_s$ for any $t \geq 0$ , and the last equality follows from the lack-of-memory property of the exponential distribution and the Markov property for the Lévy process. Hence, we have that
where for all $x\in {\mathbb{R}}$ , $F^{(\theta)}(x)={\mathbb{P}}_x\big(\underline{X}_{{e_{\theta}} } \geq 0\big)$ . Then, since ${e_{\theta}}$ is independent of X, we have that for $x\in {\mathbb{R}}$ ,
where the last equality follows from Equation (2.2). Thus,
Therefore,
The conclusion holds.
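As a numerical sanity check of Lemma 2.1 (our addition, not part of the original argument), one can compare both sides of the identity ${\mathbb{E}}|g_{\theta}-\tau|-{\mathbb{E}}(g_{\theta})={\mathbb{E}}\big(\int_0^{\tau}G^{(\theta)}(s,X_s)\,{{d}} s\big)$ by Monte Carlo for the deterministic stopping time $\tau=t_0$ in the Brownian-motion-with-drift example, where $F^{(\theta)}$ has the exponential closed form recalled in Section 4.1; the step size, sample count, and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, theta, dt, t0 = 2.0, 1.0, np.log(2)/10, 1e-3, 5.0   # tau = t0 is deterministic
r = (mu + np.sqrt(mu**2 + 2*theta*sigma**2))/sigma**2          # rate of -inf X at e_theta

def F(x):                       # distribution function of -X_bar(e_theta), cf. Section 4.1
    return np.where(x >= 0, 1.0 - np.exp(-r*np.maximum(x, 0.0)), 0.0)

def G(s, x):                    # G^{(theta)}(s,x) = 1 + 2 e^{-theta s} (F(x) - 1)
    return 1.0 + 2.0*np.exp(-theta*s)*(F(x) - 1.0)

lhs, rhs = [], []
for _ in range(4000):
    T = rng.exponential(1/theta)
    n = max(int(max(T, t0)/dt), 1)
    X = np.concatenate(([0.0], np.cumsum(mu*dt + sigma*np.sqrt(dt)*rng.standard_normal(n))))
    t = np.arange(n + 1)*dt
    g = t[(X <= 0) & (t <= T)][-1]                    # last zero before e_theta (grid version)
    lhs.append(abs(g - t0) - g)                       # |g_theta - tau| - g_theta
    rhs.append(np.sum(G(t[t < t0], X[t < t0])*dt))    # int_0^{t0} G^{(theta)}(s, X_s) ds
print(np.mean(lhs), np.mean(rhs))                     # the two averages should be close
```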
Note that if we evaluate at $\theta=0$, the function $G^{(0)}$ coincides with the gain function found in [Reference Baurdoux and Pedraza2] (see Lemma 3.2 and Remark 3.3). In order to find the solution to the optimal stopping problem (2.8) (and hence (2.7)), we extend its definition to the Lévy process (and hence strong Markov process) $\{(t,X_t),t\geq 0\}$ in the following way. Define the function ${V^{(\theta)}}\,:\, {\mathbb{R}}_+\times {\mathbb{R}} \mapsto {\mathbb{R}}$ as
${V^{(\theta)}}(t,x)\,:\!=\,\inf_{\tau\in \mathcal{T}} {\mathbb{E}}\bigg(\int_0^{\tau} G^{(\theta)}(t+s,X_s+x)\,{{d}} s\bigg), \qquad (2.10)$
so that
$V_*={V^{(\theta)}}(0,0).$
The next theorem states the solution of the optimal stopping problem (2.10) and hence the solution of (2.7).
Theorem 2.1. Let $\{X_t, t\geq 0\}$ be any spectrally negative Lévy process and ${e_{\theta}}$ an exponential random variable with parameter $\theta > 0$ independent of $\mathbb{F}$. There exists a non-increasing and continuous curve ${b^{(\theta)}}\,:\,[0,{m_{\theta}}] \mapsto {\mathbb{R}}_+$ such that ${b^{(\theta)}}\geq {h^{(\theta)}}$, where $h^{(\theta)}(t)\,:\!=\,\inf\{ x \in {\mathbb{R}}\,:\, G^{(\theta)}(t,x)\geq 0\}$, and the infimum in (2.10) is attained by the stopping time
$\tau_D=\inf\{s\geq 0\,:\, X_s+x\geq {b^{(\theta)}}(t+s) \}$
when $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ and by $\tau_D=0$ when $(t,x)\in [{m_{\theta}},\infty)\times {\mathbb{R}} $ , where ${m_{\theta}}=\log\!(2)/\theta$ . Moreover, the function ${b^{(\theta)}}$ is uniquely characterised as in Theorem 3.1.
Note that the proof of Theorem 2.1 is rather long and hence is split into a series of lemmas. We dedicate Section 3 to the proof.
3. Solution to the optimal stopping problem
In this section we solve the optimal stopping problem (2.10). The proof relies on showing that $\tau_D$ as defined in Theorem 2.1 is indeed an optimal stopping time, using the general theory of optimal stopping and properties of the function ${V^{(\theta)}}$. Along the way, some properties of ${b^{(\theta)}}$ are derived. The main contribution of this section (Theorem 3.1) characterises ${V^{(\theta)}}$ and ${b^{(\theta)}}$ as the unique solution of a nonlinear system of integral equations within a certain family of functions.
Recall that ${V^{(\theta)}}$ is given by
${V^{(\theta)}}(t,x)=\inf_{\tau\in \mathcal{T}} {\mathbb{E}}\bigg(\int_0^{\tau} G^{(\theta)}(t+s,X_s+x)\,{{d}} s\bigg).$
From the proof of Lemma 2.1 we note that $G^{(\theta)}$ can be written as
$G^{(\theta)}(t,x)=1+2e^{-\theta t}\big(F^{(\theta)}(x)-1\big), \qquad (3.1)$
where $F^{(\theta)}$ is the distribution function of the non-negative random variable $-\underline{X}_{{e_{\theta}}}$, given by
$F^{(\theta)}(x)={\mathbb{P}}_x\big(\underline{X}_{{e_{\theta}}}\geq 0\big)=1-Z^{(\theta)}(x)+\frac{\theta}{\Phi(\theta)}W^{(\theta)}(x)$
for all $x\in {\mathbb{R}}$. Note that, for each $\theta>0$, the random variable $-\underline{X}_{{e_{\theta}}}$ has support $[0,\infty)$, and hence the function $F^{(\theta)}$ is strictly increasing on $[0,\infty)$. Indeed, from the Wiener–Hopf factorisation (see e.g. [Reference Kyprianou15, Theorem 6.15, pp. 171--172]), we know that for any $\theta>0$, the random variable $\underline{X}_{{e_{\theta}}}$ is infinitely divisible with no Gaussian component and with Lévy measure given by $\pi^-({{d}} x)=\int_0^{\infty}\frac{1}{t}e^{-\theta t}\, {\mathbb{P}}(X_t\in {{d}} x)\,{{d}} t$ for $x<0$. Moreover, since X creeps upwards and is not a subordinator, we have that $\pi^-$ has support on $({-}\infty,0]$. Then, from the Lévy–Khintchine formula and [Reference Sato22, Theorem 24.10(iii), p. 152], we deduce that the support of the random variable $\underline{X}_{{e_{\theta}}}$ is $({-}\infty,0]$ as claimed.
Now we give some intuition about the function $G^{(\theta)}$. Recall that for all $\theta \geq 0$, $W^{(\theta)}$ and $Z^{(\theta)}$ are continuous and strictly increasing functions on $[0,\infty)$ such that $W^{(\theta)}(x)=0$ and $Z^{(\theta)}(x)=1$ for $x \in ({-}\infty,0)$. From the above and Equation (3.1) we have that for a fixed $t\geq 0$, the function $x\mapsto G^{(\theta)}(t,x)$ is strictly increasing and continuous on $[0,\infty)$, with a possible discontinuity at 0, depending on the path variation of X. Moreover, we have that $\lim_{x\rightarrow \infty} G^{(\theta)}(t,x)=1$ for all $t\geq 0$. For $x<0$ and $t\geq 0$, the function $G^{(\theta)}$ takes the form $G^{(\theta)}(t,x)=1-2e^{-\theta t}$. Similarly, from the fact that $F^{(\theta)}(x)-1\leq 0$ for all $x\in {\mathbb{R}}$, we have that for a fixed $x\in {\mathbb{R}}$ the function $t\mapsto G^{(\theta)}(t,x)$ is continuous and strictly increasing on $[0,\infty)$. Furthermore, from the fact that $0\leq F^{(\theta)}(x) \leq 1$, we have that the function ${G^{(\theta)}}$ is bounded by
$1-2e^{-\theta t}\leq G^{(\theta)}(t,x)\leq 1, \qquad (3.2)$
which implies that $|G^{(\theta)}|\leq 1$. Recall that ${m_{\theta}}$ is defined as the median of the random variable ${e_{\theta}}$, that is, ${m_{\theta}}=\log\!(2)/\theta$, so that ${\mathbb{P}}({e_{\theta}}\leq {m_{\theta}})=1/2$.
Hence from (3.2) we have that $G^{(\theta)}(t,x)\geq 0$ for all $x\in {\mathbb{R}}$ and $t\geq {m_{\theta}}$. The above observations tell us that, to solve the optimal stopping problem (2.10), we are interested in a stopping time such that, before stopping, the process X spends most of its time in the region where ${G^{(\theta)}}$ is negative, taking into account that (t, X) may visit the set $\{ (s,x) \in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {G^{(\theta)}}(s,x)>0\}$ and then return to the set $\{ (s,x) \in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {G^{(\theta)}}(s,x)\leq 0 \}$. The only restriction is that once a considerable amount of time has passed, we must stop.
Recall that the function $h^{(\theta)}\,:\,{\mathbb{R}}_+\mapsto {\mathbb{R}}$ is defined as
Hence, we can see that the function $h^{(\theta)}$ is a non-increasing continuous function on $[0,{m_{\theta}})$ such that $\lim_{t\uparrow {m_{\theta}}} h^{(\theta)}(t)=0$ and ${h^{(\theta)}}(t)=-\infty$ for $t\in[{m_{\theta}},\infty)$ . Then ${h^{(\theta)}}$ must satisfy ${h^{(\theta)}}(t)\geq \lim_{s\uparrow {m_{\theta}}} {h^{(\theta)}}(s)=0 $ for $t \in [0,{m_{\theta}})$ .
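In the Brownian-motion-with-drift example of Section 4.1, $F^{(\theta)}$ is an exponential distribution function and the inversion defining $h^{(\theta)}$ can be done by hand: solving $F^{(\theta)}(x)=1-e^{\theta t}/2$ gives $h^{(\theta)}(t)=(\log 2-\theta t)/r$ with $r=\big(\mu+\sqrt{\mu^2+2\theta\sigma^2}\big)/\sigma^2$. The following sketch (ours, with illustrative parameters) makes the monotone decay to 0 concrete.

```python
import numpy as np

mu, sigma, theta = 2.0, 1.0, np.log(2)/10
m = np.log(2)/theta                                   # median of e_theta
r = (mu + np.sqrt(mu**2 + 2*theta*sigma**2))/sigma**2 # exponential rate of -X_bar(e_theta)

def h(t):
    """h^{(theta)}(t): solve 1 - exp(-r x) = 1 - exp(theta t)/2 for x."""
    return (np.log(2) - theta*t)/r if t < m else -np.inf

print([round(h(t), 4) for t in (0.0, m/2, 0.999*m)])  # decreases to 0 as t -> m_theta
```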
In order to characterise the stopping time that minimises (2.10), we first derive some properties of the function ${V^{(\theta)}}$ .
Lemma 3.1. The function ${V^{(\theta)}}$ is non-decreasing in each argument. Moreover, we have $V^{(\theta)}(t,x) \in ({-}{m_{\theta}},0]$ for all $x\in {\mathbb{R}}$ and $t \geq 0$ . In particular, $V^{(\theta)}(t,x)<0$ for any $t\geq 0$ with $x < h^{(\theta)}(t)$ and ${V^{(\theta)}}(t,x)=0$ for all $(t,x)\in [{m_{\theta}},\infty)\times {\mathbb{R}}$ .
Proof. First, note that $V^{(\theta)}\leq 0$ follows from taking $\tau \equiv 0$ in the definition of ${V^{(\theta)}}$ . Moreover, recall that ${G^{(\theta)}}(t,x) > 0$ for $t> {m_{\theta}}$ and $x\in {\mathbb{R}}$ , so we have that ${V^{(\theta)}} $ vanishes on $[{m_{\theta}},\infty)\times {\mathbb{R}}$ . The fact that ${V^{(\theta)}}$ is non-decreasing in each argument follows from the non-decreasing property of the functions $t \mapsto G^{(\theta)}(t,x)$ and $x \mapsto G^{(\theta)}(t,x)$ , as well as the monotonicity of the expectation. Moreover, using standard arguments we can see that
Next we will show that $V^{(\theta)}(t,x)>-{m_{\theta}}$ for all $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ and for all $\theta > 0$ . Note that $t<{m_{\theta}}$ if and only if $1-2e^{-\theta t}<0$ . Then for all $(s,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$ we have that
Hence, for all $x\in {\mathbb{R}}$ and $t<{m_{\theta}}$ ,
The term in the last integral is non-negative, so we obtain for all $t<{m_{\theta}}$ and $x\in {\mathbb{R}}$ that
By using a dynamic programming argument and the fact that ${V^{(\theta)}}$ vanishes on the set $[{m_{\theta}},\infty)\times {\mathbb{R}}$ , we can see that
so that (since $|{G^{(\theta)}}|\leq 1$ ) we have that for all $t\geq 0$ and $x\in {\mathbb{R}}$ ,
Moreover, as a consequence of the properties of $F^{(\theta)}$ we have that the function ${G^{(\theta)}}$ is upper semi-continuous, so we can see that ${V^{(\theta)}}$ is upper semi-continuous (since ${V^{(\theta)}}$ is the infimum of upper semi-continuous functions). Next, consider the Markov process $\{ (t,L_t,X_t), t\geq 0 \}$ , where
For each $t\geq 0$ and $a, x\in {\mathbb{R}}$ , we have that
Therefore, from the general theory of optimal stopping (see [Reference Peskir and Shiryaev21, Corollary 2.9, p. 46]) we have that an optimal stopping time for (2.10) is given by
$\tau_D=\inf\{s\geq 0\,:\, (t+s,X_s+x)\in D \},$
where $D=\{(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {V^{(\theta)}}(t,x)=0 \}$ is a closed set.
Hence, from Lemma 3.1, we derive that $D=\{ (t,x) \in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, x \geq b^{(\theta)}(t)\}$, where the function $b^{(\theta)}\,:\,{\mathbb{R}}_+ \mapsto {\mathbb{R}}$ is given by
$b^{(\theta)}(t)=\inf\{x\in {\mathbb{R}}\,:\, {V^{(\theta)}}(t,x)=0 \}$
for each $t\geq 0$. It follows from Lemma 3.1 that $b^{(\theta)}$ is non-increasing and that $b^{(\theta)}(t)\geq h^{(\theta)}(t)\geq 0$ for all $t\in [0,{m_{\theta}})$. Moreover, ${b^{(\theta)}}(t)=-\infty$ for $t \in [{m_{\theta}},\infty)$, since $V^{(\theta)}(t,x)=0$ for all $t\geq {m_{\theta}}$ and $x \in {\mathbb{R}}$; in particular, $\tau_D \leq ({m_{\theta}}-t)\vee 0$. In the case that $t<{m_{\theta}}$, the value $b^{(\theta)}(t)$ is finite, as we prove in the following lemma.
Lemma 3.2. Let $\theta> 0$ . The function ${b^{(\theta)}}$ is finite-valued for all $t\in [0,{m_{\theta}})$ .
Proof. For any $\theta >0$ and fixed $t\geq 0$ , consider the optimal stopping problem
where $\mathcal{T}_{{m_{\theta}}-t}$ is the set of all stopping times of $\mathbb{F}$ bounded by ${m_{\theta}}-t$. From the fact that for all $s \geq 0$ and $x\in {\mathbb{R}}$, ${G^{(\theta)}}(s+t,x)\geq 1+2e^{-\theta t} (F^{(\theta)}(x)-1)$, and that $\tau_D\in \mathcal{T}_{{m_{\theta}}-t}$ (under ${\mathbb{P}}_{t,x}$ for all $x\in {\mathbb{R}}$ and $t<{m_{\theta}}$), we have that
for all $x\in {\mathbb{R}}$ . Hence it suffices to show that there exists $\tilde{x}_t$ (finite) sufficiently large so that $\mathcal{V}_t^{(\theta)}(x)=0$ for all $x\geq \tilde{x}_t$ .
It can be shown that an optimal stopping time for $\mathcal{V}_t^{(\theta)}$ is $\tau_{\mathcal{D}_t}$ , the first entry time before ${m_{\theta}}-t$ to the set $\mathcal{D}_t=\{ x\in {\mathbb{R}}\,:\, \mathcal{V}_t^{(\theta)}(x)=0 \}$ . We proceed by contradiction. Assume that $\mathcal{D}_t=\emptyset$ ; then $\tau_{\mathcal{D}_t}={m_{\theta}}-t$ . Hence, by the dominated convergence theorem and the spatial homogeneity of Lévy processes, we have that
which is a contradiction. Therefore, we conclude that for each $t \geq 0$ , there exists a finite value $\tilde{x}_t$ such that ${b^{(\theta)}}(t)\leq \tilde{x}_t$ .
Remark 3.1. From the proof of Lemma 3.2, we obtain an upper bound for the boundary ${b^{(\theta)}}$. Define, for each $t\in [0,{m_{\theta}})$, $u^{(\theta)}(t)=\inf\{x \in {\mathbb{R}}\,:\, \mathcal{V}^{(\theta)}_t(x)=0 \}$. Then it follows that $u^{(\theta)}$ is a non-increasing finite function such that
${b^{(\theta)}}(t)\leq u^{(\theta)}(t)$
for all $t \in [0,{m_{\theta}})$ .
Next we show that the function ${V^{(\theta)}}$ is continuous.
Lemma 3.3. The function ${V^{(\theta)}}$ is continuous. Moreover, for each $x\in {\mathbb{R}}$ , $t \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}_+$ , and for every $t \in {\mathbb{R}}_+$ , $x \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}$ .
Proof. First, we are showing that for a fixed $t\geq 0$ , the function $x \mapsto V^{(\theta)}(t,x)$ is Lipschitz on ${\mathbb{R}}$ . Recall that if $t\geq {m_{\theta}}$ , then $V^{(\theta)}(t,x)=0$ for all $x\in {\mathbb{R}}$ , so the assertion is clear. Suppose that $t<{m_{\theta}}$ . Let $x, y \in {\mathbb{R}}$ and define $\tau_{x}^*= \tau_{D}(t,x)=\inf\{s\geq 0\,:\, X_s+x\geq {b^{(\theta)}}(s+t) \}$ . Since $\tau_x^*$ is optimal in $V^{(\theta)}(t,x)$ (under ${\mathbb{P}}$ ), we have that
Define the stopping time
$\tau_{{b^{(\theta)}}(0)-x}^+=\inf\{t>0\,:\, X_t>{b^{(\theta)}}(0)-x \}.$
Then we have that $\tau_x^* \leq \tau_{b^{(\theta)}(0)-x}^+$ (since $b^{(\theta)}$ is a non-increasing function). From the fact that $F^{(\theta)}$ is non-decreasing, we obtain that for ${b^{(\theta)}}(0)\geq y\geq x$ ,
Using Fubini’s theorem and a density of the potential measure of the process killed upon exiting $({-}\infty,{b^{(\theta)}}(0)]$ (see Equation (2.5)), we get that
where in the last inequality we used the fact that $W^{(\theta)}$ is strictly increasing and non-negative and that $F^{(\theta)}$ vanishes at $({-}\infty,0)$ . By an integration-by-parts argument, we obtain that
Moreover, it can be checked (see [Reference Kuznetsov, Kyprianou and Rivero14, Lemma 3.3]) that $z\mapsto e^{-\Phi(\theta)z} W^{(\theta)}(z)$ is a continuous function on the interval $[0,\infty)$ such that
$\lim_{z\rightarrow \infty} e^{-\Phi(\theta)z} W^{(\theta)}(z)=\frac{1}{\psi'(\Phi(\theta))}<\infty.$
This implies that there exists a constant $M>0$ such that $0\leq e^{-\Phi(\theta)z}W^{(\theta)}(z)<M$ for every $z\in {\mathbb{R}}$. Then we obtain that for all $x\leq y \leq {b^{(\theta)}}(0)$,
On the other hand, since ${b^{(\theta)}}(0)\geq {b^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ , we have that for all $(t,x) \in [0,{m_{\theta}}) \times [{b^{(\theta)}}(0),\infty)$ , ${V^{(\theta)}}(t,x)=0$ . Hence we obtain that for all $x,y \in {\mathbb{R}}$ and $t\geq 0$ ,
Therefore we conclude that for a fixed $t\geq 0$ , the function $x \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}$ .
Using a similar argument and the fact that the function $t \mapsto e^{-\theta t}$ is Lipschitz continuous on $[0,\infty)$ , we can show that for any $s,t< {m_{\theta}}$ ,
and therefore $t\mapsto {V^{(\theta)}}(t,x)$ is Lipschitz continuous for all $x\in {\mathbb{R}}$ .
In order to derive more properties of the boundary ${b^{(\theta)}}$, we first state some auxiliary results. Recall that if $f \in C_b^{1,2}({\mathbb{R}}_+ \times {\mathbb{R}})$, the set of real bounded $C^{1,2}$ functions on $ {\mathbb{R}}_+ \times {\mathbb{R}}$ with bounded derivatives, the infinitesimal generator of (t, X) is given by
$\mathcal{A}_{(t,X)}f(t,x)=\frac{\partial}{\partial t} f(t,x)+\mathcal{A}_{X}f(t,\cdot)(x),$
where
$\mathcal{A}_{X}f(t,\cdot)(x)=\mu \frac{\partial}{\partial x} f(t,x)+\frac{\sigma^2}{2} \frac{\partial^2}{\partial x^2} f(t,x)+\int_{({-}\infty,0)}\Big[f(t,x+y)-f(t,x)-y\frac{\partial}{\partial x} f(t,x){\mathbb{I}}_{\{y>-1\}}\Big]\Pi({{d}} y). \qquad (3.7)$
Let $C={\mathbb{R}}_+ \times {\mathbb{R}} \setminus D=\{ (t,x) \in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, x<{b^{(\theta)}}(t)\}$ be the continuation region. Then we have that the value function ${V^{(\theta)}}$ satisfies a variational inequality in the sense of distributions. The proof is analogous to the one presented in [Reference Lamberton and Mikou16] (see Proposition 2.5), so the details are omitted.
Lemma 3.4. Fix $\theta > 0$ . The distribution $\mathcal{A}_{(t,X)} {V^{(\theta)}} +{G^{(\theta)}}$ is non-negative on ${\mathbb{R}}_+\times {\mathbb{R}}$ . Moreover, we have that $ \mathcal{A}_{(t,X)} {V^{(\theta)}} +{G^{(\theta)}}=0$ on C.
We next define an auxiliary function on the set D which is useful in proving the left-continuity of the boundary ${b^{(\theta)}}$. For $\theta>0$, let
$\varphi^{(\theta)}(t,x)\,:\!=\,\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\,\Pi({{d}} y)+{G^{(\theta)}}(t,x), \qquad (t,x)\in D.$
From the fact that ${V^{(\theta)}}$ vanishes on D and that $\Pi$ is finite on sets of the form $({-}\infty,-\varepsilon)$ for $\varepsilon>0$, we can see that $|\varphi^{(\theta)}(t,x)|<\infty$ for all (t, x) in the interior of D. Moreover, by the lemma above and the properties of ${V^{(\theta)}}$ and ${G^{(\theta)}}$, it can be shown that $\varphi^{(\theta)}$ is strictly positive, continuous, and strictly increasing (in each argument) in the interior of D.
Now we are ready to give further properties of the curve ${b^{(\theta)}}$ in the set $[0,{m_{\theta}})$ .
Lemma 3.5. The function $b^{(\theta)}$ is continuous on $[0,{m_{\theta}})$ . Moreover we have that $\lim_{t\uparrow {m_{\theta}}} {b^{(\theta)}}(t)=0$ .
Proof. The method of proof of the continuity of ${b^{(\theta)}}$ in $[0,{m_{\theta}})$ is heavily based on the work of [Reference Lamberton and Mikou16] (see Theorem 4.2, where the continuity of the boundary is shown in the American option context), so the proof is omitted.
We then show that the limit holds. Define ${b^{(\theta)}}({m_{\theta}}{-})\,:\!=\,\lim_{t\uparrow {m_{\theta}}} {b^{(\theta)}}(t)$. We have ${b^{(\theta)}}({m_{\theta}}{-})\geq 0$, since ${b^{(\theta)}}(t) \geq {h^{(\theta)}}(t)\geq 0$ for all $t\in [0,{m_{\theta}})$. The proof is by contradiction, so we assume that ${b^{(\theta)}}({m_{\theta}}{-})>0$. Note that for all $x \in {\mathbb{R}}$, we have that ${V^{(\theta)}}({m_{\theta}},x)=0$ and ${G^{(\theta)}}({m_{\theta}},x)=F^{(\theta)}(x)$. Moreover, we have that
in the sense of distributions on $(0,{m_{\theta}})\times \big(0,{b^{(\theta)}}({m_{\theta}}{-}) \big)$. Hence, by continuity, we can derive, for $t\in [0,{m_{\theta}})$, that $\mathcal{A}_X \big({V^{(\theta)}}\big)(t,\cdot)+{G^{(\theta)}}(t,\cdot) \leq 0 $ on the interval $\big(0,{b^{(\theta)}}({m_{\theta}}{-})\big)$. Then, by taking $t\uparrow {m_{\theta}}$ we obtain that
in the sense of distributions, where we used the continuity of ${V^{(\theta)}}$ and ${G^{(\theta)}}$, the fact that ${V^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$, and the fact that $F^{(\theta)}(x)>0$ for all $x>0$. This is a contradiction, and we conclude that ${b^{(\theta)}}({m_{\theta}}{-})=0$.
Define the value
$t_b\,:\!=\,\inf\{ t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0\},$
with the convention $\inf \emptyset={m_{\theta}}$.
Note that in the case where X is a process of infinite variation, the distribution function $F^{(\theta)}$ of the random variable $-\underline{X}_{{e_{\theta}}}$ is continuous on ${\mathbb{R}}$, strictly increasing, and strictly positive on the open set $(0,\infty)$, with $F^{(\theta)}(0)=0$. This fact implies that the inverse function of $F^{(\theta)}$ exists on $(0,\infty)$, and the function ${h^{(\theta)}}$ can be written for $t\in [0,{m_{\theta}})$ as
$h^{(\theta)}(t)=\big(F^{(\theta)}\big)^{-1}\big(1-e^{\theta t}/2\big).$
Hence we conclude that ${h^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ . Therefore, when X is a process of infinite variation, we have ${b^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ and hence $t_b={m_{\theta}}$ . For the case of finite variation, we have that $t_b \in [0,{m_{\theta}})$ , which implies that ${b^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$ and ${b^{(\theta)}}(t)>0$ for all $t\in [0,t_b)$ . In the next lemma, we characterise its value.
Lemma 3.6. Let $\theta>0$ and let X be a process of finite variation. We have that for all $t\geq 0$ and $x\in {\mathbb{R}}$ ,
Moreover, for any Lévy process, $t_b$ is given by
where $V^{(\theta)}_B$ is given by
for all $t\in [0,{m_{\theta}})$ and $y \in {\mathbb{R}}$ .
Proof. Assume that X is a process of finite variation. We first show that
for all $t\geq 0$ and $x\in {\mathbb{R}}$ . The case $t\geq {m_{\theta}}$ is straightforward since ${V^{(\theta)}}(t,x)=0$ for all $x\in {\mathbb{R}}$ . The case $t<{m_{\theta}}$ follows from the Lipschitz continuity of the mapping $x\mapsto {V^{(\theta)}}(t,x)$ , the fact that $\Pi$ is finite on intervals away from zero, and since $\int_{({-}1,0)} y\Pi({{d}} y)>-\infty$ when X is of finite variation. Moreover, from Lemma 3.4, we obtain that
on C in the sense of distributions, where the last inequality follows since ${V^{(\theta)}}$ is non-decreasing in each argument and $\delta>0$, with $\delta$ as defined in (2.3). Then by the continuity of the functions ${V^{(\theta)}}$ and ${G^{(\theta)}}$ (recall that ${G^{(\theta)}}$ is at least continuous on $(0,\infty)\times (0,\infty)$ and right-continuous at points of the form (t, 0) for $t\geq 0$) we can derive
for all $t\in [0,t_b)$ .
Next, we show that the set $\big\{t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0 \big\}$ is non-empty. We proceed by contradiction. Assume that ${b^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ , so that $t_b={m_{\theta}}$ . Taking $t\uparrow {m_{\theta}}$ in (3.11) and applying the dominated convergence theorem, we obtain that
where the strict inequality follows from
since X is of finite variation. Therefore we observe a contradiction, which shows that $\big\{t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0 \big\}\neq \emptyset$ . Moreover, by definition, we have that $t_b=\inf\{ t\in [0,{m_{\theta}})\,:$ ${b^{(\theta)}}(t)=0\}$ .
Next we find an expression for ${V^{(\theta)}}(t,x)$ when $t\in (0,{m_{\theta}})$ and $x< 0$ . Since ${b^{(\theta)}}(t)\geq 0$ for all $t\in [0,{m_{\theta}})$ , we have that
where the first equality follows since $X_s\leq 0$ for all $s\leq \tau_0^+$ and ${G^{(\theta)}}(t,x)=1-2e^{-\theta t}$ for all $x<0$. Hence, in particular, we have that ${V^{(\theta)}}(t,x)=V^{(\theta)}_B(t,x)$ for all $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$.
We show that (3.10) holds. From the discussion after Lemma 3.4, we know that
for all $x>0$ and $t\geq t_b$. Then by taking $x\downarrow 0$, making use of the right continuity of $x\mapsto {G^{(\theta)}}(t,x)$ and the continuity of ${V^{(\theta)}}$ (see Lemma 3.3), and applying the dominated convergence theorem, we derive that
In particular, if $t_b=0$ , (3.10) holds since $t\mapsto {V^{(\theta)}}(t,y)$ (for all $y\in {\mathbb{R}}$ ) and ${G^{(\theta)}}(t,0)$ are non-decreasing functions. If $t_b>0$ , taking $t\uparrow t_b$ in (3.11) gives us
Hence we have that $\int_{({-}\infty,0)}V^{(\theta)}_B(t_b,y)\Pi({{d}} y)+{G^{(\theta)}}(t_b,0)=0$ , with (3.10) becoming clear because $t\mapsto V^{(\theta)}_B(t,x)$ is non-decreasing. If X is a process of infinite variation, we have that ${h^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ and therefore ${G^{(\theta)}}(t,x)< 0$ for all $t\in [0,{m_{\theta}})$ and $x\leq 0$ , which implies that
Now we prove that the derivatives of ${V^{(\theta)}}$ exist at the boundary ${b^{(\theta)}}$ for those points in which ${b^{(\theta)}}$ is strictly positive.
Lemma 3.7. For all $t\in [0,t_b)$, the partial derivatives of ${V^{(\theta)}}$ at $\big(t,{b^{(\theta)}}(t)\big)$ exist and are equal to zero, i.e.,
$\frac{\partial }{\partial t}{V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=\frac{\partial }{\partial x}{V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=0.$
Proof. First we prove the assertion for the first argument. Using a similar idea to that in Lemma 3.3, we have that for any $t<t_b$, $x\in {\mathbb{R}}$, and $h>0$,
where $\tau^*_h=\inf\{r \in [0,{m_{\theta}}-t+h]\,:\, X_r \geq {b^{(\theta)}}(r+t-h) \}$ is the optimal stopping time for ${V^{(\theta)}}(t-h,x)$ and the second inequality follows since ${b^{(\theta)}}$ is non-increasing. Hence, by (2.1) and the continuity of ${b^{(\theta)}}$, we obtain that
Now we show that the partial derivative of the second argument exists at ${b^{(\theta)}}(t)$ and is equal to zero. Fix any time $t \in [0,t_b)$ , $\varepsilon>0$ , and $x\leq b^{(\theta)}(t)$ (without loss of generality, we assume that $\varepsilon<x$ ). By a similar argument to that provided in Lemma 3.3, we obtain that
Dividing by $\varepsilon$ , we have that for $t\in [0,{m_{\theta}})$ and $\varepsilon<x $ ,
where
By using the facts that $W^{(\theta)}$ and $F^{(\theta)}$ are non-decreasing, that $W^{(\theta)}$ (and hence $F^{(\theta)}$) has left and right derivatives, and the dominated convergence theorem, we can show that for $t\in [0,t_b)$, $\lim_{\varepsilon \downarrow 0} R_i^{(\varepsilon)}\big(t,{b^{(\theta)}}(t)\big)=0$ for $i=1,2, 3$. Hence, we have that
This proves that $x \mapsto V^{(\theta)}(t,x)$ is differentiable at $b^{(\theta)}(t)$, with $ (\partial/\partial x) V^{(\theta)}\big(t,b^{(\theta)}(t)\big)=0$ for $t\in [0,t_b)$.
Remark 3.2. Note that when X is of infinite variation we have that $W^{(\theta)}$ (and hence $F^{(\theta)}$ ) is a $C^1(0,\infty)$ function (see Lemma 8.2 and the discussion thereafter in [Reference Kyprianou15, pp. 240--241]). Hence, we deduce from the mean value theorem and Equation (3.13) that for each $t \in [0,{m_{\theta}})$ , there exists a constant $C_t>0$ such that
for any $\varepsilon>0$. Therefore, by Lemma 3.6 and the above, we deduce that $|\varphi^{(\theta)}\big(t,{b^{(\theta)}}(t)\big)|<\infty $ for all $t\in [0,{m_{\theta}})$ for any spectrally negative Lévy process X.
The next theorem looks at how the value function ${V^{(\theta)}}$ and the curve ${b^{(\theta)}}$ can be characterised as a solution of nonlinear integral equations within a certain family of functions. These equations are in fact generalisations of the free boundary equation (see e.g. [Reference Peskir and Shiryaev21, Section 14.1, pp. 219--221], in a diffusion setting) in the presence of jumps. It is important to mention that the proof of Theorem 3.1 is mainly inspired by the ideas of [Reference Du Toit, Peskir and Shiryaev10], with some extensions to allow for the presence of jumps.
Theorem 3.1. Let X be a spectrally negative Lévy process and let $t_b$ be as characterised in (3.10). For all $t\in [0,t_b)$ and $x\in {\mathbb{R}}$ , we have that
and ${b^{(\theta)}}(t)$ solves the equation
If $t\in [t_b,{m_{\theta}})$ , we have that ${b^{(\theta)}}(t)=0$ and
for all $x\in {\mathbb{R}}$. Moreover, the pair $\big({V^{(\theta)}},{b^{(\theta)}}\big)$ is uniquely characterised as the solution to Equations (3.14)–(3.16) in the class of continuous functions on ${\mathbb{R}}_+\times{\mathbb{R}}$ and ${\mathbb{R}}_+$, respectively, such that ${b^{(\theta)}}\geq {h^{(\theta)}}$, ${V^{(\theta)}}\leq 0$, and $\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\Pi({{d}} y) +{G^{(\theta)}}(t,x) \geq 0$ for all $t\in [0,t_b)$ and $x\geq {b^{(\theta)}}(t)$.
3.1. Proof of Theorem 3.1
Since the proof of Theorem 3.1 is rather long, we split it into a series of lemmas. This subsection is entirely dedicated to this purpose. With the help of Itô’s formula, and following an analogous argument to that of [Reference Lamberton and Mikou17] (in the infinite-variation case), we prove that ${V^{(\theta)}}$ and ${b^{(\theta)}}$ are solutions to the integral equations listed above. The finite-variation case is proved using an argument that considers the consecutive times at which X hits the curve ${b^{(\theta)}}$ .
Lemma 3.8. The pair $\big({V^{(\theta)}},{b^{(\theta)}}\big)$ solves Equations (3.14)–(3.16).
Proof. Recall from Lemma 3.6 that when $t_b<{m_{\theta}}$ , the value function ${V^{(\theta)}}(t,x)$ satisfies Equation (3.16) for $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$ . We also have that Equation (3.15) follows from (3.14) by letting $x={b^{(\theta)}}(t)$ and using that ${V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=0$ .
We proceed to show that $({V^{(\theta)}},{b^{(\theta)}})$ solves Equation (3.14). First, we assume that X is a process of infinite variation. We follow an argument analogous to that used for [Reference Lamberton and Mikou17, Theorem 3.2]. Consider a regularising sequence $\{\rho_n \}_{n\geq 1}$ of non-negative $C^{\infty}({\mathbb{R}}_+\times {\mathbb{R}})$ functions with support in $[{-}1/n,0]\times [{-}1/n,0]$ such that $\int_{-\infty}^0 \int_{-\infty}^0 \rho_n(s,y){{d}} s {{d}} y=1$ . For every $n\geq 1$ , define the function $V^{(\theta)}_n$ by
for any $(t,x)\in [1/n,\infty)\times {\mathbb{R}}$. Then for each $n\geq 1$, the function $V^{(\theta)}_n$ is a $C^{1,2}({\mathbb{R}}_+\times {\mathbb{R}})$ bounded function (since ${V^{(\theta)}}$ is bounded). Moreover, it can be shown that $ V^{(\theta)}_n \uparrow {V^{(\theta)}}$ on ${\mathbb{R}}_+\times {\mathbb{R}}$ as $n\rightarrow \infty$ and that (see the proof of [Reference Lamberton and Mikou16, Proposition 2.5])
where $\mathcal{A}_{X}$ is the infinitesimal generator of X given in (3.7) and $C={\mathbb{R}}_+\times {\mathbb{R}} \setminus D$ . Let $t\in (0,t_b]$ , $m>0$ such that $t>1/m$ , and $x\in {\mathbb{R}}$ . Applying Itô’s formula to $V^{(\theta)}_n(t+s,X_{s}+x )$ , for $s\in[0,{m_{\theta}}-t]$ , we obtain that for any $n\geq m$ ,
where $\{ M_{s}^{t,n}, t\geq 0 \}$ is a zero-mean martingale. Hence, taking the expectation and using (3.17), we derive that
where we used the fact that ${b^{(\theta)}}(s)$ is finite for all $s\geq 0$ and that ${\mathbb{P}}(X_s+x={b^{(\theta)}}(t+s))=0$ for all $s>0$ and $x\in {\mathbb{R}}$ when X is of infinite variation (see [Reference Sato22, Theorem 27.4, p. 175]). Taking $s={m_{\theta}}-t$, using the fact that ${V^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$, and letting $n\rightarrow \infty$ (by the dominated convergence theorem), we obtain that (3.14) holds for any $(t,x)\in (0,t_b)\times{\mathbb{R}}$. The case $t=0$ follows by continuity.
For the finite-variation case, we define the auxiliary function
for all $(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$. We then prove that $R^{(\theta)}={V^{(\theta)}}$. First, note that from the discussion after Lemma 3.4 we have that $\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\,\Pi({{d}} y) +{G^{(\theta)}}(t,x)\geq 0$ for all $(t,x)\in D$. Then we have that for all $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$,
where we used that $|{G^{(\theta)}}|\leq 1$ in the last inequality. For each $(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$ , we define the times at which the process X hits the curve ${b^{(\theta)}}$ . Let $\tau_b^{(1)}=\inf\{s\in [0,{m_{\theta}}-t]\,:\, X_s\geq {b^{(\theta)}}(s+t) \}$ , and for $k\geq 1$ ,
where in this context we understand that $\inf \emptyset ={m_{\theta}}-t$ . Taking $t\in [0,{m_{\theta}}]$ and $x>{b^{(\theta)}}(t)$ gives us
where the last equality follows from the strong Markov property applied at times $\sigma_b^{(1)}$ and $\tau_b^{(2)}$ , respectively, and from the fact that $\tau_D$ is optimal for ${V^{(\theta)}}$ . Using the compensation formula for Poisson random measures (see [Reference Kyprianou15, Theorem 4.4, p. 99]), it can be shown that
Hence, for all $(t,x)\in D$ , we have that
Using an induction argument, it can be shown that for all $(t,x)\in D$ and $n\geq 2$ ,
where the last equality follows since $R^{(\theta)}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ . Since X is of finite variation, it can be shown that for all $x\in {\mathbb{R}}$ , $\lim_{n \rightarrow \infty} \tau_b^{(n)}={m_{\theta}}-t$ ${\mathbb{P}}_x$ -a.s. Therefore, from (3.18) and taking $n\rightarrow \infty$ , we conclude that for all $(t,x)\in D$ ,
where the last inequality follows from the dominated convergence theorem. On the other hand, if we take $t\in [0,{m_{\theta}}]$ and $x<{b^{(\theta)}}(t)$ , then by the strong Markov property applied to the filtration at time $\tau_b^{(1)}$ , we have that
where we used the fact that $\tau_b^{(1)}$ is an optimal stopping time for ${V^{(\theta)}}$ and that $R^{(\theta)}$ vanishes on D. So then (3.14) also holds in the finite-variation case.
Next we proceed to show the uniqueness result. Suppose that there exist a non-positive continuous function ${U^{(\theta)}}\,:\, [0,{m_{\theta}}]\times {\mathbb{R}} \mapsto ({-}\infty,0]$ and a continuous function ${c^{(\theta)}}$ on $[0,{m_{\theta}})$ such that ${c^{(\theta)}}\geq {h^{(\theta)}}$ and ${c^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$ . We assume that the pair $\big({U^{(\theta)}},{c^{(\theta)}}\big)$ solves the equations
and
when $t\in [0,t_b)$ and $x\in {\mathbb{R}}$ . For $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$ , we assume that
In addition, we assume that
Note that $({U^{(\theta)}},{c^{(\theta)}})$ solving the above equations means that ${U^{(\theta)}}(t,{c^{(\theta)}}(t))=0$ for all $t \in [0,{m_{\theta}})$ and ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ . Denote by $D_c$ the ‘stopping region’ under the curve ${c^{(\theta)}}$ , i.e., $D_c=\{(t,x) \in [0,{m_{\theta}}]\times {\mathbb{R}}\,:\, x \geq {c^{(\theta)}}(t) \}$ , and recall that $D=\{(t,x) \in [0,{m_{\theta}}]\times {\mathbb{R}}\,:\, x \geq {b^{(\theta)}}(t) \}$ is the ‘stopping region’ under the curve ${b^{(\theta)}}$ . We show that ${U^{(\theta)}}$ vanishes on $D_c$ in the next lemma.
Lemma 3.9. We have that ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ .
Proof. Since the statement is clear for $(t,x)\in [t_b,{m_{\theta}})\times [0,\infty)$, we take $t\in [0,t_b)$ and $x\geq {c^{(\theta)}}(t)$. Define $\sigma_c$ to be the first time that the process is outside $D_c$ before time ${m_{\theta}}-t$, i.e.,
$\sigma_c=\inf\{r\in [0,{m_{\theta}}-t]\,:\, X_r<{c^{(\theta)}}(t+r) \},$
where in this context we understand that $\inf \emptyset= {m_{\theta}}-t$ . From the fact that $X_{r}\geq {c^{(\theta)}}(t+r)$ for all $r< \sigma_c$ and the strong Markov property at time $\sigma_c$ , we obtain that
where the last equality follows since ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x \in {\mathbb{R}}$ and ${U^{(\theta)}}(t,{c^{(\theta)}}(t))=0$ for all $t\in [0,t_b)$ . Then, applying the compensation formula for Poisson random measures (see [Reference Kyprianou15, Theorem 4.4, p. 99]), we get
Hence ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ , as claimed.
The next lemma shows that ${U^{(\theta)}}$ can be expressed as an integral involving only the gain function ${G^{(\theta)}}$ stopped at the first time the process enters the set $D_c$ . As a consequence, ${U^{(\theta)}}$ dominates the function ${V^{(\theta)}}$ .
Lemma 3.10. We have that ${U^{(\theta)}}(t,x)\geq {V^{(\theta)}}(t,x)$ for all $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$.
Proof. Note that we can assume that $t\in [0,t_b)$, because for $(t,x)\in D_c$ we have ${U^{(\theta)}}(t,x)=0\geq {V^{(\theta)}}(t,x)$, and for $t\in [t_b,{m_{\theta}})$ we have ${U^{(\theta)}}(t,x)={V^{(\theta)}}(t,x)$, for all $x\in {\mathbb{R}}$. Consider the stopping time
$\tau_c=\inf\{r\in [0,{m_{\theta}}-t]\,:\, X_r\geq {c^{(\theta)}}(t+r) \}.$
Let $x\leq {c^{(\theta)}} (t)$ . Using the fact that $X_{r}< {c^{(\theta)}}(t+r)$ for all $r\leq \tau_c$ and the strong Markov property at time $\tau_c$ , we obtain that
where the second equality follows since X creeps upwards, and therefore $X_{\tau_c}={c^{(\theta)}}(t+\tau_c)$ on $\{\tau_c<{m_{\theta}}-t \}$, and since ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$. Then from the definition of ${V^{(\theta)}}$ (see (2.10)), we have that
Therefore ${U^{(\theta)}} \geq {V^{(\theta)}}$ on $ [0,{m_{\theta}}]\times {\mathbb{R}}$ .
We proceed by showing that the function ${c^{(\theta)}}$ is dominated by ${b^{(\theta)}}$ . In the upcoming lemmas, we show that equality indeed holds.
Lemma 3.11. We have that ${b^{(\theta)}}(t)\geq {c^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ .
Proof. The statement is clear for $t\in [t_b,{m_{\theta}})$ . We prove the statement by contradiction. Suppose that there exists a value $t_0\in [0,t_b)$ such that ${b^{(\theta)}}(t_0)<{c^{(\theta)}}(t_0)$ , and take $x\in ({b^{(\theta)}}(t_0),{c^{(\theta)}}(t_0))$ . Consider the stopping time
Applying the strong Markov property to the filtration at time $\sigma_b$ , we obtain that
where we used the fact that ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ . From Lemma 3.10 and the fact that ${U^{(\theta)}}\leq 0$ (by assumption), we have that for all $t\in [0,{m_{\theta}})$ and $x>{b^{(\theta)}}(t)$ , $ {U^{(\theta)}}(t,x)=0$ . Hence, by the compensation formula for Poisson random measures, we obtain that
Recall from the discussion after Lemma 3.4 that the function $\varphi^{(\theta)}$ is strictly positive on D. Hence we obtain that for all $(t,x)\in D$,
The assumption that ${b^{(\theta)}}(t_0)<{c^{(\theta)}}(t_0)$ together with the continuity of the functions ${b^{(\theta)}}$ and ${c^{(\theta)}}$ means that there exists $s_0\in(t_0,{m_{\theta}})$ such that ${b^{(\theta)}}(r)<{c^{(\theta)}}(r)$ for all $r\in [t_0,s_0]$ . Consequently, the ${\mathbb{P}}_{x}$ -probability of X spending a strictly positive amount of time (with respect to Lebesgue measure) in this region is strictly positive. We can then conclude that
This is a contradiction, and therefore we conclude that ${b^{(\theta)}}(t) \geq {c^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ .
Note that the definition of ${U^{(\theta)}}$ on $[t_b,{m_{\theta}})\times {\mathbb{R}}$ (see Equation (3.21)) together with the condition (3.22) implies that
for all $t\in [0,{m_{\theta}})$ and $x>{c^{(\theta)}}(t)$ . The next lemma shows that ${U^{(\theta)}}$ and ${V^{(\theta)}}$ coincide.
Lemma 3.12. We have that ${b^{(\theta)}}(t)={c^{(\theta)}}(t)$ for all $t\geq 0$ , and hence ${V^{(\theta)}}={U^{(\theta)}}$ .
Proof. We prove that ${b^{(\theta)}}={c^{(\theta)}}$ by contradiction. Assume that there exists $s_0$ such that ${b^{(\theta)}}(s_0)>{c^{(\theta)}}(s_0)$. Since ${c^{(\theta)}}(t)={b^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$, we deduce that $s_0\in [0,t_b)$. Let $\tau_b$ be the stopping time
$\tau_b=\inf\{r\geq 0\,:\, X_r\geq {b^{(\theta)}}(s_0+r) \}.$
Applying the strong Markov property at time $\tau_b$, we obtain that for any $x\in ({c^{(\theta)}}(s_0),{b^{(\theta)}}(s_0))$,
where the second inequality follows from the fact that ${U^{(\theta)}}\geq {V^{(\theta)}}$ (see Lemma 3.10) and the last equality follows as $\tau_b$ is the optimal stopping time for ${V^{(\theta)}}(s_0,x)$ . Note that since X creeps upwards, we have that ${U^{(\theta)}}(s_0+\tau_b,X_{\tau_b})={U^{(\theta)}}(s_0+\tau_b,{b^{(\theta)}}(s_0+\tau_b))=0$ .
Hence,
However, the continuity of the functions ${b^{(\theta)}}$ and ${c^{(\theta)}}$ gives the existence of $s_1\in (s_0,{m_{\theta}})$ such that $ {c^{(\theta)}}(r)<{b^{(\theta)}}(r)$ for all $r\in [s_0,s_1]$. Combining this with the fact that $\int_{({-}\infty,0)} {U^{(\theta)}}(t,x+y)\,\Pi({{d}} y)+{G^{(\theta)}}(t,x)>0$ for all $(t,x)\in D_c$, we can conclude that
which is a contradiction.
4. Examples
4.1. Brownian motion with drift
Suppose that $X=\{X_t ,t\geq 0 \}$ is a Brownian motion with drift. That is, for any $t\geq 0$, $X_t=\mu t+\sigma B_t$, where $\sigma>0$ and $\mu\in {\mathbb{R}}$. In this case, we have that
$\psi(\beta)=\mu \beta+\frac{\sigma^2}{2}\beta^2$
for all $\beta \geq 0$ . Then
It is well known that $-\underline{X}_{{e_{\theta}}}$ has exponential distribution (see e.g. [7, p. 251] or [Reference Kyprianou15, p. 233]), with distribution function given by
$F^{(\theta)}(x)=1-e^{-\big(\mu+\sqrt{\mu^2+2\theta\sigma^2}\big)x/\sigma^2}, \qquad x\geq 0.$
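As a consistency check (ours, not part of the original text), the exponential form above can be compared numerically with the scale-function expression $F^{(\theta)}(x)=1-Z^{(\theta)}(x)+\frac{\theta}{\Phi(\theta)}W^{(\theta)}(x)$ from Section 2, using the closed-form $W^{(\theta)}$ of this example; SciPy's quad and illustrative parameters are assumed.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, theta = 2.0, 1.0, np.log(2)/10
disc = np.sqrt(mu**2 + 2*theta*sigma**2)
bp, bm = (-mu + disc)/sigma**2, (-mu - disc)/sigma**2   # roots of psi(beta) = theta; bp = Phi(theta)

W = lambda x: (np.exp(bp*x) - np.exp(bm*x))/disc        # W^{(theta)}(x), x >= 0
Z = lambda x: 1.0 + theta*quad(W, 0.0, x)[0]            # Z^{(theta)}(x), x >= 0

F_scale = lambda x: 1.0 - Z(x) + (theta/bp)*W(x)        # 1 - E_x(e^{-theta tau_0^-})
F_exp   = lambda x: 1.0 - np.exp(-(mu + disc)*x/sigma**2)
print(F_scale(0.5), F_exp(0.5))                         # the two expressions should agree
```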
Denote by $\Phi(x;a,b^2)$ the distribution function of a normal random variable with mean $a \in {\mathbb{R}}$ and variance $b^2$; i.e., for any $x\in {\mathbb{R}}$,
$\Phi(x;a,b^2)=\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi b^2}}\, e^{-(y-a)^2/(2b^2)}\,{{d}} y.$
For any $b,s,t\geq 0$ and $x\in {\mathbb{R}}$ , define the function
Then it can easily be shown that
Thus, we have that $b^{(\theta)}$ satisfies the nonlinear integral equation
for all $t\in [0,{m_{\theta}})$ , and the value function ${V^{(\theta)}}$ is given by
for all $(t,x)\in {\mathbb{R}}_+ \times {\mathbb{R}}$ . Note that we can approximate the integrals above by Riemann sums, so a numerical approximation can be implemented. Indeed, take $n \in \mathbb{Z}_+$ sufficiently large and define $h={m_{\theta}}/n$ . For each $k\in \{0,1,2,\ldots,n \}$ , we define $t_k=kh$ . Then the sequence of times $\{ t_k, k= 0,1,\ldots,n \}$ is a partition of the interval $[0,{m_{\theta}}]$ . Then, for any $x\in {\mathbb{R}}$ and $t\in [t_k,t_{k+1})$ for $k \in \{0,1,\ldots, n-1 \}$ , we approximate ${V^{(\theta)}}(t,x)$ by
where the sequence $\{b_k, k=0,1,\ldots,n -1\}$ is a solution to
for each $k\in \{0,1,\ldots, n-1 \}$. Note that the sequence $\{b_k, k=0,1,\ldots,n-1 \}$ is a numerical approximation to the sequence $\{{b^{(\theta)}}(t_k), k=0,1,\ldots,n-1 \}$ (for n sufficiently large) and can be calculated by using backwards induction. In Figure 2, we show a numerical calculation of the equations above. The parameters used are $\mu=2$ and $\sigma=1$, and we chose ${m_{\theta}}=10$ and $n=10,000$ time steps (so that $h=0.001$).
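Independently of the integral-equation scheme above, the optimal stopping problem (2.10) can also be discretised directly by dynamic programming, which provides a rough cross-check of the computed boundary. The sketch below is our addition and deliberately not the scheme of Theorem 3.1: it iterates the Bellman recursion ${V^{(\theta)}}(t,x)\approx \min\{0,\, G^{(\theta)}(t,x)\,h+{\mathbb{E}}[{V^{(\theta)}}(t+h,x+X_h)]\}$ backwards from the terminal condition ${V^{(\theta)}}({m_{\theta}},\cdot)=0$, using the Gaussian law of the increments. Grid sizes, the truncation of the Gaussian kernel, and the zero-padding at the edges of the space grid are all crude and illustrative.

```python
import numpy as np

mu, sigma, theta = 2.0, 1.0, np.log(2)/10
m = np.log(2)/theta                                    # horizon m_theta
r = (mu + np.sqrt(mu**2 + 2*theta*sigma**2))/sigma**2  # exponential rate of -X_bar(e_theta)

n = 400; dt = m/n                                      # time grid on [0, m_theta]
xs = np.linspace(-2.0, 4.0, 601); dx = xs[1] - xs[0]   # space grid
F = np.where(xs >= 0, 1.0 - np.exp(-r*np.maximum(xs, 0.0)), 0.0)

z = np.arange(-60, 61)*dx                              # truncated grid of increments
w = np.exp(-(z - mu*dt)**2/(2*sigma**2*dt)); w /= w.sum()   # X_dt ~ N(mu dt, sigma^2 dt)

V = np.zeros_like(xs)                                  # terminal condition V(m_theta, .) = 0
b = np.empty(n)
for k in range(n - 1, -1, -1):                         # backward induction over time
    G = 1.0 + 2.0*np.exp(-theta*k*dt)*(F - 1.0)
    EV = np.convolve(V, w[::-1], mode="same")          # E[V(t + dt, x + X_dt)], zero-padded
    V = np.minimum(0.0, G*dt + EV)                     # stop (value 0) or continue
    b[k] = xs[np.argmax(V >= 0.0)]                     # smallest grid point with V = 0

print(b[0], b[n//2])                                   # rough approximations of b(0), b(m_theta/2)
```

The boundary estimate $b_k$ is read off as the smallest grid point at which the value function vanishes.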
4.2. Brownian motion with exponential jumps
Let $X=\{X_t,t\geq 0 \}$ be a compound Poisson process perturbed by a Brownian motion; that is,
$X_t=\mu t+\sigma B_t-\sum_{i=1}^{N_t} Y_i, \qquad t\geq 0, \qquad (4.1)$
where $B=\{B_t,t\geq 0 \}$ is a standard Brownian motion, $N=\{N_t,t\geq 0 \}$ is a Poisson process with rate $\lambda>0$ independent of B, $\mu \in {\mathbb{R}}$, $\sigma > 0$, and $\{Y_1,Y_2,\ldots \}$ is a sequence of independent random variables exponentially distributed with mean $1/\rho>0$. Then in this case, the Laplace exponent is
$\psi(\beta)=\mu \beta+\frac{\sigma^2}{2}\beta^2+\lambda\Big(\frac{\rho}{\rho+\beta}-1\Big), \qquad \beta\geq 0.$
Its Lévy measure, given by $\Pi({{d}} y)=\lambda \rho e^{\rho y} {\mathbb{I}}_{\{y<0 \}} {{d}} y$, is a finite measure, and X is a process of infinite variation. According to [Reference Kuznetsov, Kyprianou and Rivero14, Equation (7), p. 101], the scale function in this case is given for $q\geq 0$ and $x\geq 0$ by
$W^{(q)}(x)=\frac{e^{\Phi(q)x}}{\psi'(\Phi(q))}+\frac{e^{\zeta_1(q)x}}{\psi'(\zeta_1(q))}+\frac{e^{\zeta_2(q)x}}{\psi'(\zeta_2(q))},$
where $\zeta_2(q)$, $\zeta_1(q)$, and $\Phi(q)$ are the three real solutions to the equation $\psi(\beta)=q$, which satisfy $\zeta_2(q)<-\rho<\zeta_1(q)<0<\Phi(q)$. The second scale function, $Z^{(q)}$, takes the form
$Z^{(q)}(x)=q\bigg(\frac{e^{\Phi(q)x}}{\Phi(q)\,\psi'(\Phi(q))}+\frac{e^{\zeta_1(q)x}}{\zeta_1(q)\,\psi'(\zeta_1(q))}+\frac{e^{\zeta_2(q)x}}{\zeta_2(q)\,\psi'(\zeta_2(q))}\bigg), \qquad x\geq 0.$
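To make the roots concrete, here is a short sketch (ours, with the parameters used for Figure 3): multiplying $\psi(\beta)-q$ by $(\rho+\beta)$ turns the equation $\psi(\beta)=q$ into a cubic whose three real roots are $\zeta_2(q)<-\rho<\zeta_1(q)<0<\Phi(q)$, and the partial-fraction sum then evaluates $W^{(q)}$; the value $W^{(q)}(0)\approx 0$ printed at the end is consistent with the infinite variation of X (cf. Section 2).

```python
import numpy as np

mu, sigma, lam, rho, q = 3.0, 1.0, 1.0, 1.0, np.log(2)/10   # parameters used for Figure 3

# (psi(beta) - q)(rho + beta) = 0 is a cubic in beta:
coeffs = [0.5*sigma**2, mu + 0.5*sigma**2*rho, mu*rho - lam - q, -q*rho]
zeta2, zeta1, Phi_q = np.sort(np.roots(coeffs).real)
print(zeta2, zeta1, Phi_q)              # zeta2 < -rho < zeta1 < 0 < Phi(q)

def dpsi(b):
    """psi'(beta) for this jump diffusion."""
    return mu + sigma**2*b - lam*rho/(rho + b)**2

def Wq(x):
    """W^{(q)}(x) as the partial-fraction sum over the three roots, x >= 0."""
    return sum(np.exp(root*x)/dpsi(root) for root in (zeta2, zeta1, Phi_q))

print(Wq(0.0))   # approximately 0, consistent with W^{(q)}(0) = 0 for infinite variation
```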
Note that since we have exponential jumps (and hence $\Pi({{d}} y)=\lambda \rho e^{\rho y} {\mathbb{I}}_{\{y<0\}}\,{{d}} y$), we have that for all $t\in [0,{m_{\theta}})$ and $x>0$,
$\int_{({-}\infty,0)}{V^{(\theta)}}(t,x+y)\,\Pi({{d}} y)=\lambda \rho e^{-\rho x}\int_{-\infty}^{x} {V^{(\theta)}}(t,u)\,e^{\rho u}\,{{d}} u.$
Then, for any $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$ , Equation (3.14) reads
where for any $r,s\in [0,{m_{\theta}})$ , $b\geq 0$ , and $x\in {\mathbb{R}}$ ,
Note that the equation above suggests that in order to find a numerical value of ${b^{(\theta)}}$ using Theorem 3.1, we only need to know the values of the function $\mathcal{V}$, and not the values of $\int_{({-}\infty,0)}{V^{(\theta)}}(t,x+y)\,\Pi({{d}} y)$ for all $t\in [0,{m_{\theta}}]$ and $x>{b^{(\theta)}}(t)$. The next corollary confirms this.
Corollary 4.1. Let $\theta>0$ . Assume that $X=\{X_t, t\geq 0 \}$ is of the form (4.1) with $\mu\in {\mathbb{R}}$ , $\sigma, \lambda, \rho>0$ . Suppose that ${c^{(\theta)}}$ and $\mathcal{U}$ are continuous functions on $[0,{m_{\theta}})$ such that ${c^{(\theta)}} \geq h^{(\theta)}$ and $0\geq \mathcal{U}(t) \geq -{G^{(\theta)}}(t,{c^{(\theta)}}(t))$ for all $t\in [0,{m_{\theta}})$ . For any $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$ we define the function
Further assume that there exists a value $h>0$ such that ${U^{(\theta)}}(t,x)=0$ for any $t\in [0,{m_{\theta}})$ and $x\in [{c^{(\theta)}}(t),{c^{(\theta)}}(t)+h]$. If ${U^{(\theta)}}$ is a non-positive function, we have that ${c^{(\theta)}}= {b^{(\theta)}}$ and ${U^{(\theta)}}={V^{(\theta)}}$.
Proof. First note that, since X is of infinite variation, ${\mathbb{P}}_x(X_r={c^{(\theta)}}(r+t))=0$ for all $r,t \in [0,{m_{\theta}})$ such that $r+t<{m_{\theta}}$ and $x\in {\mathbb{R}}$ . Hence, by the continuity of ${G^{(\theta)}}$ and $\mathcal{U}$ , and by the dominated convergence theorem, we have that ${U^{(\theta)}}$ is continuous. By Theorem 3.1, it is enough to show that ${U^{(\theta)}}$ satisfies the integral equation
where $H(r)=\int_{({-}\infty,0)} {U^{(\theta)}}\big(r,{c^{(\theta)}}(r)+y\big)\Pi({{d}} y)$ for all $r\in [0,{m_{\theta}})$ , and in the last equality we used the explicit form of $\Pi({{d}} y)$ . Then it suffices to show that $H(t)=\mathcal{U}(t)$ for all $t\in [0,{m_{\theta}})$ .
Let $t\geq 0$ . For any $\delta \in (0,{m_{\theta}}-t)$ , consider the stopping time
Note that for any $s<\tau_{\delta}$ we have that $X_s\in \big({c^{(\theta)}}(s+t), {c^{(\theta)}}(s+t)+h\big)$, and that $X_{\delta}\in \big({c^{(\theta)}}(\delta+t), {c^{(\theta)}}(\delta+t)+h\big)$ on the event $\{ \tau_{\delta}=\delta \}$. Then, using the strong Markov property at time $\tau_{\delta}$, we have that for any $x\in [{c^{(\theta)}}(t), {c^{(\theta)}}(t)+h)$,
where in the last equality we used the fact that ${U^{(\theta)}}(t,x)=0$ for all $x\in [{c^{(\theta)}}(t), {c^{(\theta)}}(t)+h]$ , the continuity of ${c^{(\theta)}}$ , and the fact that X can only cross above ${c^{(\theta)}}$ by creeping. By using the compensation formula for Poisson random measures, we obtain that
Hence we conclude that for any $\delta>0$ ,
and hence
By the continuity of H and $\mathcal{U}$ we obtain that
Moreover, from Equation (4.2) we conclude that $H(t)=\mathcal{U}(t)$ for all $t\in [0,{m_{\theta}})$ and the conclusion holds.
Hence, for any $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ , we can write
where for any $r,s\in [0,{m_{\theta}})$ , $b\geq 0$ , and $x\in {\mathbb{R}}$ ,
Take a value $h_0>0$ sufficiently small. Then the functions ${b^{(\theta)}}$ and $\mathcal{V}$ satisfy the integral equations
for all $t\in [0,{m_{\theta}}]$ . We can approximate the integrals above by Riemann sums, so a numerical approximation can be implemented. Indeed, take $n \in \mathbb{Z}_+$ sufficiently large and define $h={m_{\theta}}/n$ . For each $k\in \{0,1,2,\ldots,n\}$ , we define $t_k=kh$ . Then the sequence of times $\{ t_k, k= 0,1,\ldots,n \}$ is a partition of the interval $[0,{m_{\theta}}]$ . Then, for any $x\in {\mathbb{R}}$ and $t\in [t_k,t_{k+1})$ , we approximate ${V^{(\theta)}}(t,x)$ by
where the sequence $\{(b_k, \mathcal{V}_k), k=0,1,\ldots,n-1 \}$ is a solution to
for each $k\in \{0,1,\ldots, n -1\}$. For n sufficiently large, the sequence $\{(b_k,\mathcal{V}_k), k=0,1,\ldots,n-1 \}$ is a numerical approximation to $\{({b^{(\theta)}}(t_k), \mathcal{V}(t_k)), k=0,1,\ldots,n-1 \}$ (provided that $V^{(\theta)}_h\leq 0$) and can be calculated by using backwards induction. The functions $K_1$ and $K_2$ can be estimated using simulation methods. In Figure 3, we include a plot of the numerical calculation of ${b^{(\theta)}}$ and ${V^{(\theta)}}(0,x)$ using the parameters $\theta=\log\!(2)/10$, $\mu=3$, $\sigma=\lambda=\rho=1$. In this case we take $n=10,000$ time steps (so that $h=0.001$) and $h_0=0.001$, and we estimate the functions $K_1$ and $K_2$ by simulating $N=30,000$ sample paths of the process $\{X_{hj}, j\in \{0,1,\ldots, n\} \}$.
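For completeness, the following sketch (ours) shows one way to simulate the grid paths $\{X_{hj}\}$ entering the estimation of $K_1$ and $K_2$; since the precise estimators depend on the displays above, only the path generation is shown, with the step size and horizon taken from the text and the random seed chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, lam, rho = 3.0, 1.0, 1.0, 1.0
h, n = 1e-3, 10_000                          # step size and number of steps from the text

def path():
    """One path of X on the grid {0, h, ..., nh}: drift plus Brownian part minus exponential jumps."""
    dB = sigma*np.sqrt(h)*rng.standard_normal(n)
    dN = rng.poisson(lam*h, n)               # number of jumps arriving in each step
    dJ = np.array([rng.exponential(1/rho, k).sum() for k in dN])
    return np.concatenate(([0.0], np.cumsum(mu*h + dB - dJ)))

X = path()
print(X[-1], (mu - lam/rho)*n*h)             # terminal value against its mean E(X_{nh})
```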
Acknowledgements
Support from the Department of Statistics of the LSE and the LSE Ph.D. Studentship is gratefully acknowledged by José M. Pedraza. We are also grateful to two anonymous referees for their useful suggestions, which improved the presentation of this paper.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There are no competing interests to declare which arose during the preparation or publication process of this article.