
Stopping problems with an unknown state

Published online by Cambridge University Press:  09 August 2023

Erik Ekström*
Affiliation: Uppsala University

Yuqiong Wang*
Affiliation: Uppsala University

*Postal address: Department of Mathematics, Box 256, 751 05 Uppsala, Sweden.

Abstract

We extend the classical setting of an optimal stopping problem under full information to include problems with an unknown state. The framework allows the unknown state to influence (i) the drift of the underlying process, (ii) the payoff functions, and (iii) the distribution of the time horizon. Since the stopper is assumed to observe the underlying process and the random horizon, this is a two-source learning problem. Assigning a prior distribution for the unknown state, standard filtering theory can be employed to embed the problem in a Markovian framework with one additional state variable representing the posterior of the unknown state. We provide a convenient formulation of this Markovian problem, based on a measure change technique that decouples the underlying process from the new state variable. Moreover, we show by means of several novel examples that this reduced formulation can be used to solve problems explicitly.

Type: Original Article

Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In most of the literature on optimal stopping theory, the stopper acts under full information about the underlying system. In some applications, however, information is a scarce resource, and the stopper then needs to base her decision only on the information available upon stopping. We study stopping problems of the type

(1) \begin{equation}\sup_{\tau}{\mathbb E}\bigl[g(\tau,X_\tau,\theta)1_{\{\tau < \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{ \gamma\leq\tau\}}\bigr],\end{equation}

where X is a diffusion process; here g and h are given functions representing the payoff if stopping occurs before or after the random time horizon $\gamma$ , respectively, and $\theta$ is a Bernoulli random variable representing the unknown state. This unknown state may influence the drift of the diffusion process X, the distribution of the random horizon $\gamma$ and the payoff functions g and h.

Cases with $g(t,x,\theta)=g(t,\theta)$ and $h(t,x,\theta)=h(t,\theta)$ are closely related to statistical problems, where the process X merely serves as an observation process but does not affect the payoff upon stopping. A classical example is the sequential testing problem for a Wiener process; see [21] for a perpetual version and [19] for a version with a random horizon. Cases with $g(t,x,\theta)=g(t,x)$ and $h(t,x,\theta)=h(t,x)$, on the other hand, where the unknown state does not affect the payoff directly but only implicitly via the dynamics of X, have been studied mainly in the financial literature. For example, American options with incomplete information about the drift of the underlying process have been studied in [4] and [7], and a liquidation problem has been studied in [5]. The related literature includes optimal stopping for regime-switching models (see [12] and [23]), studies of models containing change points [9, 13], a study allowing for an arbitrary distribution of the unknown state [6], problems of stochastic control [16] and singular control [2], and stochastic games [3] under incomplete information. Stopping problems with a random time horizon are studied in, for example, [1] and [17], where the authors consider models with a random finite time horizon but with state-independent distributions; for a study with a state-dependent random horizon, see [8].

In the current article, we study the optimal stopping problem using the general formulation in (1), which is flexible enough to accommodate several new examples. In particular, the notion of a state-dependent random horizon appears to be largely unstudied, even though it is a natural ingredient in many applications. Indeed, consider a situation where the unknown state is either ‘good’ ( $\theta=1$ ) or ‘bad’ ( $\theta=0$ ) for an agent who is thinking of investing in a certain business opportunity. Since agents are typically subject to competition, the business opportunity would eventually disappear, and the rate at which it does so would typically be larger in the ‘good’ state than in the ‘bad’ state. The disappearance of a business opportunity is incorporated in our set-up by choosing the compensation $h\equiv 0$ .

In some applications it is more natural to have a random state-dependent horizon at which the stopper is forced to stop (as opposed to missing out on the opportunity). For example, in the modelling of financial contracts with recall risk (see e.g. [11]), the party who makes the recall would decide on a time point at which the positions at hand have to be terminated. Consequently, problems with $h=g$ can be viewed as problems of forced stopping. More generally, the random horizon can be useful in models with competition, where $h\leq g$ corresponds to situations with first-mover advantage, and $h\geq g$ to situations with second-mover advantage.

We first apply classical filtering methods (see e.g. [18]) to the stopping problem (1), which allows us to reformulate the stopping problem in terms of a two-dimensional state process $(X,\Pi)$, where $\Pi$ is the probability of one of the states conditional on observations. Then a measure-change technique is employed, where the dynamics of the diffusion process X under the new measure are unaffected by the unknown state, whereas the Radon–Nikodým derivative can be fully expressed in terms of $\Pi$. Finally, it is shown how the general set-up, with two spatial dimensions, can be reduced further in specific examples. In fact, we provide three different examples (a hiring problem, a problem of optimal closing of a short position, and a sequential testing problem with a random horizon) in which it turns out that the spatial dimension reduces to one, so that the problems are amenable to further analysis. The examples are mainly of a motivational character, and in order not to burden the presentation with too many details, we content ourselves with providing the reduction to one spatial dimension; a detailed study of the corresponding one-dimensional problem can then be performed using standard methods of optimal stopping theory.

2. Problem specification

We consider a Bayesian set-up where one observes a diffusion process X in continuous time, the drift of which depends on an unknown state $\theta$ that takes values 0 and 1 with probabilities $1-\pi$ and $\pi$ , respectively, with $\pi\in[0,1]$ . Given payoff functions g and h, the problem is to stop the process so as to maximize the expected reward in (1). Here the random horizon $\gamma$ has a state-dependent distribution but is independent of the noise of X.

The above set-up can be realized by considering a probability space $(\Omega, \mathcal F,{\mathbb P}_\pi)$ that hosts a standard Brownian motion W and an independent Bernoulli-distributed random variable $\theta$ with ${\mathbb P}_\pi(\theta=1)=\pi=1-{\mathbb P}_\pi(\theta=0)$. Additionally, we let $\gamma$ be a random time (possibly infinite), independent of W, with state-dependent survival distribution

\[{\mathbb P}_\pi(\gamma>t\mid \theta=i) = F_i(t),\]

where $F_i$ is continuous and non-increasing with $F_i(0)=1$ and $F_i(t)>0$ for all $t\geq 0$ , $i=0,1$ . We remark that we include the possibility that $F_i\equiv 1$ for some $i\in\{0,1\}$ (or for both), corresponding to an infinite horizon. We then have

(2) \begin{equation}{\mathbb P}_\pi=(1-\pi){\mathbb P}_0 + \pi{\mathbb P}_1,\end{equation}

where ${\mathbb P}_0({\cdot})={\mathbb P}_\pi(\cdot\mid \theta=0)$ and ${\mathbb P}_1({\cdot})={\mathbb P}_\pi(\cdot\mid \theta=1)$ .

Now consider the equation

(3) \begin{equation}{\mathrm{d}} X_t= \mu(X_t,\theta) \,{\mathrm{d}} t + \sigma(X_t) \,{\mathrm{d}} W_t.\end{equation}

Here $\mu(\cdot, \cdot)\,:\, {\mathbb R}\times\{0,1\}\to{\mathbb R}$ is a given function of the unknown state $\theta$ and the current value of the underlying process; we denote $\mu_0(x)=\mu(x,0)$ and $\mu_1(x)=\mu(x,1)$ . The diffusion coefficient $\sigma({\cdot})\,:\, {\mathbb R}\to (0,\infty)$ is a given function of x, independent of the unknown state $\theta$ . We assume that the functions $\mu_0,\mu_1$ and $\sigma$ satisfy standard Lipschitz conditions so that the existence and uniqueness of a strong solution X is guaranteed. We are also given two functions

\[g(\cdot,\cdot,\cdot)\,:\, [0,\infty)\times{\mathbb R}\times\{0,1\}\to{\mathbb R}\]

and

\[h(\cdot,\cdot,\cdot)\,:\, [0,\infty)\times{\mathbb R}\times\{0,1\}\to{\mathbb R},\]

which we refer to as the payoff functions. We assume that $g_i$ and $h_i$ are continuous for $i=0,1$ , where we use the notation $g_i(\cdot,\cdot)\,:\!=\, g(\cdot,\cdot,i)$ and $h_i(\cdot,\cdot)\,:\!=\, h(\cdot,\cdot,i)$ to denote the payoff functions on the event $\{\theta=i\}$ , $i=0,1$ .

Let $\mathcal F^X$ denote the smallest right-continuous filtration that makes X adapted, and let $\mathcal T^X$ be the set of $\mathcal F^X$ -stopping times. Similarly, let $\mathcal F^{X,\gamma}$ denote the smallest right-continuous filtration to which both X and the process $1_{\{\cdot\geq \gamma\}}$ are adapted, and let $\mathcal T^{X,\gamma}$ be the set of $\mathcal F^{X,\gamma}$ -stopping times.

We now consider the optimal stopping problem

(4) \begin{equation}V= \sup_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{ \gamma\leq\tau\}}\bigr].\end{equation}

In (4), and in similar expressions throughout the paper, we use the convention that $h(\tau,X_\tau,\theta)\,:\!=\, 0$ on the event $\{\tau=\gamma=\infty\}$ . We further assume that the integrability condition

\[{\mathbb E}_\pi\biggl[\sup_{t\geq0}\{ | g(t,X_t,\theta)| + | h(t,X_t,\theta)|\}\biggr]<\infty\]

holds.

Remark 1. The unknown state $\theta$ in the stopping problem (4) influences

(i) the drift of the process X,

(ii) the payoffs g and h, and

(iii) the survival distribution of the random horizon $\gamma$.

More precisely, on the event $\{\theta=0\}$ , the drift of X is $\mu_0({\cdot})$ , the payoff functions are $g_0(\cdot,\cdot)$ and $h_0(\cdot,\cdot)$ , and the random horizon has survival distribution function $F_0({\cdot})$ ; on the event $\{\theta=1\}$ , the drift is $\mu_1$ , the payoff functions are $g_1(\cdot,\cdot)$ and $h_1(\cdot,\cdot)$ , and the random horizon has survival distribution function $F_1({\cdot})$ .

Remark 2. In (4), the payoff on the event $\{\tau=\gamma\}$ is specified in terms of h. In some applications, however, one may want to use the alternative formulation

(5) \begin{equation}U\,:\!=\, \sup_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau\leq \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{ \gamma<\tau\}}\bigr].\end{equation}

If $g\geq h$ , then we have

\[ U = \sup_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi \bigl[g(\tau,X_{\tau},\theta)1_{\{\tau< \gamma\}}+ g(\gamma,X_\gamma,\theta)1_{\{ \gamma\leq\tau\}}\bigr]\]

since $\tau\wedge\gamma\in\mathcal T^{X,\gamma}$ for every $\tau\in\mathcal T^{X,\gamma}$ , and thus the formulation (5) is contained in the formulation (4).

Similarly, if $g\leq h$ , then

\begin{equation*}U=\sup_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{ \gamma\leq \tau\}}\bigr]=V,\end{equation*}

where the first equality uses that for a given stopping time $\tau$ , the time

\[\tau^{\prime}=\begin{cases}\tau & \text{on $\{\tau<\gamma\}$}\\\infty & \text{on $\{\tau\geq \gamma\}$}\end{cases}\]

is also a stopping time. In the general case where no ordering between g and h is given, however, the formulation (5) is more involved; such cases are not covered by the results of the current article.

3. A useful reformulation of the problem

In this section we rewrite the optimal stopping problem (4) with incomplete information as an optimal stopping problem with respect to stopping times in $\mathcal T^X$ and with complete information.

First consider the stopping problem

(6) \begin{equation}\hat V= \sup_{\tau\in\mathcal T^X}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{\gamma\leq\tau\}}\bigr],\end{equation}

where the supremum is taken over $\mathcal F^X$-stopping times. Since $\mathcal T^X\subseteq\mathcal T^{X,\gamma}$, we have $\hat V\leq V$. On the other hand, by a standard argument (see [1] or [17]), we also have the reverse inequality, so $\hat V=V$. Indeed, first recall that for any $\tau\in\mathcal T^{X,\gamma}$ there exists $\tau^\prime\in\mathcal T^X$ such that $\tau\wedge\gamma = \tau^\prime\wedge\gamma$; see [20, p. 378]. Consequently, $\tau=\tau^{\prime}$ on $\{\tau<\gamma\}=\{\tau^{\prime}<\gamma\}$ and $\tau\wedge\gamma=\tau^{\prime}\wedge\gamma=\gamma$ on $\{\tau\geq \gamma\}=\{\tau^{\prime}\geq\gamma\}$, so

\begin{equation*}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}+ h(\gamma,X_\gamma,\theta)1_{\{\gamma\leq\tau\}}\bigr] = {\mathbb E}_\pi\bigl[g(\tau^\prime,X_{\tau^\prime},\theta)1_{\{\tau^\prime< \gamma\}} + h(\gamma,X_\gamma,\theta)1_{\{\gamma\leq\tau^{\prime}\}}\bigr],\end{equation*}

from which $\hat V=V$ follows. Moreover, if $\tau^{\prime}\in\mathcal T^X$ is optimal in (6), then it is also optimal in (4).

Remark 3. Since we assume that the survival distributions are continuous, we have

\[{\mathbb P}_\pi(\tau=\gamma<\infty)=0\]

for any $\tau\in\mathcal T^X$ . Consequently, we can alternatively write

\[\hat V=\sup_{\tau\in\mathcal T^X}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau\leq \gamma\}}+h(\gamma,X_\gamma,\theta)1_{\{\gamma<\tau\}}\bigr].\]

To study the stopping problem (4), or equivalently the optimal stopping problem (6), we introduce the conditional probability process

\[\Pi_t\,:\!=\, {\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^X_t\bigr)\]

and the corresponding probability ratio process

(7) \begin{equation}\Phi_t\,:\!=\, \dfrac{\Pi_t}{1-\Pi_t}.\end{equation}

Note that $\Pi_0=\pi$ and $\Phi_0=\varphi\,:\!=\, \pi/(1-\pi)$ , ${\mathbb P}_\pi$ -a.s.

Proposition 1. We have

(8) \begin{align}V &= \sup_{\tau\in\mathcal T^X}{\mathbb E}_\pi\biggl[ g_0(\tau,X_\tau)(1-\Pi_\tau)F_0(\tau) + g_1(\tau,X_\tau)\Pi_\tau F_1(\tau)\\ \notag&\quad- \int_0^{\tau} h_0(t,X_t)(1-\Pi_t )\,{\mathrm{d}} F_0(t) -\int_0^{\tau}h_1(t,X_t)\Pi_t \,{\mathrm{d}} F_1(t)\biggr].\end{align}

Moreover, if $\tau\in\mathcal T^X$ is optimal in (8), then it is also optimal in (4).

Proof. Let $\tau$ denote a stopping time in $\mathcal T^X$ . Using the tower property, we find that

\begin{align*}{\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}\bigr]&= {\mathbb E}_\pi\bigl[{\mathbb E}_\pi\bigl[ g(\tau,X_\tau,\theta) 1_{\{\tau< \gamma\}}\mid \mathcal F^X_\tau\bigr]\bigr] \\ &={\mathbb E}_\pi\bigl[ {\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}\mid \mathcal F^X_\tau,\theta=0\bigr] (1-\Pi_\tau) \\ & \quad + {\mathbb E}_\pi\bigl[g(\tau,X_\tau,\theta)1_{\{\tau< \gamma\}}\mid \mathcal F^X_\tau,\theta=1\bigr] \Pi_\tau\bigr]\\&={\mathbb E}_\pi\bigl[g_0(\tau,X_\tau){\mathbb P}_\pi\bigl(\tau< \gamma \mid \mathcal F^X_\tau,\theta=0\bigr) (1-\Pi_\tau) \\ & \quad + g_1(\tau,X_\tau) {\mathbb P}_\pi\bigl(\tau< \gamma\mid \mathcal F^X_\tau,\theta=1\bigr) \Pi_\tau\bigr]\\ &={\mathbb E}_\pi\bigl[ g_0(\tau,X_\tau)(1-\Pi_\tau)F_0(\tau) + g_1(\tau,X_\tau)\Pi_\tau F_1(\tau)\bigr],\end{align*}

where for the last equality we use that, conditionally on $\{\theta = i\}$, $i=0,1$, the random time $\gamma$ is independent of $\mathcal F^X_\tau$ (and hence of $\tau$).

Similarly, for the second term we use

\[{\mathbb P}_\pi\bigl(\theta=0, \gamma\geq t\mid \mathcal F^X_\tau\bigr)=(1-\Pi_\tau) F_0(t),\quad{\mathbb P}_\pi\bigl(\theta=1, \gamma\geq t\mid \mathcal F^X_\tau\bigr)=\Pi_\tau F_1(t),\]

so that

\begin{align*} {\mathbb E}_\pi\bigl[ h(\gamma,X_\gamma,\theta)1_{\{\gamma\leq\tau\}}\bigr] &= {\mathbb E}_\pi\bigl[ {\mathbb E}_\pi\bigl[h(\gamma,X_\gamma,\theta) 1_{\{\gamma\leq\tau\}}\mid \mathcal F^{X}_\tau\bigr] \bigr]\\ &= -{\mathbb E}_\pi\biggl[ \int_0^\infty {\mathbb E}_\pi\bigl[h(\gamma,X_\gamma,\theta) 1_{\{\gamma\leq\tau\}}\mid \mathcal F^{X}_\tau, \theta = 0, \gamma=t \bigr](1-\Pi_\tau) \,{\mathrm{d}} F_0(t) \\ & \quad + \int_0^\infty {\mathbb E}_\pi\bigl[h(\gamma,X_\gamma,\theta) 1_{\{\gamma\leq\tau\}}\mid \mathcal F^{X}_\tau, \theta = 1, \gamma=t \bigr] \Pi_\tau \,{\mathrm{d}} F_1(t)\biggr]\\&= -{\mathbb E}_\pi\biggl[ \int_0^\infty h_0(t,X_t) 1_{\{t\leq\tau\}}(1-\Pi_\tau) \,{\mathrm{d}} F_0(t) \\ & \quad +\int_0^\infty h_1(t,X_t) 1_{\{t\leq\tau\}} \Pi_\tau \,{\mathrm{d}} F_1(t)\biggr]\\ &= -{\mathbb E}_\pi\biggl[\int_0^{\tau} h_0(t,X_t)(1-\Pi_t )\,{\mathrm{d}} F_0(t) +\int_0^{\tau}h_1(t,X_t)\Pi_t \,{\mathrm{d}} F_1(t)\biggr],\end{align*}

where the last equality holds because $\Pi$ is a martingale. The optimal stopping problem (4) therefore coincides with the stopping problem

\begin{align*}& \sup_{\tau\in\mathcal T^X}{\mathbb E}_\pi\biggl[ g_0(\tau,X_\tau)(1-\Pi_\tau)F_0(\tau) + g_1(\tau,X_\tau)\Pi_\tau F_1(\tau) \\ &\quad - \int_0^{\tau} h_0(t,X_t)(1-\Pi_t )\,{\mathrm{d}} F_0(t) -\int_0^{\tau}h_1(t,X_t)\Pi_t \,{\mathrm{d}} F_1(t)\biggr].\end{align*}

It is well known (see e.g. [22, pp. 180–181] or [10, p. 522]) that the posterior probability is given by

\[\Pi_t=\dfrac{\frac{\pi}{1-\pi}L_t}{1+\frac{\pi}{1-\pi}L_t},\]

where

\[L_t\,:\!=\, \exp\biggl\{\int_0^t\dfrac{\mu_1(X_s)-\mu_0(X_s)}{\sigma^2(X_s)}\,{\mathrm{d}} X_s-\dfrac{1}{2}\int_0^t\dfrac{\mu^2_1(X_s)-\mu^2_0(X_s)}{\sigma^2(X_s)}\,{\mathrm{d}} s\biggr\}.\]

Therefore, by an application of Itô’s formula, the pair $(X,\Pi)$ satisfies

(9) \begin{equation}\begin{cases}{\mathrm{d}} X_t=(\mu_0(X_t)+ (\mu_1(X_t)-\mu_0(X_t))\Pi_t)\,{\mathrm{d}} t + \sigma(X_t) \,{\mathrm{d}}\hat W_t,\\{\mathrm{d}} \Pi_t=\omega(X_t) \Pi_t(1-\Pi_t)\,{\mathrm{d}}\hat W_t,\end{cases}\end{equation}

where $\omega(x)\,:\!=\, (\mu_1(x)-\mu_0(x))/\sigma(x)$ is the signal-to-noise ratio and

\[\hat W_t\,:\!=\, \int_0^t\dfrac{{\mathrm{d}} X_s}{\sigma(X_s)}-\int_0^t\dfrac{1}{\sigma(X_s)}(\mu_0 (X_s)+(\mu_1(X_s)-\mu_0(X_s))\Pi_s)\,{\mathrm{d}} s\]

is the so-called innovation process; by P. Lévy’s theorem, $\hat W$ is a ${\mathbb P}_\pi$ -Brownian motion. By our non-degeneracy and Lipschitz assumptions on the coefficients, there exists a unique strong solution to the system (9), and the pair $(X,\Pi)$ is a strong Markov process. Moreover, using Itô’s formula, it is straightforward to check that the likelihood ratio process $\Phi$ in (7) satisfies

(10) \begin{equation}{\mathrm{d}}\Phi_t=\dfrac{\mu_1(X_t)-\mu_0(X_t)}{\sigma^2(X_t)}\Phi_t({\mathrm{d}} X_t-\mu_0(X_t)\,{\mathrm{d}} t).\end{equation}
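
As an illustration of the filtering equations (9) and (10), the following minimal Python sketch simulates the pair $(X,\Pi)$ by an Euler–Maruyama scheme in the special case of constant coefficients $\mu_0$, $\mu_1$, and $\sigma$; the function name and all parameter values are illustrative only and not part of the analysis above.

```python
import numpy as np

def simulate_filter(mu0, mu1, sigma, pi0, T=1.0, n=10_000, seed=0):
    """Euler-Maruyama simulation of (X, Pi) in (9) for constant coefficients."""
    rng = np.random.default_rng(seed)
    dt = T / n
    theta = int(rng.random() < pi0)        # unknown state, P(theta = 1) = pi0
    mu = mu1 if theta == 1 else mu0        # true drift of the observation process X
    omega = (mu1 - mu0) / sigma            # signal-to-noise ratio
    X = np.zeros(n + 1)
    Pi = np.full(n + 1, pi0)
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt))
        dX = mu * dt + sigma * dW                          # increment of X, cf. (3)
        # innovation increment: observed increment minus the filter's drift estimate
        dW_hat = (dX - (mu0 + (mu1 - mu0) * Pi[k]) * dt) / sigma
        X[k + 1] = X[k] + dX
        Pi[k + 1] = np.clip(Pi[k] + omega * Pi[k] * (1 - Pi[k]) * dW_hat, 0.0, 1.0)
    return theta, X, Pi

theta, X, Pi = simulate_filter(mu0=-0.5, mu1=0.5, sigma=1.0, pi0=0.5)
print(theta, Pi[-1])   # the posterior typically concentrates near the true state
```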

4. A measure change

In the current section we provide a measure change that decouples X from $\Pi$. This specific measure change technique was first used in [15] and has since been used by several authors (see [2], [5], [7], [14]).

Lemma 1. For $t\in[0,\infty)$ , let ${\mathbb P}_{\pi,t}$ denote the measure ${\mathbb P}_\pi$ restricted to $\mathcal F^X_t$ . We then have

\[\dfrac{{\mathrm{d}}{\mathbb P}_{0,t}}{{\mathrm{d}}{\mathbb P}_{\pi,t}}=\dfrac{1+\varphi}{1+\Phi_t}.\]

Proof. For any $A\in\mathcal F^X_t$ we have

\[{\mathbb E}_\pi[(1-\Pi_t) 1_{A}]={\mathbb E}_\pi[1_{\{\theta=0\}} 1_A]=(1-\pi){\mathbb E}_0[ 1_A]\]

by (2). Consequently,

\begin{equation*} \dfrac{{\mathrm{d}}{\mathbb P}_{0,t}}{{\mathrm{d}}{\mathbb P}_{\pi,t}}=\dfrac{1-\Pi_t}{1-\pi}= \dfrac{1+\varphi}{1+\Phi_t}.\end{equation*}

Since $1-\Pi_t=1/(1+\Phi_t)$ and $\Pi_t={{\Phi_t}/{(1+\Phi_t)}}$, it is now clear that the stopping problem (4) (or equivalently problem (8)) can be written as

\begin{align*}V &= \dfrac{1}{1+\varphi} \sup_{\tau\in\mathcal T^X}{\mathbb E}_0\biggl[g_0(\tau,X_\tau)F_0(\tau) + g_1(\tau,X_\tau)\Phi_\tau F_1(\tau) \\ &\quad-\int_0^\tau h_0(t,X_t)\,{\mathrm{d}} F_0(t) - \int_0^\tau h_1(t,X_t)\Phi_t \,{\mathrm{d}} F_1(t)\biggr],\end{align*}

where the expected value is with respect to ${\mathbb P}_0$ , under which the process $(X,\Phi)$ is strong Markov and satisfies

(11) \begin{equation}\begin{cases}{\mathrm{d}} X_t=\mu_0(X_t)\,{\mathrm{d}} t + \sigma(X_t) \,{\mathrm{d}} W_t\\{\mathrm{d}}\Phi_t=\omega(X_t)\Phi_t \,{\mathrm{d}} W_t\end{cases}\end{equation}

(see (3) and (10)).

Next we introduce the process

(12) \begin{equation}\Phi^\circ_t\,:\!=\, \dfrac{F_1(t)}{F_0(t)}\Phi_t,\end{equation}

so that

\begin{align*} V &= \dfrac{1}{1+\varphi}\sup_{\tau\in\mathcal T^X}{\mathbb E}_0\biggl[ F_0(\tau)\bigl( g_0(\tau,X_\tau) + g_1(\tau,X_\tau)\Phi^\circ_\tau \bigr) \notag\\ &\quad-\int_0^\tau h_0(t,X_t) \,{\mathrm{d}} F_0(t) -\int_0^\tau \dfrac{F_0(t)}{F_1(t)} h_1(t,X_t)\Phi^\circ_t \,{\mathrm{d}} F_1(t)\biggr].\end{align*}

Note that the process $\Phi^\circ$ satisfies

\[{\mathrm{d}}\Phi^\circ_t= \dfrac{1}{f(t)}\Phi^\circ_t\,{\mathrm{d}} f(t) + \omega(X_t)\Phi^\circ_t \,{\mathrm{d}} W_t\]

under ${\mathbb P}_0$ , where $f(t)=F_1(t)/F_0(t)$ .
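
For the reader's convenience, this can be seen from the product rule applied to $\Phi^\circ_t=f(t)\Phi_t$: since f is continuous and of finite variation, there is no covariation term, and (11) gives

\[{\mathrm{d}}\Phi^\circ_t= \Phi_t\,{\mathrm{d}} f(t) + f(t)\,{\mathrm{d}}\Phi_t = \dfrac{1}{f(t)}\Phi^\circ_t\,{\mathrm{d}} f(t)+\omega(X_t)\Phi^\circ_t\,{\mathrm{d}} W_t.\]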

Remark 4. The process $\Phi^\circ$ is the likelihood ratio given observations of the processes X and $1_{\{\cdot\geq \gamma\}}$ on the event $\{ \gamma>t\}$. Indeed, for $t\geq 0$, defining

\[\Pi^\circ_t \,:\!=\, {\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^X_t, \gamma>t\bigr)=\dfrac{{\mathbb P}_\pi\bigl(\theta=1,\gamma>t\mid \mathcal F^X_t\bigr)}{{\mathbb P}_\pi\bigl(\gamma>t\mid \mathcal F^X_t\bigr)}= \dfrac{\Pi_tF_1(t)}{\Pi_tF_1(t) + (1-\Pi_t)F_0(t)},\]

we have

\[\Pi^\circ_t=\dfrac{\Phi^\circ_t}{\Phi^\circ_t+1}.\]

We summarize our theoretical findings in the following theorem.

Theorem 1. Let

(13) \begin{align}v &= \sup_{\tau\in\mathcal T^X}{\mathbb E}_0\biggl[F_0(\tau)\bigl(g_0(\tau,X_\tau)+g_1(\tau,X_\tau)\Phi^\circ_\tau \bigr) \notag \\ &\quad -\int_0^\tau h_0(t,X_t) \,{\mathrm{d}} F_0(t) -\int_0^\tau \dfrac{F_0(t)}{F_1(t)} h_1(t,X_t)\Phi^\circ_t \,{\mathrm{d}} F_1(t) \biggr],\end{align}

where $(X,\Phi^\circ)$ is given by (11) and (12). Then $V=v/(1+\varphi)$, where $\varphi=\pi/(1-\pi)$. Moreover, if $\tau\in\mathcal T^X$ is an optimal stopping time in (13), then it is also optimal in the original problem (4).

Remark 5. Under ${\mathbb P}_0$ , the three-dimensional process $(t,X,\Phi^\circ)$ is strong Markov, and the stopping problem (13) can be naturally embedded in a setting allowing for an arbitrary starting point $(t,x,\varphi)$ . In the sections below we consider examples that can be reduced to problems that only depend on $\Phi^\circ$ , where $\Phi^\circ$ is a one-dimensional Markov process, which simplifies the embedding.
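
As a numerical sanity check of this reduction, the following Python sketch compares the expected payoff in (4) with the corresponding expression in (13), divided by $1+\varphi$, for a fixed deterministic stopping time $\tau\equiv T$ in the special case $h\equiv 0$, $g_i(t,x)={\mathrm{e}}^{-rt}a_i$, and $F_i(t)={\mathrm{e}}^{-\lambda_i t}$ (a specification of the type used in Section 5 below). The parameter values are illustrative; the left-hand side is available in closed form, and $\Phi^\circ$ is simulated exactly as a geometric Brownian motion under ${\mathbb P}_0$.

```python
import numpy as np

# Illustrative parameters (hypothetical values, not taken from the text)
r, lam0, lam1 = 0.05, 0.3, 0.1       # discount rate and rates of the survival functions
a0, a1 = -1.0, 2.0                   # payoffs g_i(t, x) = exp(-r t) * a_i, and h = 0
mu0, mu1, sigma = -0.5, 0.5, 1.0     # drifts and volatility of X
pi0, T, n_paths = 0.4, 1.0, 10**6    # prior, fixed stopping time, Monte Carlo sample size

phi = pi0 / (1 - pi0)                # prior likelihood ratio
omega = (mu1 - mu0) / sigma          # signal-to-noise ratio
rng = np.random.default_rng(1)

# Expected payoff in (4) for tau = T, in closed form since g does not depend on x:
# E_pi[ g(T, X_T, theta) 1_{T < gamma} ] = (1-pi) a0 e^{-(r+lam0)T} + pi a1 e^{-(r+lam1)T}
lhs = (1 - pi0) * a0 * np.exp(-(r + lam0) * T) + pi0 * a1 * np.exp(-(r + lam1) * T)

# Expression in (13) for tau = T, divided by (1 + phi); under P_0,
# Phi°_T = phi * exp((lam0 - lam1) T) * exp(omega W_T - omega^2 T / 2)
Z = rng.standard_normal(n_paths)
phi_circ_T = phi * np.exp((lam0 - lam1) * T - 0.5 * omega**2 * T + omega * np.sqrt(T) * Z)
v_T = np.exp(-(r + lam0) * T) * (a0 + a1 * phi_circ_T.mean())
rhs = v_T / (1 + phi)

print(lhs, rhs)   # the two numbers should agree up to Monte Carlo error
```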

5. An example: a hiring problem

In this section we consider a (simplistic) version of a hiring problem. To describe this, consider a situation where a company tries to decide whether or not to employ a certain candidate, where there is considerable uncertainty about the candidate’s ability. The candidate is either of a ‘strong type’ or of a ‘weak type’, and during the employment procedure, tests are performed to find out which is the true state. At the same time, the candidate is potentially lost for the company as he/she may receive other offers. Moreover, the rate at which such offers are presented may depend on the ability of the candidate; for example, a candidate of the strong type could be more likely to be recruited to other companies than a candidate of the weak type.

To model the above hiring problem, we let $h_i\equiv 0$ , $i=0,1$ , and

\[g(t,x,\theta)=\begin{cases}-{\mathrm{e}}^{-rt}c & \text{if $\theta=0$,}\\{\mathrm{e}}^{-rt}d & \text{if $\theta=1$,}\end{cases}\]

where c and d are positive constants representing the overall cost and benefit of hiring the candidate, respectively, and $r> 0$ is a constant discount rate. To learn about the unknown state $\theta$, tests are performed, the outcomes of which are represented by a Brownian motion

\[X_t=\mu (\theta) t+\sigma W_t\]

with state-dependent drift

\[\mu(\theta)=\begin{cases}\mu_0 & \text{if $\theta=0$,}\\\mu_1 & \text{if $\theta=1$,}\end{cases}\]

where $\mu_0<\mu_1$ . We further assume that the survival probabilities $F_0$ and $F_1$ decay exponentially in time, that is,

\[F_0(t) = {\mathrm{e}}^{-\lambda_0 t},\quad F_1(t) = {\mathrm{e}}^{-\lambda_1 t},\]

where $\lambda_0, \lambda_1\geq 0$ are known constants. The stopping problem (4) under consideration is thus

\[V=\sup_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[ {\mathrm{e}}^{-r\tau}\bigl(d1_{\{\theta=1\}}-c1_{\{\theta=0\}}\bigr)1_{\{\tau<\gamma\}}\bigr],\]

where $\pi={\mathbb P}_{\pi}(\theta=1)$ .

By Theorem 1, we have

\[V = \dfrac{1}{1+\varphi}\sup_{\tau\in\mathcal T^X}{\mathbb E}_0\bigl[ {\mathrm{e}}^{-(r+\lambda_0)\tau}\bigl( \Phi^\circ_\tau d -c\bigr) \bigr],\]

where the underlying process $\Phi^{\circ}$ is a geometric Brownian motion satisfying

\[{\mathrm{d}}\Phi_t^\circ = -(\lambda_1-\lambda_0)\Phi_t^\circ\,{\mathrm{d}} t + \omega\Phi^\circ_t\,{\mathrm{d}} W_t.\]

Clearly, the value of the stopping problem is

\[V = \dfrac{d}{1+\varphi}\sup_{\tau\in\mathcal T^X}{\mathbb E}_0\biggl[ {\mathrm{e}}^{-(r+\lambda_0)\tau}\biggl( \Phi^\circ_\tau-\dfrac{c}{d} \biggr) \biggr]= \dfrac{d}{1+\varphi} V^{Am}(\varphi),\]

where $V^{Am}$ is the value of the American call option with underlying $\Phi^{\circ}$ starting at $\Phi^\circ_0=\varphi$ and with strike ${{c}/{d}}$ . Standard stopping theory gives that the corresponding value is

\[V =\begin{cases} \dfrac{d b^{1-\eta}}{\eta(1+\varphi)}{\varphi}^\eta, & \varphi<b,\\[9pt] \dfrac{d}{1+\varphi}\biggl(\varphi-\dfrac{c}{d}\biggr), & \varphi\geq b,\end{cases}\]

where $\eta>1$ is the positive solution of the quadratic equation

\[\dfrac{\omega^2}{2}\eta(\eta-1)+(\lambda_0-\lambda_1)\eta-(r+\lambda_0)=0,\]

and $b = {{c \eta}/{(d(\eta-1))}}$ . Furthermore,

\[\tau\,:\!=\, \inf\{t\geq 0\,:\, \Phi^\circ_t\geq b\}\]

is an optimal stopping time. More explicitly, in terms of the process X, we have

\[\tau=\inf\biggl\{t\geq 0\,:\, X_t\geq x + \dfrac{\sigma}{\omega}\biggl(\ln\biggl(\dfrac{b}{\varphi}\biggr) +(\lambda_1-\lambda_0)t\biggr)+\dfrac{\mu_0+\mu_1}{2}t\biggr\},\]

where $\omega\,:\!=\, (\mu_1-\mu_0)/\sigma$ .
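
For concreteness, the following Python sketch evaluates the closed-form value displayed above; the function name and the parameter choices in the example call are ours and are purely illustrative.

```python
import numpy as np

def hiring_value(phi, c, d, r, lam0, lam1, mu0, mu1, sigma):
    """Closed-form value V of the hiring problem as a function of phi = pi/(1-pi)."""
    omega = (mu1 - mu0) / sigma
    # eta > 1 is the positive root of (omega^2/2) eta (eta-1) + (lam0-lam1) eta - (r+lam0) = 0
    qa, qb, qc = omega**2 / 2, lam0 - lam1 - omega**2 / 2, -(r + lam0)
    eta = (-qb + np.sqrt(qb**2 - 4 * qa * qc)) / (2 * qa)
    b = c * eta / (d * (eta - 1))          # optimal threshold for Phi°
    if phi < b:
        return d * b**(1 - eta) * phi**eta / (eta * (1 + phi))
    return d * (phi - c / d) / (1 + phi)

# Illustrative parameter values
print(hiring_value(phi=0.5, c=1.0, d=2.0, r=0.05,
                   lam0=0.1, lam1=0.3, mu0=-0.5, mu1=0.5, sigma=1.0))
```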

6. An example: closing a short position

In this section we study an example of optimal closing of a short position under recall risk; see [11]. We consider a short position in an underlying stock with unknown drift, where the random horizon corresponds to a time point at which the counterparty recalls the position. Naturally, the counterparty favours a large drift, and we thus assume that the risk of recall is greater in the state with a small drift. A similar model (but with no recall risk) was studied in [5].

Let the stock price be modelled by geometric Brownian motion with dynamics

\[{\mathrm{d}} X_t = \mu(\theta) X_t\,{\mathrm{d}} t+ \sigma X_t \,{\mathrm{d}} W_t,\]

where the drift is state-dependent with $\mu(0)=\mu_0 <\mu_1=\mu(1)$ , and $\sigma$ is a known constant. We let $g(t,x,\theta)=h(t,x,\theta)=x\,{\mathrm{e}}^{-rt}$ and consider the stopping problem

\[V=\inf_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[{\mathrm{e}}^{-r(\tau\wedge\gamma)} X_{\tau\wedge\gamma}\bigr],\]

where

\[F_i(t)\,:\!=\, {\mathbb P}_\pi(\gamma>t\mid \theta=i) = {\mathrm{e}}^{-\lambda_i t},\]

with $\lambda_0>0=\lambda_1$ and

\[{\mathbb P}_\pi(\theta= 1)=\pi =1-{\mathbb P}_\pi(\theta = 0) .\]

Here r is a constant discount rate; to avoid degenerate cases, we assume that $r\in(\mu_0,\mu_1)$ .

Then the value function can be written as $V=v/(1+\varphi)$ , where

(14) \begin{equation}v = \inf_{\tau\in\mathcal T^X}{\mathbb E}_0\biggl[ {\mathrm{e}}^{-r\tau}F_0(\tau)X_{\tau}\bigl( 1+ \Phi^\circ_\tau \bigr) + \lambda_0\int_0^\tau \,{\mathrm{e}}^{-rt}F_0(t) X_{t}\bigl( 1+ \Phi^\circ_t \bigr)\,{\mathrm{d}} t\biggr],\end{equation}

with ${\mathrm{d}}\Phi^\circ_t = \lambda_0\Phi^\circ_t \,{\mathrm{d}} t + \omega \Phi^\circ_t\,{\mathrm{d}} W_t$ and $\Phi^\circ_0 = \varphi$ . Here $\omega = {{(\mu_1-\mu_0)}/{\sigma}}$ .

Another change of measure will remove the occurrences of X in (14). In fact, let $\tilde {\mathbb {P}}$ be a measure with

\[\dfrac{{\mathrm{d}}\tilde {\mathbb P}}{{\mathrm{d}}{\mathbb P}_0}\bigg\vert_{\mathcal F_t} = {\mathrm{e}}^{-({{\sigma^2}/{2}})t + \sigma W_t},\]

so that $\tilde W_t=-\sigma t+W_t$ is a $\tilde{\mathbb P}$ -Brownian motion. Then

(15) \begin{equation}v=x\inf_{\tau\in\mathcal T^X}\tilde{\mathbb {E}}\biggl[ {\mathrm{e}}^{-(r+\lambda_0-\mu_0)\tau}\bigl( 1+ \Phi^\circ_\tau \bigr) + \lambda_0\int_0^\tau \,{\mathrm{e}}^{-(r+\lambda_0-\mu_0)t} \bigl( 1+ \Phi^\circ_t \bigr)\,{\mathrm{d}} t\biggr],\end{equation}

with

\[{\mathrm{d}}\Phi^\circ_t=(\lambda_0 +\sigma\omega)\Phi^\circ_t\,{\mathrm{d}} t + \omega \Phi^\circ_t\,{\mathrm{d}}\tilde W_t.\]

The optimal stopping problem (15) is a one-dimensional time-homogeneous problem, and is thus straightforward to analyse using standard stopping theory. Indeed, setting

\[\bar v\,:\!=\, \dfrac{v}{x}= \inf_{\tau\in\mathcal T^X}\tilde{\mathbb E}\biggl[ {\mathrm{e}}^{-(r+\lambda_0-\mu_0)\tau}\bigl( 1+ \Phi^\circ_\tau \bigr) + \lambda_0\int_0^\tau \,{\mathrm{e}}^{-(r+\lambda_0-\mu_0)t} \bigl( 1+ \Phi^\circ_t \bigr)\,{\mathrm{d}} t\biggr],\]

the associated free-boundary problem is to find $(\bar v,B)$ such that

(16) \begin{equation}\begin{cases}\dfrac{\omega^2\varphi^2}{2}\bar v_{\varphi\varphi} + (\lambda_0+\mu_1-\mu_0)\varphi\bar v_\varphi-(r+\lambda_0-\mu_0)\bar v + \lambda_0(1+\varphi)=0, & \varphi<B,\\[7pt]\bar v(\varphi)=1+\varphi, &\varphi\geq B,\\\bar v_\varphi(B)=1,\end{cases}\end{equation}

and such that $\bar v\leq 1+\varphi$ . Solving the free-boundary problem (16) gives

\[B=\dfrac{\eta(r-\mu_0)(\mu_1-r)}{(1-\eta)(r+\lambda_0-\mu_0)(\lambda_0+\mu_1-r)}\]

and

\[\bar v(\varphi)=\begin{cases}\dfrac{r-\mu_0}{(1-\eta)(r+\lambda_0-\mu_0)}\biggl(\dfrac{\varphi}{B}\biggr)^\eta-\dfrac{\lambda_0}{\mu_1-r}\varphi +\dfrac{\lambda_0}{r+\lambda_0-\mu_0}, & \varphi <B,\\[7pt]1+\varphi, & \varphi\geq B,\end{cases}\]

where $\eta<1$ is the positive solution of the quadratic equation

\[\dfrac{\omega^2}{2}\eta(\eta-1)+(\lambda_0+\mu_1-\mu_0)\eta-(r+\lambda_0-\mu_0)=0.\]

Note that $\bar v$ is concave since it satisfies the smooth-fit condition at B and since $\eta<1$ , so $\bar v(\varphi)\leq 1+\varphi$ . A standard verification argument then gives that

\[V=\frac{x}{1+\varphi}\bar v(\varphi),\]

and

\[\tau_B\,:\!=\, \inf\{t\geq 0\,:\, \Phi^\circ_t\geq B\}= \inf\{t\geq 0\,:\, \Phi_t\geq B\,{\mathrm{e}}^{-\lambda_0t}\}\]

is optimal in (14).
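
The following Python sketch evaluates the expressions for $\eta$, B, and $\bar v$ displayed above and returns the resulting value $V = x\,\bar v(\varphi)/(1+\varphi)$; the function name and the parameter values in the example call are illustrative only.

```python
import numpy as np

def short_position_value(x, phi, r, lam0, mu0, mu1, sigma):
    """Value V = x * vbar(phi) / (1 + phi) from the closed-form solution of (16)."""
    omega = (mu1 - mu0) / sigma
    # eta < 1 is the positive root of
    # (omega^2/2) eta (eta-1) + (lam0+mu1-mu0) eta - (r+lam0-mu0) = 0
    qa = omega**2 / 2
    qb = lam0 + mu1 - mu0 - omega**2 / 2
    qc = -(r + lam0 - mu0)
    eta = (-qb + np.sqrt(qb**2 - 4 * qa * qc)) / (2 * qa)
    B = eta * (r - mu0) * (mu1 - r) / ((1 - eta) * (r + lam0 - mu0) * (lam0 + mu1 - r))
    if phi >= B:
        vbar = 1 + phi                     # stopping region: close the position immediately
    else:
        vbar = ((r - mu0) / ((1 - eta) * (r + lam0 - mu0)) * (phi / B)**eta
                - lam0 / (mu1 - r) * phi + lam0 / (r + lam0 - mu0))
    return x * vbar / (1 + phi)

# Illustrative parameter values with mu0 < r < mu1 and lam0 > 0 = lam1
print(short_position_value(x=100.0, phi=0.05, r=0.05,
                           lam0=0.5, mu0=0.0, mu1=0.1, sigma=0.3))
```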

7. An example: a sequential testing problem with a random horizon

Consider the sequential testing problem for a Wiener process, i.e. the problem of determining, as quickly and as accurately as possible, an unknown drift $\theta$ from observations of the process

\[X_t=\theta t + \sigma W_t.\]

Similar to the classical version (see [21]), we assume that $\theta$ is Bernoulli-distributed with ${\mathbb P}_\pi(\theta=1)=\pi=1-{\mathbb P}_\pi(\theta=0)$, where $\pi\in(0,1)$. In [19], the sequential testing problem has been studied under a random horizon. Here we consider an instance of a testing problem which further extends the set-up by allowing the distribution of the random horizon to depend on the unknown state.

More specifically, we assume that when $\theta = 1$ , then the horizon $\gamma$ is infinite, i.e. $F_1(t) = 1$ for all t; and when $\theta = 0$ , the time horizon is exponentially distributed with rate $\lambda$ , i.e. $F_0(t) = {\mathrm{e}}^{-\lambda t}$ . Mimicking the classical formulation of the problem, we study the problem of minimizing

\[{\mathbb P}_\pi(\theta\not= d) + c{\mathbb E}_\pi[\tau]\]

over all stopping times $\tau\in\mathcal T^{X,\gamma}$ and $\mathcal F^{X,\gamma}_\tau$ -measurable decision rules d with values in $\{0,1\}$ . By standard methods, the above optimization problem reduces to a stopping problem

\[V=\inf_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[\hat\Pi_\tau\wedge (1-\hat\Pi_\tau) + c\tau\bigr],\]

where

\[\hat\Pi_t\,:\!=\, {\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^{X,\gamma}_t\bigr).\]

Moreover, the process $\hat\Pi$ satisfies

\[\hat\Pi_t=\begin{cases} \Pi^\circ_t, & t<\gamma,\\0, & t\geq \gamma,\end{cases}\]

where

\[\Pi^\circ_t=\dfrac{\Pi_t}{\Pi_t + (1-\Pi_t)\,{\mathrm{e}}^{-\lambda t}}=\dfrac{{\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^{X}_t\bigr)}{{\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^{X}_t\bigr) + \bigl(1-{\mathbb P}_\pi\bigl(\theta=1\mid \mathcal F^{X}_t\bigr)\bigr)\,{\mathrm{e}}^{-\lambda t}},\]

and it follows that

\begin{align*}V &= \inf_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[\hat\Pi_\tau\wedge (1-\hat\Pi_\tau) + c\tau\bigr]\\ &= \inf_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[\bigl(\Pi^\circ_\tau\wedge \bigl(1-\Pi_\tau^\circ\bigr)\bigr)1_{\{\tau<\gamma\}} + c(\tau\wedge\gamma) \bigr]\\ &=\inf_{\tau\in\mathcal T^{X,\gamma}}{\mathbb E}_\pi\bigl[\bigl(\Pi^\circ_\tau\wedge \bigl(1-\Pi_\tau^\circ\bigr)+ c\tau\bigr)1_{\{\tau<\gamma\}} + c\gamma 1_{\{\gamma\leq \tau\}}\bigr].\end{align*}

In other words, $g_i(t,\pi) = \pi \wedge (1-\pi) +ct$ and $h_i(t,\pi) = ct$ for $i \in\{0,1\}$ . Following the general methodology leading up to Theorem 1, we find that

\begin{align*} V &= \inf_{\tau\in\mathcal T^{X}}{\mathbb E}_\pi\biggl[\bigl(\Pi^\circ_\tau\wedge \bigl(1-\Pi_\tau^\circ\bigr)+c\tau\bigr)((1-\Pi_\tau)F_0(\tau) +\Pi_\tau)- c\int_0^\tau t(1-\Pi_t)\,{\mathrm{d}} F_0(t)\biggr]\\ &= \inf_{\tau\in\mathcal T^{X}} {\mathbb E}_\pi\biggl[\bigl(\Pi^\circ_\tau\wedge \bigl(1-\Pi_\tau^\circ\bigr)+c\tau\bigr)((1-\Pi_\tau)F_0(\tau) +\Pi_\tau)- c\tau(1-\Pi_{\tau}) F_0(\tau) \\ &\quad - c\int_0^\tau F_0(t)\,{\mathrm{d}}(t(1-\Pi_t))\biggr]\\&= \inf_{\tau\in\mathcal T^{X}}{\mathbb E}_\pi\biggl[\bigl(\Pi^\circ_\tau\wedge \bigl(1-\Pi_\tau^\circ\bigr)\bigr)((1-\Pi_\tau)F_0(\tau) +\Pi_\tau)+ c\int_0^\tau((1-\Pi_t)F_0(t) +\Pi_t)\,{\mathrm{d}} t\biggr]\\ &= \dfrac{1}{1+\varphi} \inf_{\tau\in\mathcal T^{X}}{\mathbb E}_0\biggl[F_0(\tau)\bigl(\Phi^\circ_\tau\wedge 1\bigr)+ c\int_0^\tau F_0(t)\bigl(1 +\Phi^\circ_t\bigr) \,{\mathrm{d}} t\biggr].\end{align*}

Here $\Phi^\circ\,:\!=\, \Pi^\circ/(1-\Pi^\circ)$ satisfies

\[{\mathrm{d}}\Phi^\circ_t=\lambda \Phi^\circ_t\,{\mathrm{d}} t + \omega \Phi^\circ_t \,{\mathrm{d}} W_t,\]

where $\omega = {{1}/{\sigma}}$ .

Standard stopping theory can now be applied to solve the sequential testing problem with a random horizon. Setting

\[v(\varphi)\,:\!=\,\inf_{\tau\in\mathcal T^{X}}{\mathbb E}_0\biggl[F_0(\tau)\bigl(\Phi^\circ_\tau\wedge 1\bigr)+ c\int_0^\tau F_0(t)\bigl(1 +\Phi^\circ_t\bigr) \,{\mathrm{d}} t\biggr],\]

where $\Phi^\circ_0=\varphi$ , one expects a two-sided stopping region $(0,A]\cup[B,\infty)$ , and v to satisfy

\[\begin{cases}\dfrac{1}{2}\omega^2\varphi^2 v_{\varphi\varphi}+\lambda \varphi v_\varphi -\lambda v+c(1+\varphi) =0, & \varphi \in (A,B),\\v(A) =A,\\v_\varphi(A) =1,\\v(B) =1,\\v_\varphi(B) =0,\end{cases}\]

for some constants A, B with $0<A<1<B$ . The general solution of the ODE is easily seen to be

\[v(\varphi)=C_1 \varphi^{-{{2\lambda}/{\omega^2}}}+C_2 \varphi+\dfrac{c}{\lambda} - \dfrac{c}{\lambda+\frac{1}{2}\omega^2}\varphi\ln(\varphi),\]

where $C_1, C_2$ are arbitrary constants. Since the stopping region is two-sided, explicit solutions are not expected. Instead, using the four boundary conditions, equations for the unknowns $C_1$ , $C_2$ , A, and B can be derived using standard methods; we omit the details.
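
To indicate what these equations look like in practice, the following Python sketch imposes the four boundary conditions on the general solution above (with $k=2\lambda/\omega^2$ and the derivative obtained by direct differentiation) and solves the resulting nonlinear system numerically; the parameter values and the initial guess are illustrative and may need adjusting for the solver to land on a root with $0<A<1<B$.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (hypothetical values)
sigma, lam, c = 1.0, 0.5, 0.05
omega = 1.0 / sigma
k = 2 * lam / omega**2                  # exponent in the homogeneous solution
a = c / (lam + 0.5 * omega**2)          # coefficient of the phi*log(phi) term

def v(phi, C1, C2):
    """General solution of the ODE on the continuation region."""
    return C1 * phi**(-k) + C2 * phi + c / lam - a * phi * np.log(phi)

def v_prime(phi, C1, C2):
    """Derivative of the general solution with respect to phi."""
    return -k * C1 * phi**(-k - 1) + C2 - a * (np.log(phi) + 1)

def boundary_conditions(z):
    C1, C2, A, B = z
    return [v(A, C1, C2) - A,            # value matching at A
            v_prime(A, C1, C2) - 1,      # smooth fit at A
            v(B, C1, C2) - 1,            # value matching at B
            v_prime(B, C1, C2)]          # smooth fit at B

C1, C2, A, B = fsolve(boundary_conditions, x0=[0.01, 0.5, 0.3, 3.0])
print(C1, C2, A, B)                      # expect 0 < A < 1 < B for a sensible root
```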

Acknowledgements

We wish to thank two anonymous referees for their thorough reading and constructive criticism. The feedback helped us improve the presentation and arguments considerably.

Funding information

Funding from the Swedish Research Council and from the Knut and Alice Wallenberg Foundation is gratefully acknowledged.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

[1] Chakrabarty, A. and Guo, X. (2012). Optimal stopping times with different information levels and with time uncertainty. In Stochastic Analysis and Applications to Finance: Essays in Honour of Jia-an Yan, pp. 19–38. World Scientific.
[2] De Angelis, T. (2020). Optimal dividends with partial information and stopping of a degenerate reflecting diffusion. Finance Stoch. 24, 71–123.
[3] De Angelis, T., Gensbittel, F. and Villeneuve, S. (2021). A Dynkin game on assets with incomplete information on the return. Math. Operat. Res. 46, 28–60.
[4] Décamps, J.-P., Mariotti, T. and Villeneuve, S. (2005). Investment timing under incomplete information. Math. Operat. Res. 30, 472–500.
[5] Ekström, E. and Lu, B. (2011). Optimal selling of an asset under incomplete information. Internat. J. Stoch. Anal. 2011, 543590.
[6] Ekström, E. and Vaicenavicius, J. (2016). Optimal liquidation of an asset under drift uncertainty. SIAM J. Financial Math. 7, 357–381.
[7] Ekström, E. and Vannestål, M. (2019). American options and incomplete information. Internat. J. Theoret. Appl. Finance 22, 1950035.
[8] Engehagen, S., Hornslien, M., Lavrutich, M. and Tønnessen, S. (2021). Optimal harvesting of farmed salmon during harmful algal blooms. Marine Policy 129, 104528.
[9] Gapeev, P. V. (2012). Pricing of perpetual American options in a model with partial information. Internat. J. Theoret. Appl. Finance 15, 1250010.
[10] Gapeev, P. V. and Shiryaev, A. N. (2011). On the sequential testing problem for some diffusion processes. Stochastics 83, 519–535.
[11] Glover, K. and Hulley, H. (2022). Short selling with margin risk and recall risk. Internat. J. Theoret. Appl. Finance 25, 2250007.
[12] Guo, X. and Zhang, Q. (2004). Closed-form solutions for perpetual American put options with regime switching. SIAM J. Appl. Math. 64, 2034–2049.
[13] Henderson, V., Kladvko, K., Monoyios, M. and Reisinger, C. (2020). Executive stock option exercise with full and partial information on a drift change point. SIAM J. Financial Math. 11, 1007–1062.
[14] Johnson, P. and Peskir, G. (2018). Sequential testing problems for Bessel processes. Trans. Amer. Math. Soc. 370, 2085–2113.
[15] Klein, M. (2009). Comment on ‘Investment timing under incomplete information’. Math. Operat. Res. 34, 249–254.
[16] Lakner, P. (1995). Utility maximization with partial information. Stoch. Process. Appl. 56, 247–273.
[17] Lempa, J. and Matomäki, P. (2013). A Dynkin game with asymmetric information. Stochastics 85, 763–788.
[18] Liptser, R. S. and Shiryaev, A. N. (2001). Statistics of Random Processes II: Applications, 2nd edn (Stochastic Modelling and Applied Probability 6). Springer, Berlin.
[19] Novikov, A. and Palacios-Soto, J. L. (2020). Sequential hypothesis tests under random horizon. Sequential Anal. 39, 133–166.
[20] Protter, P. E. (2005). Stochastic Integration and Differential Equations, 2nd edn (Stochastic Modelling and Applied Probability 21). Springer, Berlin.
[21] Shiryaev, A. N. (1967). Two problems of sequential analysis. Cybernetics 3, 63–69.
[22] Shiryaev, A. N. (2008). Optimal Stopping Rules (Stochastic Modelling and Applied Probability 8). Springer, Berlin.
[23] Vaicenavicius, J. (2020). Asset liquidation under drift uncertainty and regime-switching volatility. Appl. Math. Optimization 81, 757–784.