
Optimal multiple stopping problem under nonlinear expectation

Published online by Cambridge University Press:  08 September 2022

Hanwu Li*
Affiliation:
Shandong University
*Postal address: Research Center for Mathematics and Interdisciplinary Sciences, Binhai Rd 72, Qingdao, China. Email address: [email protected]

Abstract

In this paper, we study the optimal multiple stopping problem under the filtration-consistent nonlinear expectations. The reward is given by a set of random variables satisfying some appropriate assumptions, rather than a process that is right-continuous with left limits. We first construct the optimal stopping time for the single stopping problem, which is no longer given by the first hitting time of processes. We then prove by induction that the value function of the multiple stopping problem can be interpreted as the one for the single stopping problem associated with a new reward family, which allows us to construct the optimal multiple stopping times. If the reward family satisfies some strong regularity conditions, we show that the reward family and the value functions can be aggregated by some progressive processes. Hence, the optimal stopping times can be represented as hitting times.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The optimal single stopping problem, under both uncertainty and ambiguity (or Knightian uncertainty, especially drift uncertainty), has attracted a great deal of attention and been well studied; we may refer to the papers [Reference Bayraktar and Yao1], [Reference Bayraktar and Yao2], [Reference Cheng and Riedel6], [Reference Peskir and Shiryaev17]. Consider a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},\mathbb{P})$ satisfying the usual conditions of right-continuity and completeness. Given a nonnegative and adapted reward process $\{X_t\}_{t\in[0,T]}$ with some integrability and regularity conditions, we then define

\begin{align*}V_0=\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X_\tau],\end{align*}

where $\mathcal{S}_0$ is the collection of all stopping times taking values between 0 and T. The operator $\mathcal{E}[{\cdot}]$ corresponds to the classical expectation $\mathbb{E}[{\cdot}]$ when the agent faces only risk or uncertainty (i.e., he does not know the future state, but knows exactly the distribution of the reward process), while it corresponds to some nonlinear expectation if ambiguity is taken into account (i.e., the agent does not even have full confidence about the distribution). In both situations, the main objective is to compute the value $V_0$ as explicitly as possible and find some stopping time $\tau^*$ at which the supremum is attained, that is, $V_0=\mathcal{E}[X_{\tau^*}]$ . For this purpose, consider the value function

\begin{align*}V_t=\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_t}\mathcal{E}_t[X_\tau],\end{align*}

where $\mathcal{S}_t$ is the set of stopping times greater than t. Assuming some regularity of the reward family $\{X_t\}_{t\in[0,T]}$ and some appropriate conditions on the nonlinear conditional expectation $\mathcal{E}_t[{\cdot}]$ , we prove that the process $\{V_t\}_{t\in[0,T]}$ admits a modification that is right-continuous with left limits (RCLL), which, for simplicity, is still denoted by $\{V_t\}_{t\in[0,T]}$ . Furthermore, the stopping time given in terms of the first hitting time

\begin{align*}\tau=\inf\{t\geq 0\,:\, V_t=X_t\}\end{align*}

is optimal, and $\{V_t\}_{t\in[0,T]}$ is the smallest $\mathcal{E}$ -supermartingale (which reduces to the classical supermartingale when $\mathcal{E}[{\cdot}]$ is the linear expectation) dominating the reward process $\{X_t\}_{t\in[0,T]}$ . One of the most important applications of the single optimal stopping problem is the pricing of American options.
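
To fix ideas, a standard example of such a nonlinear expectation under drift uncertainty (a textbook illustration, not taken from this paper, in the spirit of the drift-uncertainty models considered in, e.g., [Reference Cheng and Riedel6]) is the upper expectation over a set of equivalent priors on a Brownian filtration,

\begin{align*}\mathcal{E}[\xi]=\sup_{|\theta|\leq \kappa}\mathbb{E}^{\mathbb{Q}^\theta}[\xi], \qquad \frac{d\mathbb{Q}^\theta}{d\mathbb{P}}=\exp\!\left(\int_0^T\theta_s\,dB_s-\frac{1}{2}\int_0^T|\theta_s|^2\,ds\right),\end{align*}

where $\theta$ ranges over progressively measurable processes bounded by a constant $\kappa>0$; this sublinear expectation coincides with the g-expectation generated by $g(t,z)=\kappa|z|$ (cf. Example 2.1(2) below).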

Motivated by the pricing of financial derivatives with several exercise rights in the energy market (swing options), one needs to solve an optimal multiple stopping problem. Mathematically, given a reward process $\{X_t\}_{t\in[0,T]}$, if an agent has d exercise rights, the price of such a contract is defined as follows:

\begin{align*}v_0=\sup_{(\tau_1,\cdots,\tau_d)\in\widetilde{\mathcal{S}}_0^d}\mathbb{E} \!\left[\sum_{i=1}^{d}X_{\tau_i} \right].\end{align*}

To avoid triviality, we assume that there exists a constant $\delta>0$, which represents the length of the refracting time interval, such that the difference between any two successive exercise times is at least $\delta$. Accordingly, $\widetilde{\mathcal{S}}_0^d$ is the collection of stopping times $(\tau_1,\cdots,\tau_d)$ such that $\tau_1\geq 0$ and $\tau_j-\tau_{j-1}\geq \delta$ for any $j=2,\cdots,d$. There are several papers concerning this kind of problem. To name a few, [Reference Bender and Schoenmakers3] and [Reference Meinshausen and Hambly16] mainly deal with the discrete-time case, focusing on Monte Carlo methods and algorithms, while [Reference Carmona and Touzi5] investigates the continuous-time case, allowing the time horizon to be either finite or infinite. It is worth pointing out that none of the existing literature considers the multiple stopping problem under Knightian uncertainty.
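
For orientation (a heuristic sketch under the classical linear expectation, not part of the original argument), the multiple stopping problem can be reduced to nested single stopping problems; for $d=2$, ignoring the boundary effect when $\tau_1+\delta>T$ and assuming suitable integrability, the tower property gives

\begin{align*}v_0=\sup_{\tau_1\in\mathcal{S}_0}\mathbb{E}\!\left[X_{\tau_1}+\mathop{\mathrm{ess\,sup}}\limits_{\tau_2\in\mathcal{S}_{\tau_1+\delta}}\mathbb{E}\big[X_{\tau_2}\,\big|\,\mathcal{F}_{\tau_1}\big]\right],\end{align*}

so that the second exercise right is valued by a single stopping problem started after the refracting period (see [Reference Carmona and Touzi5]). The reduction carried out in Section 4 and the appendix follows the same principle, but at the level of reward families and under nonlinear expectation.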

In fact, to make the value function well-defined for both the single and the multiple stopping problem, the reward can be given by a set of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ satisfying some compatibility properties, which means that we do not need to assume that the reward family can be aggregated into a progressive process. Under this weaker assumption on the reward family, [Reference El Karoui8] and [Reference Kobylanski, Quenez and Rouy-Mironescu15] establish the existence of the optimal stopping times for the single stopping problem and multiple stopping problem, respectively. Without aggregation of the reward family and the value function, the optimal stopping time is no longer given by the first hitting time of processes but by the essential infimum over an appropriate set of stopping times.

In the present work, we study the multiple stopping problem under Knightian uncertainty without the requirement of aggregation of the reward family. We will use the filtration-consistent nonlinear expectations established in [Reference Bayraktar and Yao1] to model Knightian uncertainty. First, we focus on the single stopping problem. As in the classical case, the value function is a nonlinear supermartingale, namely the smallest one dominating the reward family. Furthermore, the value function has the same regularity as the reward family in the single stopping case. Applying an approximation method, we prove the existence of the optimal stopping times under the assumption that the reward family is continuous along stopping times under nonlinear expectation (see Definition 3.3). It is important to note that in proving the existence of optimal stopping times, we need the assumption that the nonlinear expectation is sub-additive and positively homogeneous, which is to say that the nonlinear expectation is an upper expectation. Hence, this optimal stopping problem is in fact a ‘ $\sup_\tau \sup_P$ ’ problem.

For the multiple stopping case, one important observation is that the value function of the d-stopping problem coincides with that of the single stopping case corresponding to a new reward family, where the new reward family is given by the maximum of a set of value functions associated with the $(d-1)$ -stopping problem. Therefore, we may construct the optimal stopping times by an induction method, provided that this new reward family satisfies the conditions under which the optimal single stopping time exists. The main difficulty in this problem is due to some measurability issues. To overcome this difficulty, we need to slightly modify the reward family to a new one and to establish the regularity of the induced value functions.

Recall that in [Reference Bayraktar and Yao2] and [Reference Cheng and Riedel6], for the single stopping problems under Knightian uncertainty, the reward is given by an RCLL adapted process, and the optimal stopping time can be represented as a first hitting time, which provides an efficient way to calculate an optimal stopping time. In our setting, if the reward family satisfies some stronger regularity conditions than those required in the existence result, we can prove that the reward family and the associated value function can be aggregated into some progressively measurable processes. Therefore, in this case, the optimal stopping times can be interpreted in terms of hitting times of processes.

The paper is organized as follows. We first recall some basic notation and results about the $\mathbb{F}$ -expectation in Section 2. In Section 3, we investigate the properties of the value function and construct the optimal stopping times for the optimal single stopping problem under nonlinear expectations. Then we solve the optimal double stopping problem under nonlinear expectations in Section 4. In Section 5 we study some aggregation results when the reward family satisfies some strong regularity conditions, and then interpret the optimal stopping times as the first hitting times of processes. The optimal d-stopping problem appears in the appendix.

2. $\mathbb{F}$ -expectations and their properties

In this paper, we fix a finite time horizon $T>0$ . Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space equipped with a filtration $\mathbb{F}=\{\mathcal{F}_t\}_{t\in[0,T]}$ satisfying the usual conditions of right-continuity and completeness. We denote by $L^0(\mathcal{F}_T)$ the collection of all $\mathcal{F}_T$ -measurable random variables. We first recall some basic notation and properties of the so-called $\mathbb{F}$ -expectation introduced in [Reference Bayraktar and Yao1]. Roughly speaking, the $\mathbb{F}$ -expectation is a nonlinear expectation defined on a subspace of $L^0(\mathcal{F}_T)$ which satisfies the following algebraic properties.

Definition 2.1. Let $\mathscr{D}_T$ denote the collection of all non-empty subsets $\Lambda$ of $L^0(\mathcal{F}_T)$ satisfying the following:

  1. (D1) $0,1\in\Lambda$ ;

  2. (D2) for any $\xi,\eta\in\Lambda$ and $A\in\mathcal{F}_T$, the random variables $\xi+\eta$, $I_A \xi$, and $|\xi|$ all belong to $\Lambda$;

  3. (D3) for any $\xi,\eta\in L^0(\mathcal{F}_T)$ with $0\leq \xi\leq \eta$ , almost surely (a.s.), if $\eta\in\Lambda$ , then $\xi\in\Lambda$ .

Definition 2.2. ([Reference Bayraktar and Yao1]) An $\mathbb{F}$ -consistent nonlinear expectation ( $\mathbb{F}$ -expectation for short) is a pair $(\mathcal{E},\Lambda)$ in which $\Lambda\in\mathscr{D}_T$ and $\mathcal{E}$ denotes a family of operators $\{\mathcal{E}_t[{\cdot}]\,:\,\Lambda\mapsto\Lambda_t\,:\!=\,\Lambda\cap L^0(\mathcal{F}_t)\}_{t\in[0,T]}$ satisfying the following hypotheses for any $\xi,\eta\in\Lambda$ and $t\in[0,T]$ :

  1. (A1) Monotonicity (positively strict): $\mathcal{E}_t[\xi]\leq \mathcal{E}_t[\eta]$ a.s. if $\xi\leq \eta$ a.s. Moreover, if $0\leq \xi\leq \eta$ a.s. and $\mathcal{E}_0[\xi]=\mathcal{E}_0[\eta]$ , then $\xi=\eta$ a.s.

  2. (A2) Time-consistency: $\mathcal{E}_s[\mathcal{E}_t[\xi]]=\mathcal{E}_s[\xi]$ , a.s., for any $0\leq s\leq t\leq T$ .

  3. (A3) Zero–one law: $\mathcal{E}_t[\xi I_A]=\mathcal{E}_t[\xi]I_A$ , a.s., for any $A\in\mathcal{F}_t$ .

  4. (A4) Translation-invariance: $\mathcal{E}_t[\xi+\eta]=\mathcal{E}_t[\xi]+\eta$ , a.s., if $\eta\in\Lambda_t$ .

Example 2.1. The following pairs are $\mathbb{F}$ -expectations:

  1. (1) $\big(\{\mathbb{E}_t[{\cdot}]\}_{t\in[0,T]}, L^1(\mathcal{F}_T)\big)$ : the classical expectation $\mathbb{E}$ .

  2. (2) $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ : the g-expectation with Lipschitz generator g(t, z) which is progressively measurable and square-integrable, and which satisfies $g(t,0)=0$ (see [Reference Bayraktar and Yao2], [Reference Coquet, Hu, Mémin and Peng7]); the defining backward stochastic differential equation is recalled after this example.

  3. (3) $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^e(\mathcal{F}_T)\big)$ : the g-expectation with convex generator g(t, z) having quadratic growth in z and satisfying $g(t,0)=0$ , where $L^e(\mathcal{F}_T)\,:\!=\,\{\xi\in L^0(\mathcal{F}_T)\,:\, \mathbb{E}[\!\exp(\lambda|\xi|)]<\infty, \forall \lambda>0\}$ (see [Reference Bayraktar and Yao2]).

  4. (4) Let $\mathcal{P}$ be a set of probability measures satisfying the following conditions:

    1. (i) For any $\mathbb{Q}\in \mathcal{P}$ , $\mathbb{Q}$ is equivalent to $\mathbb{P}$ and the density process is bounded away from zero by a constant.

    2. (ii) Let $\mathbb{Q}^i\in\mathcal{P}$ with density process $\big\{q^i_t\big\}_{t\in[0,T]}$ , $i=1,2$ . Fix a stopping time $\tau$ . Define a new measure $\mathbb{Q}$ with density process $\{q_t\}_{t\in[0,T]}$ , where

      \begin{align*} q_t=\begin{cases} q^1_t, &0\leq t\leq \tau;\\[4pt] \frac{q^1_\tau q^2_t}{q^2_\tau}, &\tau<t\leq T.\end{cases}\end{align*}
      Then we have $\mathbb{Q}\in\mathcal{P}$ .

    For any $\xi\in L^2(\mathcal{F}_T)$ , set

    \begin{align*} \mathcal{E}_t[\xi]=\mathop{\mathrm{ess\,inf}}\limits_{\mathbb{Q}\in \mathcal{P}}\mathbb{E}_t^\mathbb{Q}[\xi].\end{align*}
    Then $\big(\{\mathcal{E}_t[{\cdot}]\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ is an $\mathbb{F}$ -expectation. Actually, this kind of nonlinear expectation can be regarded as a coherent risk measure. More examples can be found in [Reference Föllmer and Schied11].
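
For the g-expectations appearing in (2) and (3) above, recall (as a standard illustration on a Brownian filtration; see [Reference Coquet, Hu, Mémin and Peng7]) that $\mathcal{E}^g_t[\xi]\,:\!=\,Y_t$, where (Y, Z) is the solution of the backward stochastic differential equation

\begin{align*} Y_t=\xi+\int_t^T g(s,Z_s)\,ds-\int_t^T Z_s\,dB_s, \qquad t\in[0,T].\end{align*}

The condition $g(t,0)=0$ ensures that constants are preserved, and the comparison theorem for backward stochastic differential equations yields the monotonicity required in (A1).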

The pair $(\{\underline{\mathcal{E}}_t[{\cdot}]\},\textrm{Dom}(\mathcal{E}))$ is almost an $\mathbb{F}$ -expectation, where

\begin{align*}\underline{\mathcal{E}}_t[\xi]\,:\!=\,\mathop{\mathrm{ess\,inf}}\limits_{i\in\mathcal{I}}\mathcal{E}^i_t[\xi]\end{align*}

and $\{\mathcal{E}^i\}_{i\in\mathcal{I}}$ is a stable class of $\mathbb{F}$ -expectations (for the definition of stable class, we may refer to Definition 3.2 in [Reference Bayraktar and Yao1]). By Proposition 4.1 in [Reference Bayraktar and Yao2], the family of operators $\{\underline{\mathcal{E}}_t[{\cdot}]\}_{t\in[0,T]}$ satisfies (A2)–(A4) in Definition 2.2 as well as

\begin{align*}\underline{\mathcal{E}}_t[\xi]\leq \underline{\mathcal{E}}_t[\eta],\textrm{ a.s., for any } \xi,\eta\in \textrm{Dom}(\mathcal{E}) \textrm{ with } \xi\leq \eta \text{ a.s.}\end{align*}

That is to say, the nonlinear operator $\underline{\mathcal{E}}_t[{\cdot}]$ satisfies all of the properties (A1)–(A4) except for the strict part of the monotonicity in (A1).

For notational simplicity, we will substitute $\mathcal{E}[{\cdot}]$ for $\mathcal{E}_0[{\cdot}]$ . We denote the domain $\Lambda$ by Dom $(\mathcal{E})$ and introduce the following subsets of Dom $(\mathcal{E})$ :

\begin{align*} \textrm{Dom}_\tau(\mathcal{E})&\,:\!=\,\textrm{Dom}(\mathcal{E})\cap L^0(\mathcal{F}_\tau), \ \forall \tau\in\mathcal{S}_0,\\ \textrm{Dom}^+(\mathcal{E})&\,:\!=\,\{\xi\in \textrm{Dom}(\mathcal{E})\,:\,\xi\geq 0, \textrm{ a.s.}\},\\ \textrm{Dom}^{c}(\mathcal{E})&\,:\!=\,\{\xi\in \textrm{Dom}(\mathcal{E})\,:\,\xi\geq c, \textrm{ a.s., for some }c\in\mathbb{R}\}. \end{align*}

Definition 2.3. ([Reference Bayraktar and Yao1].)

  1. (1) An $\mathbb{F}$ -adapted process $X=\{X_t\}_{t\in[0,T]}$ is called an $\mathcal{E}$ -process if $X_t\in$ Dom $(\mathcal{E})$ , for any $t\in[0,T]$ .

  2. (2) An $\mathcal{E}$ -process is said to be an $\mathcal{E}$ -supermartingale (resp. $\mathcal{E}$ -martingale, $\mathcal{E}$ -submartingale) if for any $0\leq s\leq t\leq T$ , $\mathcal{E}_s[X_t]\leq$ (resp. $=$ , $\geq$ ) $X_s$ , a.s.

For any $\mathbb{F}$ -adapted process X, its right-limit process is defined as follows:

\begin{align*} X_t^+\,:\!=\,\liminf_{n\rightarrow\infty}X_{q_n^+(t)}, \textrm{ for any } t\in[0,T],\end{align*}

where $q_n^+(t)=\frac{\lceil 2^n t\rceil}{2^n}\wedge T$ . Let X be an $\mathcal{E}$ -process. For any stopping time $\tau\in \mathcal{S}^F_0$ , where $\mathcal{S}_0^F$ is the collection of all stopping times taking values in a finite set, by Condition (D2) in Definition 2.1, it is easy to check that $X_\tau\in$ Dom $_\tau(\mathcal{E})$ . For any $\xi\in$ Dom $(\mathcal{E})$ , $\big\{X^\xi_t\big\}_{t\in[0,T]}$ is an $\mathcal{E}$ -process, where $X_t^\xi=\mathcal{E}_t[\xi]$ . Therefore, for any $\tau\in\mathcal{S}_0^F$ , we may define an operator $\mathcal{E}_\tau[{\cdot}]\,:\,\textrm{Dom}(\mathcal{E})\mapsto\textrm{Dom}_\tau(\mathcal{E})$ by

\begin{align*} \mathcal{E}_\tau[\xi]\,:\!=\,X^\xi_\tau, \textrm{ for any } \xi\in\textrm{Dom}(\mathcal{E}).\end{align*}
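
As a simple illustration (a direct check, not part of the original text), if $\tau\in\mathcal{S}_0^F$ takes the finitely many values $t_1<\cdots<t_k$, then

\begin{align*} \mathcal{E}_\tau[\xi]=X^\xi_\tau=\sum_{j=1}^{k}I_{\{\tau=t_j\}}\,\mathcal{E}_{t_j}[\xi],\end{align*}

and each summand belongs to $\textrm{Dom}(\mathcal{E})$ by Condition (D2), so $\mathcal{E}_\tau[\xi]\in\textrm{Dom}_\tau(\mathcal{E})$; the hypotheses below are needed precisely to extend this operator beyond finitely valued stopping times.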

In order to make the operator $\mathcal{E}_\tau[{\cdot}]$ well-defined for any stopping time $\tau$ , we need to put the following hypotheses on the $\mathbb{F}$ -expectation and the associated domain Dom $(\mathcal{E})$ :

  1. (H0) For any $A\in\mathcal{F}_T$ with $P(A)>0$ , we have $\lim_{n\rightarrow\infty}\mathcal{E}[nI_A]=\infty$ .

  2. (H1) For any $\xi\in\textrm{Dom}^+(\mathcal{E})$ and any $\{A_n\}_{n\in\mathbb{N}}\subset \mathcal{F}_T$ with $\lim_{n\rightarrow\infty}\uparrow I_{A_n}=1$ , a.s., we have $\lim_{n\rightarrow\infty}\uparrow\mathcal{E}[\xi I_{A_n}]=\mathcal{E}[\xi]$ .

  3. (H2) For any $\xi,\eta\in\textrm{Dom}^+(\mathcal{E})$ and any $\{A_n\}_{n\in\mathbb{N}}\subset \mathcal{F}_T$ with $\lim_{n\rightarrow\infty}\downarrow I_{A_n}=0$ , a.s., we have $\lim_{n\rightarrow\infty}\downarrow\mathcal{E}[\xi+\eta I_{A_n}]=\mathcal{E}[\xi]$ .

  4. (H3) For any $\xi\in\textrm{Dom}^+(\mathcal{E})$ and $\tau\in\mathcal{S}_0$ , $X^{\xi,+}_\tau\in \textrm{Dom}^+(\mathcal{E})$ .

  5. (H4) $\textrm{Dom}(\mathcal{E})\in \widetilde{\mathscr{D}_T}\,:\!=\,\{\Lambda\in\mathscr{D}_T\,:\,\mathbb{R}\subset\Lambda\}$ .

Example 2.2. The $\mathbb{F}$ -expectations (1)–(3) listed in Example 2.1 satisfy (H0)–(H4).

Under the above assumptions, [Reference Bayraktar and Yao1] shows that the process $\big\{X_t^{\xi,+}\big\}_{t\in[0,T]}$ is an RCLL modification of $\big\{X_t^\xi\big\}_{t\in[0,T]}$ for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ . Then for any stopping time $\tau\in\mathcal{S}_0$ , the conditional $\mathbb{F}$ -expectation of $\xi\in \textrm{Dom}^+(\mathcal{E})$ at $\tau$ is given by

\begin{align*} \widetilde{\mathcal{E}}_\tau[\xi]\,:\!=\,X^{\xi,+}_\tau.\end{align*}

It is easy to check that $\widetilde{\mathcal{E}}_\tau[{\cdot}]$ is an operator from $\textrm{Dom}^+(\mathcal{E})$ to $\textrm{Dom}^+(\mathcal{E})_\tau\,:\!=\,\textrm{Dom}^+(\mathcal{E})\cap L^0(\mathcal{F}_\tau)$ . Furthermore, $\big\{\widetilde{\mathcal{E}}_t[{\cdot}]\big\}_{t\in[0,T]}$ defines an $\mathbb{F}$ -expectation and for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ , $\big\{\widetilde{\mathcal{E}}_t[\xi]\big\}_{t\in[0,T]}$ is an RCLL modification of $\{\mathcal{E}_t[\xi]\}_{t\in[0,T]}$ . For simplicity, we still denote $\widetilde{\mathcal{E}}_t[{\cdot}]$ by $\mathcal{E}_t[{\cdot}]$ , and it satisfies the following properties.

Proposition 2.1 ([Reference Bayraktar and Yao1].) For any $\xi,\eta\in {Dom}^+(\mathcal{E})$ and $\tau\in \mathcal{S}_0$ , the following hold:

  1. (1) Monotonicity (positively strict): $\mathcal{E}_\tau[\xi]\leq \mathcal{E}_\tau[\eta]$ a.s. if $\xi\leq \eta$ a.s. Moreover, if $\xi\leq \eta$ a.s. and $\mathcal{E}_\sigma[\xi]=\mathcal{E}_\sigma[\eta]$ a.s. for some $\sigma\in\mathcal{S}_0$ , then $\xi=\eta$ a.s.

  2. (2) Time-consistency: $\mathcal{E}_\sigma[\mathcal{E}_\tau[\xi]]=\mathcal{E}_\sigma[\xi]$ , a.s., for any $\tau,\sigma\in\mathcal{S}_0$ with $\sigma\leq \tau$ .

  3. (3) Zero–one law: $\mathcal{E}_\tau[\xi I_A]=\mathcal{E}_\tau[\xi]I_A$ , a.s., for any $A\in\mathcal{F}_\tau$ .

  4. (4) Translation-invariance: $\mathcal{E}_\tau[\xi+\eta]=\mathcal{E}_\tau[\xi]+\eta$ , a.s., if $\eta\in{Dom}^+_\tau(\mathcal{E})$ .

  5. (5) Local property: $\mathcal{E}_\tau[\xi I_A+\eta I_{A^c}]=\mathcal{E}_\tau[\xi]I_A+\mathcal{E}_\tau[\eta]I_{A^c}$ , a.s., for any $A\in\mathcal{F}_\tau$ .

  6. (6) Constant-preserving: $\mathcal{E}_\tau[\xi]=\xi$ , a.s., if $\xi\in{Dom}^+_\tau(\mathcal{E})$ .

Proposition 2.2 ([Reference Bayraktar and Yao1].) Let X be a nonnegative $\mathcal{E}$ -supermartingale. Then we have the following:

  1. (1) Assume either that $\mathrm{ess\,sup}_{t\in \mathcal{I}}X_t\in{Dom}^+(\mathcal{E})$ (where $\mathcal{I}$ is the set of all dyadic rational numbers less than T) or that for any sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset{Dom}^+(\mathcal{E})$ that converges a.s. to some $\xi\in L^0(\mathcal{F}_T)$ ,

    \begin{align*} \liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]<\infty \textrm{ implies } \xi\in {Dom}^+(\mathcal{E}).\end{align*}
    Then for any $\tau\in\mathcal{S}_0$ , $X^+_\tau\in {Dom}^+(\mathcal{E})$ .
  2. (2) If $X_t^+\in {Dom}^+(\mathcal{E})$ for any $t\in[0,T]$ , then $X^+$ is an RCLL $\mathcal{E}$ -supermartingale such that for any $t\in[0,T]$ , $X_t^+\leq X_t$ , a.s.

  3. (3) Moreover, if the function $t\mapsto\mathcal{E}[X_t]$ from $[0, T]$ to $\mathbb{R}$ is right-continuous, then $X^+$ is an RCLL modification of X. Conversely, if X has a right-continuous modification, then the function $t\mapsto\mathcal{E}[X_t]$ is right-continuous.

Fatou’s lemma and the dominated convergence theorem still hold for the conditional $\mathbb{F}$ -expectation $\mathcal{E}_\tau[{\cdot}]$ .

Proposition 2.3 ([Reference Bayraktar and Yao1].) Let $\{\xi_n\}_{n\in\mathbb{N}}\subset {Dom}^+(\mathcal{E})$ converge a.s. to some $\xi\in {Dom}^+(\mathcal{E})$ . Then for any $\tau\in\mathcal{S}_0$ , we have

\begin{align*}\mathcal{E}_\tau[\xi]\leq \liminf_{n\rightarrow\infty}\mathcal{E}_\tau[\xi_n]. \end{align*}

Furthermore, if there exists an $\eta\in {Dom}^+(\mathcal{E})$ such that $\xi_n\leq \eta$ a.s. for any $n\in\mathbb{N}$ , then the limit $\xi\in {Dom}^+(\mathcal{E})$ , and for any $\tau\in \mathcal{S}_0$ we have

\begin{align*}\mathcal{E}_\tau[\xi]= \lim_{n\rightarrow\infty}\mathcal{E}_\tau[\xi_n]. \end{align*}

Throughout this paper, we assume that the $\mathbb{F}$ -expectation satisfies the hypotheses (H0)–(H4) and the following condition:

  1. (H5) If the sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset \textrm{Dom}^+(\mathcal{E})$ converges to $\xi\in L^0(\mathcal{F}_T)$ a.s. and satisfies $\liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]<\infty$ , then we have $\xi\in\textrm{Dom}^+(\mathcal{E})$ .

This assumption is mainly used to prove the following lemma.

Lemma 2.1. Let $\Xi$ be a subset of ${Dom}^+(\mathcal{E})$ . Suppose that $\sup_{\xi\in\Xi}\mathcal{E}[\xi]<\infty$ . Set $\eta=\mathrm{ess\,sup}_{\xi\in\Xi}\xi$ . Then we have $\eta\in{Dom}^+(\mathcal{E})$ .

Proof. By the definition of essential supremum, there exists a sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset \Xi$ such that $\xi_n\rightarrow \eta$ a.s. Since $\liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]\leq \sup_{\xi\in\Xi}\mathcal{E}[\xi]<\infty$ , Assumption (H5) implies that $\eta\in \textrm{Dom}^+(\mathcal{E})$ .

Remark 2.1. The classical expectation naturally satisfies Assumption (H5), by Fatou’s lemma. Consider the g-expectation introduced in Example 2.1(2). If we additionally assume that the function g is convex in its second component, we may check that Fatou’s lemma still holds for the g-expectation (see Proposition A.1 in [Reference Ferrari, Li and Riedel10]). Hence, in this case, Assumption (H5) is fulfilled. However, for some other g-expectations, Assumption (H5) may not hold. We refer to Example 5.1 in [Reference Bayraktar and Yao2] as a counterexample.

Remark 2.2. By Corollary 2.2 and Propositions 2.7–2.9 in [Reference Bayraktar and Yao1], all the properties in this section still hold for the random variables in $\textrm{Dom}^{c}(\mathcal{E})$ .

3. The optimal single stopping problem under nonlinear expectation

In this section, we study the optimal single stopping problem under the $\mathbb{F}$ -expectation. Throughout this paper, for each fixed stopping time $\tau$ , $\mathcal{S}_\tau$ represents the collection of all stopping times taking values between $\tau$ and T. We now introduce the definition of an admissible family, which can be interpreted as the payoff process in the classical case.

Definition 3.1. A family of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be admissible if the following conditions are satisfied:

  1. (1) For all $\tau\in\mathcal{S}_0$ , $X(\tau)\in{Dom}^+_\tau(\mathcal{E}) $ .

  2. (2) For all $\tau,\sigma\in\mathcal{S}_0$ , we have $X(\tau)=X(\sigma)$ a.s. on the set $\{\tau=\sigma\}$ .

Remark 3.1. Since the $\mathbb{F}$ -expectation is translation-invariant, all the results in this paper still hold if the family of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ is bounded from below.

Now consider the reward given by the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ . For each $S\in\mathcal{S}_0$ , the value function at time S takes the following form:

(3.1) \begin{equation} v(S)=\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[X(\tau)]. \end{equation}

Definition 3.2. For each fixed $S\in\mathcal{S}_0$ , an admissible family $\{X(\tau),\tau\in\mathcal{S}_S\}$ is said to be an $\mathcal{E}$ -supermartingale system (resp. an $\mathcal{E}$ -martingale system) if, for any $\tau,\sigma\in\mathcal{S}_S$ with $\tau\leq \sigma$ a.s., we have

\begin{align*}\mathcal{E}_\tau[X(\sigma)]\leq X(\tau), \textrm{ a.s. } (\textrm{resp., } \mathcal{E}_\tau[X(\sigma)]= X(\tau), \textrm{ a.s.}). \end{align*}

Proposition 3.1. If $\{X(\tau),\tau\in\mathcal{S}_0\}$ is an admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ , then the value function $\{v(S), S\in\mathcal{S}_0\}$ defined by (3.1) has the following properties:

  1. (i) $\{v(S), S\in\mathcal{S}_0\}$ is an admissible family;

  2. (ii) $\{v(S), S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system which is greater than $\{X(S), S\in\mathcal{S}_0\}$ ;

  3. (iii) for any $S\in\mathcal{S}_0$ , we have

    (3.2) \begin{equation} \mathcal{E}[v(S)]=\sup_{\tau\in\mathcal{S}_S}\mathcal{E}[X(\tau)]. \end{equation}

Proof. The proof of this proposition is similar to the one in [Reference Grigorova13] (see Section 8) and to the proof of Propositions 1.1–1.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.

Remark 3.2. (1) For the cases when the admissible family $\{v(\tau),\tau\in\mathcal{S}_0\}$ can be aggregated, we may refer to Proposition 5.1 in Section 5.

(2) It follows from Equation (3.2) that

\begin{align*} \sup_{S\in\mathcal{S}_0}\mathcal{E}[v(S)]\leq \sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty. \end{align*}

Consequently, we obtain that $v(S)<\infty$ , a.s., for any $S\in\mathcal{S}_0$ .

(3) Assumption (H5) is mainly used to make sure that the value function v(S) at any stopping time S belongs to $\textrm{Dom}^+(\mathcal{E})$ . We can drop this assumption by requiring that the admissible family satisfy $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0} X(\tau)\in \textrm{Dom}(\mathcal{E})$ . Under this new condition, since $0\leq v(S)\leq \mathcal{E}_S[\eta]$ , it follows that $v(S)\in\textrm{Dom}^+(\mathcal{E})$ .

The following proposition gives the characterization of the optimal stopping time for the value function (3.1).

Proposition 3.2. For each fixed $S\in\mathcal{S}_0$ , let $\tau^*\in\mathcal{S}_S$ be such that $\mathcal{E}[X(\tau^*)]<\infty$ . The following statements are equivalent:

  1. (a) $\tau^*$ is S-optimal for v(S), i.e.,

    (3.3) \begin{equation} v(S)=\mathcal{E}_S[X(\tau^*)]; \end{equation}
  2. (b) $v(\tau^*)=X(\tau^*)$ and $\mathcal{E}[v(S)]=\mathcal{E}[v(\tau^*)]$ ;

  3. (c) $\mathcal{E}[v(S)]=\mathcal{E}[X(\tau^*)]$ .

Proof. The proof is the same as that of Proposition 4.1 in [Reference Grigorova12], so we omit it.

Remark 3.3. It is worth mentioning that most of the results in Propositions 3.1 and 3.2 still hold if the reward family is not ‘adapted’, which means that $X(\tau)$ is $\mathcal{F}_T$ -measurable rather than $\mathcal{F}_\tau$ -measurable for any $\tau\in\mathcal{S}_0$ . In fact, the first difference is that $\{v(S),S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system which is greater than $\{\mathcal{E}_S[X(S)],S\in\mathcal{S}_0\}$ . The second is that we need to replace $X(\tau^*)$ by $\mathcal{E}_{\tau^*}[X(\tau^*)]$ in the assertion (b) of Proposition 3.2. Furthermore, the results do not depend on the regularity of the reward family.

We now study the regularity of the value functions $\{v(\tau),\tau\in\mathcal{S}_0\}$ , after introducing the following definition of continuity.

Definition 3.3. An admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if for any $\tau\in\mathcal{S}_0$ and $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ such that $\tau_n\downarrow \tau$ a.s. (resp., $\tau_n\uparrow\tau$ a.s.), we have $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ . The family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is called continuous along stopping times in $\mathcal{E}$ -expectation (C $\mathcal{E}$ ) if it is both RC $\mathcal{E}$ and LC $\mathcal{E}$ .
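
For example (a sketch under additional assumptions, not part of the original text), suppose the reward is induced by a nonnegative RCLL $\mathcal{E}$ -process, $X(\tau)=X_\tau$, with $X_t\leq \eta$ a.s. for all $t\in[0,T]$ and some $\eta\in\textrm{Dom}^+(\mathcal{E})$. If $\tau_n\downarrow\tau$, then $X_{\tau_n}\rightarrow X_\tau$ a.s. by right-continuity, so the dominated convergence theorem (Proposition 2.3) gives $\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]=\mathcal{E}[X(\tau)]$; hence such a family is RC $\mathcal{E}$ . Section 5 studies the converse direction, namely when an admissible family can be aggregated into a process.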

Proposition 3.3. Suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ .

Proof. The proof is the same as that of Proposition 1.5 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with linear expectation $\mathbb{E}$ replaced by $\mathbb{F}$ -expectation $\mathcal{E}$ .

Remark 3.4. (i) By Remark 3.3, the above result does not rely on the ‘adapted’ property of the reward family.

(ii) For any fixed $\sigma\in\mathcal{S}_0$ , suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ , which means that if $S\in\mathcal{S}_\sigma$ and $\{S_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_\sigma$ satisfy $S_n\downarrow S$ , then we have $\lim_{n\rightarrow\infty}\mathcal{E}[X(S_n)]=\mathcal{E}[X(S)]$ . Then we can prove that the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ .

(iii) Furthermore, if the RC $\mathcal{E}$ admissible family $\{X(\tau),\tau\in\mathcal{S}_\sigma\}$ is only well-defined for the stopping times greater than $\sigma$ , then by a similar analysis as in the proof of Proposition 3.1, $\{v(S),S\in\mathcal{S}_0\}$ is still an $\mathcal{E}$ -supermartingale system, but without the dominance property that $v(S)\geq X(S)$ for $S\leq \sigma$ . We can then prove that the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ .

In order to show the existence of the optimal stopping time for the value function v(S), we need furthermore to assume that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies the following conditions:

  1. (H6) Sub-additivity: for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in\textrm{Dom}^+(\mathcal{E})$ , $\mathcal{E}_\tau[\xi+\eta]\leq \mathcal{E}_\tau[\xi]+\mathcal{E}_\tau[\eta]$ .

  2. (H7) Positive homogeneity: for any $\tau\in\mathcal{S}_0$ , $\lambda\geq 0$ , and $\xi\in \textrm{Dom}^+(\mathcal{E})$ , $\mathcal{E}_\tau[\lambda \xi]=\lambda\mathcal{E}_\tau[\xi]$ .

The main idea in proving the existence is to apply an approximation method. More precisely, for $\lambda\in(0,1)$ , we define an $\mathcal{F}_S$ -measurable random variable $\tau^\lambda(S)$ by

(3.4) \begin{equation} \tau^\lambda(S)=\mathrm{ess\,inf}\{\tau\in\mathcal{S}_S\,:\, \lambda v(\tau)\leq X(\tau),\textrm{ a.s.}\}. \end{equation}

Remark 3.5. It is important to note that this stopping time $\tau^\lambda(S)$ is defined as an essential infimum of a set of stopping times, instead of being defined trajectorially. It was introduced for the first time in [Reference Kobylanski, Quenez and Rouy-Mironescu15] (see Equation (1.6) in [Reference Kobylanski, Quenez and Rouy-Mironescu15]).

We will show that the sequence $\big\{\tau^\lambda(S)\big\}_{\lambda\in(0,1)}$ admits a limit as $\lambda$ goes to 1 and that the limit is the optimal stopping time. Our first observation is that the stopping time $\tau^\lambda(S)$ is $(1-\lambda)$ -optimal for the problem (3.2).

Lemma 3.1. Let the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfy all the assumptions (H0)–(H7), and suppose that $\{X(\tau),\tau\in\mathcal{S}_0\}$ is a C $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . For each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , the stopping time $\tau^\lambda(S)$ satisfies

(3.5) \begin{equation}\lambda \mathcal{E}[v(S)]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big].\end{equation}

This lemma is analogous to Lemma 1.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with linear expectation $\mathbb{E}$ replaced by $\mathbb{F}$ -expectation $\mathcal{E}$ . For the reader’s convenience, we give a short proof here.

Proof. Fix $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ . For any $\tau^i\in\mathcal{S}_S$ such that $\lambda v\big(\tau^i\big)\leq X\big(\tau^i\big)$ , $i=1,2$ , it is easy to check that the stopping time $\tau$ defined by $\tau=\tau^1\wedge \tau^2$ preserves the same property as $\tau^i$ . Hence, there exists a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S$ with $\lambda v(\tau_n)\leq X(\tau_n)$ , such that $\tau_n\downarrow \tau^\lambda(S)$ . By the monotonicity and positive homogeneity, we have $\lambda\mathcal{E}[v(\tau_n)]\leq \mathcal{E}[X(\tau_n)]$ for any $n\in\mathbb{N}$ . Letting n go to infinity and applying the RC $\mathcal{E}$ property of v and X, we obtain

(3.6) \begin{equation}\lambda \mathcal{E}\big[v\big(\tau^\lambda(S)\big)\big]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big].\end{equation}

We claim that, for each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , the stopping time $\tau^\lambda(S)$ satisfies

(3.7) \begin{equation} v(S)=\mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]. \end{equation}

Now, combining Equations (3.6) and (3.7), we obtain

\begin{align*}\lambda \mathcal{E}[v(S)]=\lambda \mathcal{E}\big[v\big(\tau^\lambda(S)\big)\big]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big]. \end{align*}

The proof is complete.

Proof of Equation (3.7). Note that Equation (3.7) is the same as Equation (1.11) in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with the classical conditional expectation $\mathbb{E}[{\cdot}|\mathcal{F}_S]$ replaced by the $\mathbb{F}$ -expectation $\mathcal{E}_S$ . Therefore, the proof is similar. For the convenience of the reader, we give a short proof here.

For simplicity, we denote $\mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]$ by $J^\lambda(S)$ . Recalling that $\{v(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, we have $J^\lambda(S)\leq v(S)$ . It remains to prove the reverse inequality.

We first claim that $\big\{J^\lambda(\tau),\tau\in\mathcal{S}_0\big\}$ is an $\mathcal{E}$ -supermartingale system. Indeed, let $S,S^{\prime}\in\mathcal{S}_0$ be such that $S\leq S^{\prime}$ . Noting that $\{v(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, it is easy to check that $S\leq \tau^\lambda(S)\leq \tau^\lambda(S^{\prime})$ and

\begin{align*} \mathcal{E}_S\big[J^\lambda\big(S^{\prime}\big)\big]=\mathcal{E}_S\big[v\big(\tau^\lambda\big(S^{\prime}\big)\big)\big]=\mathcal{E}_S\big[\mathcal{E}_{\tau^\lambda(S)}\big[v\big(\tau^\lambda\big(S^{\prime}\big)\big)\big]\big]\leq \mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]=J^\lambda(S),\end{align*}

where we have used the time-consistency in the first two equalities. Hence, the claim holds.

We then show that for any $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , we have $L^\lambda(S)\geq X(S)$ , where $L^\lambda(S)=\lambda v(S)+(1-\lambda)J^\lambda(S)$ . Indeed, by a simple calculation, we obtain that

\begin{align*} L^\lambda(S) & = \lambda v(S)+(1-\lambda)J^\lambda(S)I_{\{\tau^\lambda(S)=S\}}+(1-\lambda)J^\lambda(S)I_{\{\tau^\lambda(S)>S\}}\\ &=\lambda v(S)+(1-\lambda)\mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]I_{\{\tau^\lambda(S)=S\}}+(1-\lambda)J^\lambda(S)I_{\{\tau^\lambda(S)>S\}}\\&=\lambda v(S)+(1-\lambda)v(S)I_{\{\tau^\lambda(S)=S\}}+(1-\lambda)J^\lambda(S)I_{\{\tau^\lambda(S)>S\}}\\ & \geq v(S)I_{\{\tau^\lambda(S)=S\}}+\lambda v(S)I_{\{\tau^\lambda(S)>S\}}\\& \geq X(S)I_{\{\tau^\lambda(S)=S\}}+X(S)I_{\{\tau^\lambda(S)>S\}}=X(S), \end{align*}

where the third equality is obtained from the zero–one law, in the first inequality we used that $J^\lambda(S)\geq 0$ , and the last inequality follows from $v(S)\geq X(S)$ and the definition of $\tau^\lambda(S)$ .

Since the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies (H6) and (H7), it is easy to check that $\big\{L^\lambda(\tau),\tau\in\mathcal{S}_0\big\}$ is an $\mathcal{E}$ -supermartingale system. By Proposition 3.1, we have $L^\lambda(S)\geq v(S)$ , which, together with $v(S)<\infty$ obtained in Remark 3.2, implies that $J^\lambda(S)\geq v(S)$ . The above analysis completes the proof.

Remark 3.6. Clearly, an $\mathbb{F}$ -expectation that satisfies (H6)–(H7) is a ‘positively convex’ $\mathbb{F}$ -expectation (see Definition 3.1 in [Reference Bayraktar and Yao1]). In [Reference Bayraktar and Yao2], the optimal single stopping problem induced by some positively convex $\mathbb{F}$ -expectation can be solved. Note that our assumptions (H6)–(H7) are stronger than the property of positive convexity. This is mainly because our optimal stopping time is not defined trajectorially and we need to ensure that the crucial inequality (3.6) holds.

Now we state the main result of this section, which is analogous to the results in Theorem 1.1 of [Reference Kobylanski, Quenez and Rouy-Mironescu15], shown in the case of a classical expectation.

Theorem 3.1. Under the same assumptions as those of Lemma 3.1, for each $S\in\mathcal{S}_0$ , there exists an optimal stopping time for v(S) defined by (3.1). Furthermore, the stopping time

(3.8) \begin{equation}\tau^*(S)=\mathrm{ess\,inf}\{\tau\in\mathcal{S}_S\,:\, v(\tau)=X(\tau) \textrm{ a.s.}\}\end{equation}

is the minimal optimal stopping time for v(S).

Proof. The proof is the same as the proof of Theorem 1.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.
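
For the reader’s orientation, the main steps are as follows (a sketch of the argument of [Reference Kobylanski, Quenez and Rouy-Mironescu15], adapted to the present notation; the minimality of (3.8) requires an additional comparison that we do not reproduce). For $\lambda\leq\lambda^{\prime}$, the inclusion $\{\tau\in\mathcal{S}_S\,:\,\lambda^{\prime} v(\tau)\leq X(\tau)\}\subseteq\{\tau\in\mathcal{S}_S\,:\,\lambda v(\tau)\leq X(\tau)\}$ shows that $\lambda\mapsto\tau^\lambda(S)$ is nondecreasing, so the limit $\hat{\tau}(S)\,:\!=\,\lim_{\lambda\uparrow 1}\tau^\lambda(S)$ exists a.s. Letting $\lambda\uparrow 1$ in (3.5) and using the LC $\mathcal{E}$ property of the reward family along $\tau^\lambda(S)\uparrow\hat{\tau}(S)$ yields

\begin{align*}\mathcal{E}[v(S)]\leq \mathcal{E}\big[X\big(\hat{\tau}(S)\big)\big]\leq \mathcal{E}\big[v\big(\hat{\tau}(S)\big)\big]\leq \mathcal{E}[v(S)],\end{align*}

where the last inequality uses the $\mathcal{E}$ -supermartingale property of the value function together with time-consistency; by Proposition 3.2(c), $\hat{\tau}(S)$ is then optimal for v(S).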

Remark 3.7. Compared with the usual case, in which the optimal stopping time is defined trajectorially, our optimal stopping time is interpreted as the essential infimum, which makes it possible to relax the condition on the regularity of the reward family. For example, in [Reference Cheng and Riedel6], the reward $\{X_t\}_{t\in[0,T]}$ is assumed to be RCLL and LC $\mathcal{E}$ . The price for this weaker regularity condition is that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ must be positively homogeneous and sub-additive. These two assumptions on the $\mathbb{F}$ -expectation are mainly used to prove the existence of the optimal stopping time. For the properties which do not depend on the existence of the optimal stopping time, we may drop the positive homogeneity and sub-additivity of the $\mathbb{F}$ -expectation.

With the help of the existence of the optimal stopping time, we may establish the LC $\mathcal{E}$ property of the value function when the reward family is LC $\mathcal{E}$ , which is analogous to Proposition 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15].

Proposition 3.4. Under the same assumptions as those of Theorem 3.1, the value function $\{v(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ .

Proof. The proof is similar to the proof of Proposition 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.

Remark 3.8. (i) Let $\{S_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ be such that $S_n\uparrow S$ , where S is a stopping time. By an analysis similar to that of Remark 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], we have

\begin{align*}\tau^*(S)=\lim_{n\rightarrow\infty}\tau^*(S_n);\end{align*}

that is, the mapping $S\mapsto \tau^*(S)$ is left-continuous along stopping times.

(ii) Suppose that the family $\{X(\tau),\tau\in\mathcal{S}_0\}$ in Proposition 3.4 is only left-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ (i.e., if $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_\sigma$ and $\tau_n\uparrow\tau$ , then we have $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ ). If, for any $S\in\mathcal{S}_0$ , the optimal stopping time $\tau^*(S)$ defined by (3.8) is no less than $\sigma$ , then the value function $\{v(S),S\in\mathcal{S}_0\}$ is still LC $\mathcal{E}$ .

In the remainder of this section, suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7). Now, given an admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ , for each fixed $\theta\in\mathcal{S}_0$ we define the following random variable:

\begin{align*} X^{\prime}(\tau)=X(\tau)I_{\{\tau\geq \theta\}}-I_{\{\tau<\theta\}}.\end{align*}

Then, for each $\tau\in\mathcal{S}_0$ , $X^{\prime}(\tau)$ is $\mathcal{F}_\tau$ -measurable and bounded from below, and

\begin{align*}\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X^{\prime}(\tau)|]<\infty.\end{align*}
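
Indeed (a one-line check, not in the original text), $|X^{\prime}(\tau)|\leq X(\tau)+1$, so by monotonicity and translation-invariance,

\begin{align*}\mathcal{E}[|X^{\prime}(\tau)|]\leq \mathcal{E}[X(\tau)+1]=\mathcal{E}[X(\tau)]+1\leq \sup_{\sigma\in\mathcal{S}_0}\mathcal{E}[X(\sigma)]+1<\infty.\end{align*}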

In addition, $X^{\prime}(\tau)=X^{\prime}(\sigma)$ on the set $\{\tau=\sigma\}$ . Let us define

\begin{align*} v^{\prime}(S)=\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[X^{\prime}(\tau)].\end{align*}

Note that all the properties of $\mathcal{E}$ hold for random variables which are bounded from below (see Remark 2.2). Then all the results in Proposition 3.1, Proposition 3.2, and Remark 3.2 still hold if we replace X and v by $X^{\prime}$ and $v^{\prime}$, respectively. Furthermore, if the original admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ , by Remark 3.4, the family $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ . The following theorem indicates that there exists an optimal stopping time for $v^{\prime}(S)$, and that the family $\{v^{\prime}(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ (not only left-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ ), provided that the family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is C $\mathcal{E}$ .

Theorem 3.2. Let the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfy all the assumptions (H0)–(H7), and let $\{X(\tau),\tau\in\mathcal{S}_0\}$ be a C $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . For each $S\in\mathcal{S}_0$ , there exists an optimal stopping time for $v^{\prime}(S)$. Furthermore, the stopping time

\begin{align*}\tau^{\prime}(S)=\mathrm{ess\,inf}\{\tau\in\mathcal{S}_S\,:\, v^{\prime}(\tau)=X^{\prime}(\tau) \textrm{ a.s.}\}\end{align*}

is the minimal optimal stopping time for $v^{\prime}(S)$, and the value function $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ .

Proof. For any $\lambda\in(0,1)$ , we define a random variable $\tau^{\prime,\lambda}(S)$ by

\begin{align*}\tau^{\prime,\lambda}(S)=\mathrm{ess\,inf}\big\{\tau\in \mathcal{S}_S\,:\, \lambda v^{\prime}(\tau)\leq X^{\prime}(\tau), \textrm{ a.s.}\big\}.\end{align*}

Since, for any $S\in\mathcal{S}_0$ , we have $v^{\prime}(S)\geq \mathcal{E}_S[X(T)]\geq 0$ , any $\tau\in\big\{\tau\in \mathcal{S}_S\,:\, \lambda v^{\prime}(\tau)\leq X^{\prime}(\tau), \textrm{ a.s.}\big\}$ must satisfy $\tau\geq \theta$ a.s.; indeed, on $\{\tau<\theta\}$ we would have $X^{\prime}(\tau)=-1<0\leq \lambda v^{\prime}(\tau)$ . Therefore, we obtain that $\tau^{\prime,\lambda}(S)\geq \theta$ . It follows that for any fixed $S\in\mathcal{S}_0$ and any $\tau\geq \tau^{\prime,\lambda}(S)$ , we have $X^{\prime}(\tau)=X(\tau)$ . Modifying the proofs of Lemma 3.1, Theorem 3.1, and Proposition 3.4, we finally obtain the desired result.

4. The optimal double stopping problem under nonlinear expectation

In this section, we consider the optimal double stopping problem under the $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). We first introduce the definition of the appropriate reward family.

Definition 4.1. The family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be biadmissible if it has the following properties:

  1. (1) for all $\tau,\sigma\in\mathcal{S}_0$ , $X(\tau,\sigma)\in \textrm{Dom}^+_{\tau\vee\sigma}(\mathcal{E})$ ;

  2. (2) for all $\tau,\sigma,\tau^{\prime},\sigma^{\prime}\in\mathcal{S}_0$ , $X(\tau,\sigma)=X(\tau^{\prime},\sigma^{\prime})$ on the set $\{\tau=\tau^{\prime}\}\cap \{\sigma=\sigma^{\prime}\}$ .

Now, suppose we are given a biadmissible reward family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ such that $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the corresponding value function is defined as follows:

(4.1) \begin{equation} v(S)=\mathop{\mathrm{ess\,sup}}\limits_{\tau_1,\tau_2\in\mathcal{S}_S}\mathcal{E}_S[X(\tau_1,\tau_2)]. \end{equation}

Similarly to the case of the single optimal stopping problem, we have the following properties.

Proposition 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau_1,\tau_2\in\mathcal{S}_0}\mathcal{E}[X(\tau_1,\tau_2)]<\infty$ ; then the value function $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1) satisfies the following properties:

  1. (i) for each $S\in\mathcal{S}_0$ , there exists a sequence of pairs of stopping times $\big\{\big(\tau_1^n,\tau_2^n\big)\big\}_{n\in\mathbb{N}}\subset \mathcal{S}_S\times\mathcal{S}_S$ such that $\mathcal{E}_S\big[X\big(\tau_1^n,\tau_2^n\big)\big]$ converges monotonically up to v(S);

  2. (ii) $\{v(S),S\in\mathcal{S}_0\}$ is an admissible family;

  3. (iii) $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system;

  4. (iv) for each $S\in\mathcal{S}_0$ , we have

    \begin{align*} \mathcal{E}[v(S)]=\sup_{\tau,\sigma\in\mathcal{S}_S}\mathcal{E}[X(\tau,\sigma)].\end{align*}

Proof. The proof is similar to the proof of Proposition 2.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]. The main difference is in the method of proving that $v(S)\in \textrm{Dom}^+(\mathcal{E})$ . In fact, the measurability and nonnegativity follow from the definition of v(S). By (i), we have $v(S)=\lim_{n\rightarrow\infty}\mathcal{E}_S\big[X\big(\tau_1^n,\tau_2^n\big)\big]$ . Because

\begin{align*} \liminf_{n\rightarrow\infty}\mathcal{E}\big[\mathcal{E}_S\big[X\big(\tau_1^n,\tau_2^n\big)\big]\big]\leq \sup_{\tau_1,\tau_2\in\mathcal{S}_0}\mathcal{E}[X(\tau_1,\tau_2)]<\infty, \end{align*}

Assumption (H5) implies that $v(S)\in \textrm{Dom}^+(\mathcal{E})$ .

Remark 4.1. (i) Under the integrability condition $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ and (iv) in Proposition 4.1, we conclude that $v(S)<\infty$ , a.s.

(ii) If Assumption (H5) does not hold, we need to assume furthermore that $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau,\sigma\in\mathcal{S}_0}X(\tau,\sigma)\in\textrm{Dom}(\mathcal{E})$ in order to ensure that Proposition 4.1 still holds.

In the following, we will show that the value function defined by (4.1) coincides with the value function of the single stopping problem corresponding to a new reward family. Motivated by the results in [Reference Kobylanski, Quenez and Rouy-Mironescu15] (cf. (2.2) and (2.3) in [Reference Kobylanski, Quenez and Rouy-Mironescu15] for the definitions of $u_1$ , $u_2$ and the new reward), for each $\tau\in\mathcal{S}_0$ we define

(4.2) \begin{equation} u_1(\tau)=\mathop{\mathrm{ess\,sup}}\limits_{\tau_1\in\mathcal{S}_\tau}\mathcal{E}_\tau[X(\tau_1,\tau)], \qquad u_2(\tau)=\mathop{\mathrm{ess\,sup}}\limits_{\tau_2\in\mathcal{S}_\tau}\mathcal{E}_\tau[X(\tau,\tau_2)], \end{equation}

and

(4.3) \begin{equation} \widetilde{X}(\tau)=\max\{u_1(\tau),u_2(\tau)\}. \end{equation}

The first observation is that the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ is admissible.

Lemma 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau,\sigma}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family defined by (4.3) is admissible and we have $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}\big[\widetilde{X}(\tau)\big]<\infty$ .

Proof. It is sufficient to prove that $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is admissible. Similarly as in the proof of Proposition 4.1, $u_1(\tau)$ is $\mathcal{F}_\tau$ -measurable and $u_1(\tau)\in\textrm{Dom}^+(\mathcal{E})$ . For each fixed $\tau,\sigma\in\mathcal{S}_0$ , set $A=\{\tau=\sigma\}$ and $\theta^A=\theta I_A+T I_{A^c}$ , where $\theta\in\mathcal{S}_\tau$ . It is easy to check that $A\in\mathcal{F}_{\tau\wedge\sigma}$ , $\theta^A\in\mathcal{S}_\sigma$ and

\begin{align*}\mathcal{E}_\tau[X(\theta,\tau)]I_A&=\mathcal{E}_\tau[X(\theta,\tau)I_A]=\mathcal{E}_\tau\big[X\big(\theta^A,\sigma\big)I_A\big]\\ &=\mathcal{E}_\tau\big[X\big(\theta^A,\sigma\big)\big]I_A=\mathcal{E}_\sigma\big[X\big(\theta^A,\sigma\big)\big]I_A\leq u_1(\sigma)I_A. \end{align*}

Taking the essential supremum over all $\theta\in\mathcal{S}_\tau$ implies that $u_1(\tau)\leq u_1(\sigma)$ on A. By symmetry, we have $u_1(\sigma)\leq u_1(\tau)$ on A. Therefore, $u_1(\tau)I_A=u_1(\sigma)I_A$ .

It is easy to verify that $0\leq \widetilde{X}(\tau)\leq v(\tau)$ . By Proposition 4.1, we have

\begin{align*}\sup_{\tau\in\mathcal{S}_0}\mathcal{E}\big[\widetilde{X}(\tau)\big]\leq \sup_{\tau\in\mathcal{S}_0}\mathcal{E}[v(\tau)]\leq \sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty. \end{align*}

The next theorem states that $\{v(S),S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system such that $v(S)\geq \widetilde{X}(S)$ , for any $S\in\mathcal{S}_0$ . In other words, v(S) corresponds to the value function u(S) associated with the reward family $\{\widetilde{X}(S),S\in\mathcal{S}_0\}$ , where

(4.4) \begin{equation} u(S)=\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[\widetilde{X}(\tau)\big]. \end{equation}

Theorem 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau_1,\tau_2\in\mathcal{S}_0}\mathcal{E}[X(\tau_1,\tau_2)]<\infty$ . Then, for each stopping time $S\in\mathcal{S}_0$ , we have $v(S)=u(S)$ .

Proof. The proof is similar to the proof of Theorem 2.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with the classical expectation $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.
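
To see heuristically why $v(S)\leq u(S)$ (a sketch only, not the omitted proof), fix $(\tau_1,\tau_2)\in\mathcal{S}_S\times\mathcal{S}_S$ and set $\theta=\tau_1\wedge\tau_2$. On $\{\tau_1\leq\tau_2\}$ one has $\theta=\tau_1$ and, after a localization argument of the type used in the proof of Lemma 4.1, $\mathcal{E}_\theta[X(\tau_1,\tau_2)]\leq u_2(\theta)$; symmetrically, $\mathcal{E}_\theta[X(\tau_1,\tau_2)]\leq u_1(\theta)$ on $\{\tau_1>\tau_2\}$. Hence $\mathcal{E}_\theta[X(\tau_1,\tau_2)]\leq \widetilde{X}(\theta)$ a.s., and time-consistency together with monotonicity gives

\begin{align*}\mathcal{E}_S[X(\tau_1,\tau_2)]=\mathcal{E}_S\big[\mathcal{E}_\theta[X(\tau_1,\tau_2)]\big]\leq \mathcal{E}_S\big[\widetilde{X}(\theta)\big]\leq u(S).\end{align*}

Taking the essential supremum over $(\tau_1,\tau_2)$ yields $v(S)\leq u(S)$; the reverse inequality is obtained by approximating $\widetilde{X}(\tau)$ by $\mathcal{E}_\tau[X(\tau_1,\tau_2)]$ for suitably chosen pairs, as in [Reference Kobylanski, Quenez and Rouy-Mironescu15].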

With the help of the characterization of the value function v stated in Theorem 4.1, we may construct the optimal stopping times for either the multiple problem (4.1) or the single problem (4.2), (4.4) if we obtain the optimal stopping times for one of the problems.

Proposition 4.2. Fix $S\in\mathcal{S}_0$ . Suppose that $\big(\tau_1^*,\tau_2^*\big)\in\mathcal{S}_S\times\mathcal{S}_S$ is optimal for v(S). Then we have the following:

  1. (1) $\tau_1^*\wedge\tau_2^*$ is optimal for u(S);

  2. (2) $\tau_2^*$ is optimal for $u_2\big(\tau_1^*\big)$ on the set A;

  3. (3) $\tau_1^*$ is optimal for $u_1\big(\tau_2^*\big)$ on the set $A^c$ ,

where $A=\big\{\tau_1^*\leq \tau_2^*\big\}$ . On the other hand, suppose that the stopping times $\theta^*,\theta_i^*$ , $i=1,2$ , satisfy the following conditions:

  1. (i) $\theta^*$ is optimal for u(S);

  2. (ii) $\theta_1^*$ is optimal for $u_1(\theta^*)$ ;

  3. (iii) $\theta_2^*$ is optimal for $u_2(\theta^*)$ .

Set

(4.5) \begin{equation}\sigma^*_1=\theta^* I_B+\theta_1^* I_{B^c},\ \sigma_2^*=\theta_2^* I_B+\theta^* I_{B^c},\end{equation}

where $B=\{u_1(\theta^*)\leq u_2(\theta^*)\}$ . Then the pair $\big(\sigma^*_1,\sigma^*_2\big)$ is optimal for v(S).

Proof. The proof is similar to the proof of Proposition 2.4 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.
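
As a heuristic check of the converse construction (a sketch, not the omitted proof), note that on B one has $\widetilde{X}(\theta^*)=u_2(\theta^*)=\mathcal{E}_{\theta^*}[X(\theta^*,\theta_2^*)]$ and $(\sigma_1^*,\sigma_2^*)=(\theta^*,\theta_2^*)$, while on $B^c$ one has $\widetilde{X}(\theta^*)=u_1(\theta^*)=\mathcal{E}_{\theta^*}[X(\theta_1^*,\theta^*)]$ and $(\sigma_1^*,\sigma_2^*)=(\theta_1^*,\theta^*)$. The zero–one law and the biadmissibility of X then give $\widetilde{X}(\theta^*)=\mathcal{E}_{\theta^*}[X(\sigma_1^*,\sigma_2^*)]$, and therefore, by Theorem 4.1, the optimality of $\theta^*$, and time-consistency,

\begin{align*} v(S)=u(S)=\mathcal{E}_S\big[\widetilde{X}(\theta^*)\big]=\mathcal{E}_S\big[\mathcal{E}_{\theta^*}[X(\sigma_1^*,\sigma_2^*)]\big]=\mathcal{E}_S[X(\sigma_1^*,\sigma_2^*)],\end{align*}

so that the pair $\big(\sigma_1^*,\sigma_2^*\big)$ is indeed optimal for v(S).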

By Proposition 4.2, in order to obtain the multiple optimal stopping times for v(S) defined by (4.1), it is sufficient to derive the optimal stopping times for the auxiliary single stopping problems (4.2) and (4.4). For this purpose, according to Theorem 3.1, we need to study some regularity results for $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ . Before establishing this property, we first introduce the definition of continuity for a biadmissible family.

Definition 4.2. A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if, for any $\tau,\sigma\in\mathcal{S}_0$ and any sequences $\{\tau_n\}_{n\in\mathbb{N}}, \{\sigma_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ such that $\tau_n\downarrow \tau$ , $\sigma_n\downarrow\sigma$ (resp., $\tau_n\uparrow \tau$ , $\sigma_n\uparrow\sigma$ ), one has $\mathcal{E}[X(\tau,\sigma)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n,\sigma_n)]$ .

By a proof similar to that of Proposition 3.3, we have the following regularity result.

Proposition 4.3. If the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ , then the family $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1) is RC $\mathcal{E}$ .

The regularity of the new reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ requires some strong continuity of the biadmissible family. Because of the nonlinearity of the expectation, the definition is slightly different from Definition 2.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15].

Definition 4.3. A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be uniformly right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [URC $\mathcal{E}$ (resp., ULC $\mathcal{E}$ )] if, for any $\sigma\in\mathcal{S}_0$ and any sequence $\{\sigma_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ such that $\sigma_n\downarrow\sigma$ (resp., $\sigma_n\uparrow\sigma$ ), one has

\begin{align*}&\lim_{n\rightarrow\infty}\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X(\tau,\sigma)-X(\tau,\sigma_n)|]=0,\\&\lim_{n\rightarrow\infty}\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X(\sigma,\tau)-X(\sigma_n,\tau)|]=0.\end{align*}

Furthermore, the biadmissible family is said to be uniformly continuous along stopping times in $\mathcal{E}$ -expectation (UC $\mathcal{E}$ ) if it is both URC $\mathcal{E}$ and ULC $\mathcal{E}$ .

Definition 4.4. An $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is said to be dominated by another $\mathbb{F}$ -expectation $\big(\widetilde{\mathcal{E}},\textrm{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ if $\textrm{Dom}(\mathcal{E})\subset \textrm{Dom}\big(\widetilde{\mathcal{E}}\big)$ and for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in \textrm{Dom}(\mathcal{E})$ , one has

\begin{align*} \mathcal{E}_\tau[\xi+\eta]-\mathcal{E}_\tau[\eta]\leq \widetilde{\mathcal{E}}_\tau[\xi]. \end{align*}

Remark 4.2. From the requirements on the domain of $\mathcal{E}$ (see Definition 2.1 and Assumptions (H3)–(H5)), we may not conclude that $\xi-\eta\in \textrm{Dom}(\mathcal{E})$ for any $\xi,\eta\in\textrm{Dom}(\mathcal{E})$ . Therefore, the above definition of dominance cannot be written as

\begin{align*}\mathcal{E}_\tau[\xi]-\mathcal{E}_\tau[\eta]\leq \widetilde{\mathcal{E}}_\tau[\xi-\eta].\end{align*}

However, if $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},\textrm{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ , then for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in \textrm{Dom}^c(\mathcal{E})$ we have

(4.6) \begin{equation}|\mathcal{E}_\tau[\xi]-\mathcal{E}_\tau[\eta]|\leq \widetilde{\mathcal{E}}_\tau[|\xi-\eta|]. \end{equation}

First, if $\xi\in\textrm{Dom}^c(\mathcal{E})$ , by (D2) in Definition 2.1 and Assumption (H4), we have $\xi-c\in \textrm{Dom}^+(\mathcal{E})$ . Since $0\leq |\xi-\eta|=|(\xi-c)-(\eta-c)|\leq \xi+\eta-2c$ , by (D2) and (D3), it follows that $|\xi-\eta|\in\textrm{Dom}(\mathcal{E})$ . It is easy to check that

\begin{align*}\mathcal{E}_\tau[\xi]-\mathcal{E}_\tau[\eta]\leq \mathcal{E}_\tau[\eta+|\xi-\eta|]-\mathcal{E}_\tau[\eta]\leq \widetilde{\mathcal{E}}_\tau[|\xi-\eta|].\end{align*}

By the symmetry of $\xi$ and $\eta$ , we obtain Equation (4.6).

Example 4.1.

(1) If for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in\textrm{Dom}(\mathcal{E})$ , $\mathcal{E}_\tau[\xi+\eta]\leq \mathcal{E}_\tau[\xi]+\mathcal{E}_\tau[\eta]$ , the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is dominated by itself. In particular, $\big(\{\mathbb{E}_t[{\cdot}]\}_{t\in[0,T]}, L^1(\mathcal{F}_T)\big)$ is dominated by itself.

(2) For a generator g with Lipschitz constant $\kappa$ , the g-expectation $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ is dominated by $\big(\big\{\mathcal{E}^{\tilde{g}}_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ , where $\tilde{g}(t,z)=\kappa|z|$ .
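
The following minimal numerical sketch is purely illustrative and not part of the analysis: it approximates the g-expectations of Example 4.1(2) by the standard explicit backward scheme on a symmetric binomial tree and checks the domination inequality of Definition 4.4 at $\tau=0$ . The driver g, the terminal payoffs $\xi,\eta$ , and all grid parameters are assumptions chosen only for the sketch; the scheme is merely a discrete-time proxy for the g-expectation.

\begin{verbatim}
import numpy as np

# Explicit backward scheme for a g-expectation on a symmetric binomial tree
# with increments +/- sqrt(dt); all parameters below are illustrative.
N, T, kappa = 200, 1.0, 0.5
dt = T / N
sq = np.sqrt(dt)

def g_expectation(terminal, driver):
    """E^g_0[terminal] via Y = mean(Y') + driver(Z) * dt on the tree."""
    y = terminal.copy()
    for _ in range(N):
        z = (y[:-1] - y[1:]) / (2.0 * sq)   # proxy for the martingale part
        y = 0.5 * (y[:-1] + y[1:]) + driver(z) * dt
    return y[0]

W_T = sq * (N - 2 * np.arange(N + 1))       # terminal values of the walk
xi = np.maximum(W_T, 0.0)                   # two illustrative payoffs >= 0
eta = 1.0 + np.cos(W_T)

g = lambda z: 0.3 * np.sin(z) + 0.2 * z     # Lipschitz constant <= kappa
g_tilde = lambda z: kappa * np.abs(z)       # the dominating driver kappa*|z|

lhs = g_expectation(xi + eta, g) - g_expectation(eta, g)
rhs = g_expectation(xi, g_tilde)
print(lhs <= rhs + 1e-12, lhs, rhs)         # domination at tau = 0
\end{verbatim}

In this discrete scheme the inequality can be checked to propagate backward step by step whenever $\kappa\sqrt{\Delta t}\leq 1$ , in the spirit of the comparison argument behind Example 4.1(2).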

Theorem 4.2. Let $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ be an $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC $\widetilde{\mathcal{E}}$ with $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (4.3) is RC $\mathcal{E}$ .

Proof. By the definition of $\widetilde{X}$ , we only need to prove that the family $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ . Let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\downarrow\theta$ . Since $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC $\widetilde{\mathcal{E}}$ , by Equation (4.6), for any $\tau,\sigma\in\mathcal{S}_0$ and any sequence $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ with $\tau_n\downarrow\tau$ , we have

\begin{align*} \lim_{n\rightarrow\infty}|\mathcal{E}[X(\tau_n,\sigma)]-\mathcal{E}[X(\tau,\sigma)]|\leq \lim_{n\rightarrow\infty}\widetilde{\mathcal{E}}[|X(\tau_n,\sigma)-X(\tau,\sigma)|]=0. \end{align*}

It follows that for each fixed $\sigma\in\mathcal{S}_0$ , the family $\{X(\tau,\sigma),\tau\in\mathcal{S}_\sigma\}$ is admissible and right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ . (It is important to note that the whole family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ may not be admissible, since $X(\tau,\sigma)$ is $\mathcal{F}_\sigma$ -measurable rather than $\mathcal{F}_\tau$ -measurable if $\tau\leq \sigma$ .) By Proposition 3.3 and Remark 3.4, we obtain that the family $\{U_1(S,\theta),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ , where

(4.7) \begin{equation} U_1(S,\theta)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{S}}\mathcal{E}_S[X(\tau_1,\theta)]. \end{equation}

That is, $\lim_{n\rightarrow\infty}\mathcal{E}[U_1(\theta_n,\theta)]=\mathcal{E}[U_1(\theta,\theta)]$ .

Now we state the following lemma, whose proof appears after the conclusion of the current argument.

Lemma 4.2. For any stopping times $\tau,\sigma_1,\sigma_2$ , we have

\begin{align*} |\mathcal{E}[U_1(\tau,\sigma_1)]-\mathcal{E}[U_1(\tau,\sigma_2)]|\leq \sup_{S\in\mathcal{S}_0}\widetilde{\mathcal{E}}[|X(S,\sigma_1)-X(S,\sigma_2)|].\end{align*}

Using this lemma, by the URC $\widetilde{\mathcal{E}}$ property of $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ , as n goes to infinity, we obtain that

\begin{align*}|\mathcal{E}[U_1(\theta_n,\theta)]-\mathcal{E}[U_1(\theta_n,\theta_n)]|\leq \sup_{S\in\mathcal{S}_0}\widetilde{\mathcal{E}}[|X(S,\theta)-X(S,\theta_n)|]\rightarrow 0.\end{align*}

The above analysis indicates that

\begin{align*} &\lim_{n\rightarrow\infty}|\mathcal{E}[u_1(\theta)]-\mathcal{E}[u_1(\theta_n)]|=\lim_{n\rightarrow\infty}|\mathcal{E}[U_1(\theta,\theta)]-\mathcal{E}[U_1(\theta_n,\theta_n)]|\\ \leq &\lim_{n\rightarrow\infty}|\mathcal{E}[U_1(\theta,\theta)]-\mathcal{E}[U_1(\theta_n,\theta)]|+\lim_{n\rightarrow\infty}|\mathcal{E}[U_1(\theta_n,\theta)]-\mathcal{E}[U_1(\theta_n,\theta_n)]|=0.\end{align*}

The proof is complete.

Proof of Lemma 4.2. By an analysis similar to the one in the proof of Proposition 3.1, for each fixed $\tau\in\mathcal{S}_0$ there exists a sequence of stopping times $\{S_m\}_{m\in\mathbb{N}}\subset \mathcal{S}_{\tau}$ such that

\begin{align*}\widetilde{\mathcal{E}}_{\tau}[|X(S_m,\sigma_1)-X(S_m,\sigma_2)|]\uparrow \mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau}}\widetilde{\mathcal{E}}_{\tau}[|X(\tau_1,\sigma_1)-X(\tau_1,\sigma_2)|]. \end{align*}

By a simple calculation, we have

\begin{align*}|\mathcal{E}[U_1(\tau,\sigma_1)]-\mathcal{E}[U_1(\tau,\sigma_2)]|& \leq \widetilde{\mathcal{E}}\Big[\Big|\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau}}\mathcal{E}_{\tau}[X(\tau_1,\sigma_1)]-\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau}}\mathcal{E}_{\tau}[X(\tau_1,\sigma_2)]\Big|\Big]\\&\leq \widetilde{\mathcal{E}}\Big[\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau}}\Big|\mathcal{E}_{\tau}[X(\tau_1,\sigma_1)]-\mathcal{E}_{\tau}[X(\tau_1,\sigma_2)]\Big|\Big]\\&\leq \widetilde{\mathcal{E}}\Big[\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau}}\widetilde{\mathcal{E}}_{\tau}\big[\big|X(\tau_1,\sigma_1)-X(\tau_1,\sigma_2)\big|\big]\Big]\\&\leq \liminf_{m\rightarrow\infty}\widetilde{\mathcal{E}}[|X(S_m,\sigma_1)-X(S_m,\sigma_2)|]\\&\leq \sup_{S\in\mathcal{S}_0}\widetilde{\mathcal{E}}[|X(S,\sigma_1)-X(S,\sigma_2)|].\end{align*}

The proof is complete.

The main difficulty is to prove the LC $\mathcal{E}$ property of the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , because of some measurability issues. More precisely, let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\uparrow\theta$ . We need to prove that $\lim_{n\rightarrow\infty}\mathcal{E}[u_1(\theta_n)]=\mathcal{E}[u_1(\theta)]$ . However, we cannot follow the proof of the RC $\mathcal{E}$ property in Theorem 4.2. The problem is that the relation $\lim_{n\rightarrow\infty}\mathcal{E}[U_1(\theta_n,\theta)]=\mathcal{E}[U_1(\theta,\theta)]=\mathcal{E}[u_1(\theta)]$ may not hold, where $U_1$ is given by (4.7). Although $\{U_1(S,\theta),S\in\mathcal{S}_0\}$ can be interpreted as the value function associated with the family $\{X(\tau_1, \theta),\tau_1\in\mathcal{S}_0\}$ , we cannot apply Proposition 3.4 since the reward $\{X(\tau_1,\theta),\tau_1\in\mathcal{S}_0\}$ is not admissible. The main idea is to modify this reward slightly and then apply the LC $\mathcal{E}$ property of the modified reward family stated in Theorem 3.2.

Theorem 4.3. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC ${\mathcal{E}}$ with $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (4.3) is LC $\mathcal{E}$ .

Proof. By the definition of $\widetilde{X}$ , it suffices to prove that $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ . Let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\uparrow\theta$ . Now we define

\begin{align*}X^{\prime}(\tau,\theta)=X(\tau,\theta)I_{\{\tau\geq \theta\}}-I_{\{\tau<\theta\}}.\end{align*}

It is easy to check that for any $\tau\in\mathcal{S}_0$ , $X^{\prime}(\tau,\theta)$ is $\mathcal{F}_\tau$ -measurable and bounded from below, with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X^{\prime}(\tau,\theta)|]<\infty$ . Therefore, by Theorem 3.2, the value function $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ defined by

\begin{align*}v^{\prime}(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[X^{\prime}(\tau,\theta)]\end{align*}

is LC $\mathcal{E}$ . It follows that $\lim_{n\rightarrow\infty}\mathcal{E}[v^{\prime}(\theta_n)]=\mathcal{E}[v^{\prime}(\theta)]$ . By the definition of $X^{\prime}$ , it is easy to check that

\begin{align*}v^{\prime}(\theta)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_{\theta}}\mathcal{E}_\theta[X^{\prime}(\tau,\theta)]=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_{\theta}}\mathcal{E}_\theta[X(\tau,\theta)]=u_1(\theta),\end{align*}

which implies that $\lim_{n\rightarrow\infty}\mathcal{E}[v^{\prime}(\theta_n)]=\mathcal{E}[u_1(\theta)]$ . Note that for any $\tau\in\mathcal{S}_{\theta_n}$ , we have

\begin{align*}|X^{\prime}(\tau,\theta)-X(\tau,\theta_n)| & = |X(\tau,\theta)I_{\{\tau\geq \theta\}}-I_{\{\theta_n\leq \tau<\theta\}}-X(\tau,\theta_n)|\\& =|X(\tau,\theta)-X(\tau,\theta_n)|I_{\{\tau\geq \theta\}}+|1+X(\tau,\theta_n)|I_{\{\theta_n\leq \tau<\theta\}}\\&\leq |X(\tau,\theta)-X(\tau,\theta_n)|+|1+\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau,\sigma\in\mathcal{S}_0}X(\tau,\sigma)|I_{\{\theta_n<\theta\}}. \end{align*}

Set $\eta=1+\mathrm{ess\,sup}_{\tau,\sigma\in\mathcal{S}_0}X(\tau,\sigma)$ . By Lemma 2.1, we have $\eta\in\textrm{Dom}^+(\mathcal{E})$ . By an analysis similar to that in the proof of Lemma 4.2, we obtain that

\begin{align*}|\mathcal{E}[v^{\prime}(\theta_n)]-\mathcal{E}[u_1(\theta_n)]|&\leq \mathcal{E}\Big[\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_{\theta_n}}|X^{\prime}(\tau,\theta)-X(\tau,\theta_n)|\Big]\\&\leq \mathcal{E}\Big[\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_{\theta_n}}|X(\tau,\theta)-X(\tau,\theta_n)|\Big]+\mathcal{E}[\eta I_{A_n}], \end{align*}

where $A_n=\{\theta_n<\theta\}$ . For the first part of the right-hand side, it is easy to check that

\begin{align*}\mathcal{E}\Big[\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_{\theta_n}}|X(\tau,\theta)-X(\tau,\theta_n)|\Big]\leq \sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X(\tau,\theta)-X(\tau,\theta_n)|]\rightarrow 0, \quad \textrm{ as }n\rightarrow \infty.\end{align*}

Noting that $I_{A_n}\downarrow 0$ and $\{A_n\}_{n\in\mathbb{N}}\subset\mathcal{F}_T$ , by Assumption (H2), we obtain that $\lim_{n\rightarrow\infty}\mathcal{E}[\eta I_{A_n}]=0$ . Finally, we get that

\begin{align*}\lim_{n\rightarrow\infty}|\mathcal{E}[u_1(\theta)]-\mathcal{E}[u_1(\theta_n)]|\leq \lim_{n\rightarrow\infty}|\mathcal{E}[u_1(\theta)]-\mathcal{E}[v^{\prime}(\theta_n)]|+\lim_{n\rightarrow\infty}|\mathcal{E}[v^{\prime}(\theta_n)]-\mathcal{E}[u_1(\theta_n)]|=0.\end{align*}

The proof is complete.

Now we can establish the existence of optimal stopping times for the value function defined by (4.1).

Theorem 4.4. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC ${\mathcal{E}}$ . Then there exists a pair of optimal stopping times $\big(\tau_1^*,\tau_2^*\big)$ for the value function v(S) defined by (4.1).

Proof. The proof is similar to the proof of Theorem 2.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.

Since v defined by (4.1) coincides with the value function of the optimal single stopping problem with the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , by Propositions 3.3 and 3.4, $\{v(\tau),\tau\in\mathcal{S}_0\}$ is C $\mathcal{E}$ if $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ is C $\mathcal{E}$ .

Corollary 4.1. Under the same hypotheses as those of Theorem 4.4, the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ defined by (4.1) is C $\mathcal{E}$ .

Remark 4.3. By Proposition 3.3, the RC $\mathcal{E}$ property of $\{v(\tau),\tau\in\mathcal{S}_0\}$ does not depend on the existence of optimal stopping times. Thus, the conditions can be weakened to those of Theorem 4.2 to guarantee the RC $\mathcal{E}$ property of $\{v(\tau),\tau\in\mathcal{S}_0\}$ .

Remark 4.4. The optimal d-stopping time problem under nonlinear expectation is similar to the optimal d-stopping problem under classical expectation (cf. Section 3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]). We therefore only state the corresponding results, without proofs, in Appendix A.

Example 4.2. Application to swing options. Suppose that $T=\infty$ and the $\mathbb{F}$ -expectation satisfies all the assumptions (H0)–(H7). Recall that a swing option is a contract which gives its holder the right to exercise it more than once, with the exercise times separated by a fixed amount of time $\delta>0$ , called the refracting time. Now, consider the swing option with two exercise times. If the holder exercises it at a stopping time $\tau$ , then she will get the payoff $Y(\tau)$ . The objective of the holder is to obtain the maximum expected payoff, i.e., at each stopping time S,

\begin{align*}v(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1,\tau_2\in \mathcal{S}_S,\tau_1+\delta\leq \tau_2}\mathcal{E}_S[Y(\tau_1)+Y(\tau_2)].\end{align*}

It is easy to check that the value does not change if we interchange the roles of $\tau_1$ and $\tau_2$ ; mathematically,

\begin{align*}v(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1,\tau_2\in \mathcal{S}_S,\tau_2+\delta\leq \tau_1}\mathcal{E}_S[Y(\tau_1)+Y(\tau_2)].\end{align*}

By Equations (4.2) and (4.3), we have

\begin{align*}\widetilde{X}(\tau)=u_1(\tau)=u_2(\tau)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\sigma \in \mathcal{S}_{\tau+\delta}}\mathcal{E}_\tau[Y(\tau)+Y(\sigma)]=Y(\tau)+Z(\tau),\end{align*}

where $Z(\tau)=\mathrm{ess\,sup}_{\sigma \in \mathcal{S}_{\tau+\delta}}\mathcal{E}_\tau[Y(\sigma)]$ . If $\{Y(\tau),\tau\in\mathcal{S}_0\}$ is an admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[Y(\tau)]<\infty$ , then by Theorem 4.1, we have $v(S)=u(S)$ , where

\begin{align*}u(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[\widetilde{X}(\tau)\big]=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[Y(\tau)+Z(\tau)].\end{align*}

This result is similar to Proposition 3.2 in [Reference Carmona and Dayanik4].

If we additionally assume that the admissible family $\{Y(\tau),\tau\in \mathcal{S}_0\}$ is continuous along stopping times (cf. Definition 5.1), by Proposition 2.3 and noting that $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}Y(\tau)\in \textrm{Dom}(\mathcal{E})$ , we have that the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC $\mathcal{E}$ , where $X(\tau,\sigma)=Y(\tau)+Y(\sigma)$ . Let $\tau_1^*$ be the minimal optimal stopping time for the value function u(S), and let $\tau_2^*$ be the minimal optimal stopping time for the value function $Z\big(\tau_1^*\big)$ . By Theorem 4.4, the pair $\big(\tau_1^*,\tau_2^*\big)$ is optimal for v(S); this is similar to Proposition 5.4 in [Reference Carmona and Dayanik4].
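
To make this reduction concrete, the following minimal discrete-time sketch prices a swing option with two exercise rights and refracting period $\delta$ via the reward $Y(\tau)+Z(\tau)$ , but under the classical linear expectation and on a finite-horizon binomial lattice. The payoff $Y_t=(S_t-K)^+$ , the lattice parameters, the zero interest rate, and the convention that a second exercise is forfeited when fewer than $\delta$ steps remain are all illustrative assumptions and are not part of the example above.

\begin{verbatim}
import numpy as np

# Two-right swing option with refracting period delta on a binomial lattice,
# priced under the classical linear expectation via the reduction v = Snell
# envelope of Y + Z.
N, delta = 50, 5                      # time steps and refracting period
up, dn, p = 1.05, 0.95, 0.5           # binomial factors and probability
S0, K = 100.0, 100.0

S = [S0 * up ** np.arange(t, -1, -1) * dn ** np.arange(0, t + 1)
     for t in range(N + 1)]           # S[t][j]: price after j down-moves
Y = [np.maximum(s - K, 0.0) for s in S]

def cond_exp(nxt):
    """One-step conditional expectation on the lattice."""
    return p * nxt[:-1] + (1 - p) * nxt[1:]

# Single-right value V1_t = ess sup_{sigma >= t} E_t[Y_sigma] (Snell envelope).
V1 = [None] * (N + 1)
V1[N] = Y[N].copy()
for t in range(N - 1, -1, -1):
    V1[t] = np.maximum(Y[t], cond_exp(V1[t + 1]))

# Z_t = E_t[V1_{t+delta}] = ess sup_{sigma >= t+delta} E_t[Y_sigma],
# set to zero when a second exercise is no longer feasible.
Z = [None] * (N + 1)
for t in range(N, -1, -1):
    if t + delta > N:
        Z[t] = np.zeros(t + 1)
    else:
        w = V1[t + delta]
        for _ in range(delta):        # iterate the one-step expectation
            w = cond_exp(w)
        Z[t] = w

# New single-stopping reward Xtilde = Y + Z and its Snell envelope v.
Xt = [Y[t] + Z[t] for t in range(N + 1)]
v = [None] * (N + 1)
v[N] = Xt[N].copy()
for t in range(N - 1, -1, -1):
    v[t] = np.maximum(Xt[t], cond_exp(v[t + 1]))

print("two-right swing value at time 0:", v[0][0])
\end{verbatim}

In this linear setting the first right is exercised at the first time $v_t=Y_t+Z_t$ , and the second at the first time after the refracting period at which the single-right Snell envelope meets Y, in line with the construction of $\big(\tau_1^*,\tau_2^*\big)$ above.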

5. Aggregation of the optimal multiple stopping problem

We first recall some basic results from [Reference El Karoui and Quenez9]. Let $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]}, L^2(\mathcal{F}_T)\big)$ be the g-expectation satisfying the assumptions in Example 2.2. Now, given an adapted, nonnegative process $\{X_t\}_{t\in[0,T]}$ which has continuous sample paths with $\mathbb{E}\big[\!\sup_{t\in[0,T]}X_t^2\big]<\infty$ , the value function is defined by

\begin{align*}v_t^g=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_t}\mathcal{E}_t^g[X_\tau].\end{align*}

By Proposition 5.5 in [Reference El Karoui and Quenez9], the first hitting time

\begin{align*}\tau^*=\inf\!\big\{t\geq 0\,:\, v_t^g=X_t \big\}\end{align*}

is an optimal stopping time (a similar result can be found in [Reference Cheng and Riedel6]). This formulation makes it efficient to compute an optimal stopping time.
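
On a discretized path, once (aggregated) sample paths of the value process and the reward process are available, this first hitting time is immediate to evaluate. The following tiny sketch uses made-up arrays purely for illustration; since the value process dominates the reward, equality is detected as $v_t\leq X_t$ up to a tolerance, and the horizon index is returned if the set is empty.

\begin{verbatim}
import numpy as np

def first_hitting(v, X, S=0, tol=1e-12):
    """tau* = inf{t >= S : v_t = X_t} along one discretized path."""
    hits = np.where(v[S:] <= X[S:] + tol)[0]    # v >= X, so "=" reads "<="
    return S + int(hits[0]) if hits.size else len(v) - 1

# toy paths (made-up numbers) with v >= X and v_T = X_T
X = np.array([1.0, 1.3, 0.9, 2.1, 1.7])
v = np.array([2.0, 1.9, 1.9, 2.1, 1.7])
print(first_hitting(v, X, S=0))                 # -> 3
\end{verbatim}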

In this section, we aim to express the optimal stopping times studied in the previous sections in terms of the hitting times of processes. According to Theorem A.1, the multiple optimal stopping times can be constructed by induction. Therefore, it is sufficient to study the double stopping case, for which it remains to aggregate the value function and the reward family. For this purpose, we need to impose some stronger regularity conditions.

In the remainder of this section, we assume that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies (H0)–(H5) (a typical example is the g-expectation given in Remark 2.1). The following proposition can be used to aggregate the value functions of both the single and the multiple stopping problems.

Proposition 5.1. Let $\{h(\tau),\tau\in\mathcal{S}_0\}$ be a nonnegative, RC $\mathcal{E}$ $\mathcal{E}$ -supermartingale system with $h(0)<\infty$ . Then there exists an RCLL adapted process $\{h_t\}_{t\in[0,T]}$ which aggregates the family $\{h(\tau),\tau\in\mathcal{S}_0\}$ , i.e., $h_\tau=h(\tau)$ a.s. for any $\tau\in\mathcal{S}_0$ .

Proof. Consider the process $\{h(t)\}_{t\in[0,T]}$ . Since this process is an $\mathcal{E}$ -supermartingale and the function $t\rightarrow\mathcal{E}[h(t)]$ is right-continuous, by Proposition 2.2, there is an $\mathcal{E}$ -supermartingale $\{h_t\}_{t\in[0,T]}$ which is RCLL such that for each $t\in[0,T]$ , $h_t=h(t)$ a.s. For each $n\in\mathbb{N}$ , set $\mathcal{I}_n=\big\{0,\frac{1}{2^n}\wedge T, \frac{2}{2^n}\wedge T,\cdots, T\big\}$ and $\mathcal{I}=\cup_{n=1}^\infty\mathcal{I}_n$ . Then, for any stopping time $\tau$ taking values in $\mathcal{I}$ , we have $h_\tau=h(\tau)$ , a.s., which implies that

(5.1) \begin{equation}\mathcal{E}[h(\tau)]=\mathcal{E}[h_\tau].\end{equation}

For any stopping time $\tau\in\mathcal{S}_0$ , we may construct a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}$ which takes values in $\mathcal{I}$ , such that $\tau_n\downarrow\tau$ (for instance, $\tau_n=\big(2^{-n}\lceil 2^n\tau\rceil\big)\wedge T$ ). Noting that $\{h_t\}_{t\in[0,T]}$ is RCLL, $h_{\tau_n}$ converges to $h_\tau$ . Moreover, since each $\tau_n$ takes values in $\mathcal{I}$ , we have $h_{\tau_n}=h(\tau_n)\leq \mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}h(\tau)\,=\!:\,\eta$ . Since $\{h(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, we have

\begin{align*} \sup_{\tau\in\mathcal{S}_0}\mathcal{E}[h(\tau)]\leq h(0)<\infty. \end{align*}

Then by Lemma 2.1, we obtain that $\eta\in\textrm{Dom}^+(\mathcal{E})$ . Noting that $\{h(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ and applying the dominated convergence theorem (Theorem 2.3), we may check that

(5.2) \begin{equation}\mathcal{E}[h(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[h(\tau_n)]=\lim_{n\rightarrow\infty}\mathcal{E}[h_{\tau_n}]=\mathcal{E}[h_\tau].\end{equation}

Assume that $P\big(h_\tau\neq h(\tau)\big)>0$ . Without loss of generality, we may assume that $P(A)>0$ , where $A=\{h_\tau>h(\tau)\}$ . Set $\tau_A=\tau I_A+TI_{A^c}$ . It is easy to check that $\tau_A$ is a stopping time and $h_{\tau_A}\geq h(\tau_A)$ with $P\big(h_{\tau_A}>h(\tau_A)\big)=P(A)>0$ . It follows that $\mathcal{E}[h(\tau_A)]<\mathcal{E}\big[h_{\tau_A}\big]$ , which contradicts Equation (5.2). Therefore, we obtain that $h_\tau=h(\tau)$ for any $\tau\in\mathcal{S}_0$ .

Remark 5.1. Consider the nonlinear operator $\widetilde{\mathcal{E}}^g$ induced by a backward stochastic differential equation (BSDE) with default, which is a nontrivial extension of the g-expectation generated by a BSDE (more details can be found in [Reference Grigorova, Quenez and Sulem14]). Let $\{X(\tau),\tau\in \mathcal{S}_0\}$ be an $\widetilde{\mathcal{E}}^g$ -supermartingale family. By Lemma A.1 in [Reference Grigorova, Quenez and Sulem14], there exists a right upper semicontinuous optional process $\{X_t\}_{t\in[0,T]}$ which aggregates the family $\{X(\tau),\tau\in \mathcal{S}_0\}$ . The case of a smallest $\widetilde{\mathcal{E}}^g$ -supermartingale family which is assumed to be right-continuous is addressed in [Reference Grigorova, Quenez and Sulem14, Proposition A.6].

With the help of Proposition 5.1, the value function $\{v(\tau),\tau\in\mathcal{S}_0\}$ can be aggregated as an RCLL $\mathcal{E}$ -supermartingale.

Proposition 5.2. Suppose that $\{X(\tau),\tau\in\mathcal{S}_0\}$ is an RC $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then there exists an RCLL $\mathcal{E}$ -supermartingale $\{v_t\}_{t\in[0,T]}$ which aggregates the family $\{v(S),S\in\mathcal{S}_0\}$ defined in (3.1); i.e., for each stopping time S, $v(S)=v_S$ a.s.

Proof. By Propositions 3.1 and 3.3, $\{v(S),S\in\mathcal{S}_0\}$ is a nonnegative, RC $\mathcal{E}$ $\mathcal{E}$ -supermartingale system. Recalling (3.2), we have

\begin{align*}v(0)=\mathcal{E}[v(0)]=\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty.\end{align*}

The result follows from Proposition 5.1.

For the reward family $\{X(\tau),\tau\in\mathcal{S}_0\}$ , since it is not an $\mathcal{E}$ -supermartingale system, we cannot apply Proposition 5.1 to conclude that it can be aggregated. In order to do this, we need to require the following continuity property of the reward family.

Definition 5.1. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) An admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be right-continuous along stopping times (RC) if for any $\tau\in\mathcal{S}_0$ and any sequence $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ such that $\tau_n\downarrow\tau$ , one has $X(\tau)=\lim_{n\rightarrow\infty}X(\tau_n)$ .

Remark 5.2. If the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ , then it is RC $\mathcal{E}$ . Indeed, let $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ be a sequence of stopping times such that $\tau_n\downarrow\tau$ , a.s. By Lemma 2.1, the random variable $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}X(\tau)$ belongs to $\textrm{Dom}^+(\mathcal{E})$ . Since $X(\tau_n)\leq \eta$ and $X(\tau_n)$ converges to $X(\tau)$ by right-continuity, applying the dominated convergence theorem (Theorem 2.3) implies that

\begin{align*}\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)].\end{align*}

The following theorem, obtained in [Reference Kobylanski, Quenez and Rouy-Mironescu15], is used to aggregate the reward family.

Theorem 5.1. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) Suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous along stopping times. Then there exists a progressive process $\{X_t\}_{t\in[0,T]}$ such that for each $\tau\in\mathcal{S}_0$ , $X(\tau)=X_\tau$ , a.s., and such that there exists a nonincreasing sequence of right-continuous processes $\{X^n_t\}_{t\in[0,T]}$ such that for each $(t,\omega)\in[0,T]\times\Omega$ , $\lim_{n\rightarrow\infty}X_t^n(\omega)=X_t(\omega)$ .

Now we can prove that the optimal stopping time for the single stopping problem obtained in Section 3 can be represented as a first hitting time.

Theorem 5.2. Suppose that the $\mathbb{F}$ -expectation satisfies all the assumptions (H0)–(H7). Let $\{X(\tau),\tau\in\mathcal{S}_0\}$ be an RC and LC $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then for any $S\in\mathcal{S}_0$ , the optimal stopping time of v(S) defined by (3.8) can be given by a first hitting time. More precisely, let $\{X_t\}_{t\in[0,T]}$ be the progressive process given by Theorem 5.1 that aggregates $\{X(\tau),\tau\in\mathcal{S}_0\}$ , and let $\{v_t\}_{t\in[0,T]}$ be the RCLL $\mathcal{E}$ -supermartingale that aggregates the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ . Then the random variable defined by

(5.3) \begin{equation}{\tau}(S)=\inf\{t\geq S\,:\,v_t=X_t\} \end{equation}

is the minimal optimal stopping time for v(S).

Proof. For $\lambda\in(0,1)$ , set

(5.4) \begin{equation}\bar{\tau}^\lambda(S)\,:\!=\,\inf\{t\geq S\,:\,\lambda v_t\leq X_t\}\wedge T. \end{equation}

It is easy to check that the mapping $\lambda\mapsto \bar{\tau}^\lambda(S)$ is nondecreasing. Then the stopping time

\begin{align*}\bar{\tau}(S)=\lim_{\lambda\uparrow 1}\bar{\tau}^\lambda(S)\end{align*}

is well-defined. The proof remains almost the same as the proofs of Lemma 3.1 and Theorem 3.1 if $\tau^\lambda(S)$ , $\hat{\tau}(S)$ , and $\tau^*(S)$ are replaced by $\bar{\tau}^\lambda(S)$ , $\bar{\tau}(S)$ , and $\tau(S)$ , respectively, except for the proof of Equation (3.6). In order to prove (3.6) in the present setting, that is, to prove the inequality

\begin{align*}\lambda \mathcal{E}\big[v\big(\bar{\tau}^\lambda(S)\big)\big]\leq \mathcal{E}\big[X\big(\bar{\tau}^\lambda(S)\big)\big],\end{align*}

it is sufficient to verify that for each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ ,

\begin{align*}\lambda v_{\bar{\tau}^\lambda(S)}\leq X_{\bar{\tau}^\lambda(S)} ,\textrm{ a.s.}\end{align*}

For the proof of this assertion, we may refer to Lemma 4.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]. The proof is complete.

In the following, we will show that the optimal stopping times for the multiple stopping problem can be given in terms of hitting times. For simplicity, we only consider the double stopping time problem. Let us first aggregate the value function.

Proposition 5.3. Let $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ be an RC $\mathcal{E}$ biadmissible family such that $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then there exists an $\mathcal{E}$ -supermartingale $\{v_t\}_{t\in[0,T]}$ with RCLL sample paths that aggregates the family $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1); i.e., for each $S\in\mathcal{S}_0$ , $v_S=v(S)$ , a.s.

Proof. By Propositions 4.1 and 4.3, the family $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system which is RC $\mathcal{E}$ . Remark 4.1 implies that $v(0)<\infty$ . Therefore, the result follows from Proposition 5.1.

In order to aggregate the reward family obtained by (4.3), by Theorem 5.1, it suffices to show that it is RC. Since this new reward is defined by the value function of the single stopping problem corresponding to the biadmissible family, we need to assume the following regularity condition on the biadmissible family.

Definition 5.2. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be uniformly right-continuous along stopping times (URC) if $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ and if, for each stopping time $S\in\mathcal{S}_0$ and each nonincreasing sequence of stopping times $\{S_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S$ which converges a.s. to S, one has

\begin{align*}&\lim_{n\rightarrow\infty}\Big[\!\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_0}|X(\tau,S_n)-X(\tau,S)|\Big]=0,\\&\lim_{n\rightarrow\infty}\Big[\!\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\sigma\in\mathcal{S}_0}|X(S_n,\sigma)-X(S,\sigma)|\Big]=0. \end{align*}

Theorem 5.3. Suppose that there exists an $\mathbb{F}$ -expectation $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ satisfying (H0)–(H5) that dominates $(\mathcal{E},{Dom}(\mathcal{E}))$ . Let $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ be a biadmissible family which is URC. Then the family $\{\widetilde{X}(S),S\in\mathcal{S}_0\}$ defined by (4.3) is RC.

Proof. By the expression for $\widetilde{X}$ , it is sufficient to prove that the family $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is RC. For any $\tau,\sigma\in\mathcal{S}_0$ , we define

(5.5) \begin{equation}U_1(\tau,\sigma)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_\tau}\mathcal{E}_\tau[X(\tau_1,\sigma)]. \end{equation}

Since $u_1(\tau)=U_1(\tau,\tau)$ , it remains to prove that $\{U_1(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is RC.

Now let $\{\tau_n\}_{n\in\mathbb{N}},\{\sigma_n\}_{n\in\mathbb{N}}$ be two nonincreasing sequences of stopping times that converge to $\tau$ and $\sigma$ respectively. It is easy to check that

(5.6) \begin{equation}|U_1(\tau,\sigma)-U_1(\tau_n,\sigma_n)|\leq |U_1(\tau,\sigma)-U_1(\tau_n,\sigma)|+|U_1(\tau_n,\sigma)-U_1(\tau_n,\sigma_n)|. \end{equation}

It is obvious that for each fixed $\sigma\in\mathcal{S}_0$ , the family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ is RC. By Remark 5.2, this family is also RC $\mathcal{E}$ . Note that $\{U_1(\tau,\sigma),\tau\in\mathcal{S}_0\}$ can be regarded as the value function of the single optimal stopping problem associated with the reward $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ . Although the reward family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ may not be admissible owing to the lack of adaptedness, i.e., $X(\tau,\sigma)$ is not $\mathcal{F}_\tau$ -measurable if $\tau<\sigma$ , Remarks 3.3 and 3.4 imply that $\{U_1(\tau,\sigma),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale which is RC $\mathcal{E}$ . By Proposition 5.2, we obtain that there exists an RCLL adapted process $\{U_t^{1,\sigma}\}_{t\in[0,T]}$ such that for each stopping time $\tau\in\mathcal{S}_0$ ,

(5.7) \begin{equation}U^{1,\sigma}_\tau=U_1(\tau,\sigma). \end{equation}

Hence, the first part of the right-hand side of (5.6) can be written as $\big|U^{1,\sigma}_\tau-U^{1,\sigma}_{\tau_n}\big|$ . By the right-continuity of $\big\{U_t^{1,\sigma}\big\}_{t\in[0,T]}$ , it converges to 0 as n goes to infinity.

For any $m\in\mathbb{N}$ , set $Z_m=\sup_{r\geq m}\{\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}|X(\tau,\sigma)-X(\tau,\sigma_r)|\}$ . It is easy to check that

\begin{align*}0\leq Z_m\leq 2\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau,\sigma\in\mathcal{S}_0} X(\tau,\sigma)\,=\!:\,\eta.\end{align*}

An analysis similar to the one in the proof of Lemma 2.1 shows that $\eta\in\textrm{Dom}^+(\mathcal{E})$ . Therefore, $Z_m\in\textrm{Dom}^+(\mathcal{E})$ for any $m\in\mathbb{N}$ . By a simple calculation, for any $n\geq m$ , we have

\begin{align*}|U_1(\tau_n,\sigma)-U_1(\tau_n,\sigma_n)|& \leq \mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau_n}}|\mathcal{E}_{\tau_n}[X(\tau_1,\sigma)]-\mathcal{E}_{\tau_n}[X(\tau_1,\sigma_n)]|\\ &\leq \mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau_1\in\mathcal{S}_{\tau_n}}\widetilde{\mathcal{E}}_{\tau_n}[|X(\tau_1,\sigma)-X(\tau_1,\sigma_n)|]\\ &\leq \widetilde{\mathcal{E}}_{\tau_n}[Z_m]. \end{align*}

Since, for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ , the family $\big\{\widetilde{\mathcal{E}}_t[\xi]\big\}_{t\in[0,T]}$ is right-continuous, it follows that for any $m\in\mathbb{N}$ ,

(5.8) \begin{equation}\limsup_{n\rightarrow\infty}|U_1(\tau_n,\sigma)-U_1(\tau_n,\sigma_n)|\leq \widetilde{\mathcal{E}}_{\tau}[Z_m]. \end{equation}

Note that $Z_m$ converges to 0 as m goes to infinity. By the dominated convergence theorem (Theorem 2.3), letting m go to infinity in (5.8), we obtain that the second term of the right-hand side of (5.6) converges to 0. The proof is complete.

Combining Theorems 5.1 and 5.3, we get the following aggregation result.

Corollary 5.1. Under the same hypotheses as those of Theorem 5.3, there exists some progressive right-continuous adapted process $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ which aggregates the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , i.e., for any $\tau\in\mathcal{S}_0$ , $\widetilde{X}_\tau=\widetilde{X}(\tau)$ , a.s., and such that there exists a nonincreasing sequence of right-continuous processes $\{\widetilde{X}_t^n\}_{t\in[0,T]}$ that converges to $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ .

Theorem 5.4. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7) and that the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC and ULC ${\mathcal{E}}$ . Then the optimal stopping time for the value function defined by (4.1) can be given in terms of some first hitting times.

Proof. Let $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ be the new reward family given by (4.3). By Theorems 4.3 and 5.3, it is LC $\mathcal{E}$ and RC. Applying Theorem 5.1, there exists a progressively measurable process $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ which aggregates this family. Let $\{u_t\}_{t\in[0,T]}$ be an RCLL process that aggregates the value function defined by (4.4), which corresponds to the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ by Proposition 5.2. Then Theorem 5.2 implies that, for any $S\in\mathcal{S}_0$ , the stopping time

\begin{align*}\theta^*=\inf\{t\geq S\,:\,u_t=\widetilde{X}_t\}\end{align*}

is optimal for u(S).

For each $\theta\in\mathcal{S}_{\theta^*}$ , set $X^{(1)}(\theta)=X(\theta,\theta^*)$ and $X^{(2)}(\theta)=X(\theta^*,\theta)$ . For $i=1,2$ , it is obvious that the family $\{X^{(i)}(\theta),\theta\in\mathcal{S}_{\theta^*}\}$ is admissible, RC, and LC ${\mathcal{E}}$ . In order to aggregate this family using Theorem 5.1, we need to extend its definition to all stopping times $\theta\in\mathcal{S}_0$ . One of the candidates is

\begin{align*}\widetilde{X}^{(i)}(\theta)=X^{(i)}(\theta)I_{\{\theta\geq \theta^*\}}-I_{\{\theta<\theta^*\}}.\end{align*}

It is easy to check that the family $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_0\big\}$ is admissible, RC, and left-continuous in expectation along stopping times greater than $\theta^*$ . By Theorem 5.1, there exists a progressive process $\{\widetilde{X}^{(i)}_t\}_{t\in[0,T]}$ that aggregates $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_{0}\big\}$ . Consider the following value function:

\begin{align*}\widetilde{v}^{(i)}(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[\widetilde{X}^{(i)}(\tau)\big].\end{align*}

Applying Theorem 3.2, we obtain that the family $\big\{\widetilde{v}^{(i)}(S),S\in\mathcal{S}_0\big\}$ is an RC $\mathcal{E}$ $\mathcal{E}$ -supermartingale system. Furthermore, for any $S\geq \theta^*$ , we have $\widetilde{v}^{(i)}(S)=u_i(S)$ , where $u_i$ is defined by (4.2). By Proposition 5.2, there exists an RCLL process $\big\{\widetilde{v}^i_t\big\}_{t\in[0,T]}$ that aggregates the family $\{\widetilde{v}^{(i)}(S),S\in\mathcal{S}_{0}\}$ . Now, we define

\begin{align*}\theta^*_i=\inf\!\big\{t\geq \theta^*\,:\,\widetilde{v}^i_t=\widetilde{X}^{(i)}_t\big\}.\end{align*}

By an analysis similar to the one in the proof of Theorem 3.2, Theorem 5.2 still holds for the reward family given by $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_0\big\}$ , which implies that the stopping time $\theta^*_i$ is optimal for $\widetilde{v}^{(i)}(\theta^*)$ , and hence optimal for $u_i(\theta^*)$ . Now, set $B=\{u_1(\theta^*)\leq u_2(\theta^*)\}=\big\{\widetilde{v}^{(1)}(\theta^*)\leq \widetilde{v}^{(2)}(\theta^*)\big\}=\big\{\widetilde{v}^1_{\theta^*}\leq \widetilde{v}^2_{\theta^*}\big\}$ . By Proposition 4.2, the pair of stopping times $\big(\tau_1^*,\tau_2^*\big)$ given by

\begin{align*}\tau_1^*=\theta^* I_B+\theta_1^* I_{B^c}, \qquad \tau_2^*=\theta^*_2I_B+\theta^*I_{B^c}\end{align*}

is optimal for v(S). The proof is complete.

Appendix A

As in Section 4, we assume that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies Assumptions (H0)–(H5). Now we introduce the optimal d-stopping time problem. The reward family should satisfy the following conditions.

Definition A.1. A family of random variables $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be d-admissible if it satisfies the following conditions:

(1) for all $\tau=(\tau_1,\cdots,\tau_d)\in\mathcal{S}_0^d$ , $X(\tau)\in \textrm{Dom}^+_{\tau_1\vee\cdots\vee\tau_d}(\mathcal{E})$ ;

(2) for all $\tau,\sigma\in\mathcal{S}_0^d$ , $X(\tau)=X(\sigma)$ a.s. on $\{\tau=\sigma\}$ .

For each fixed stopping time $S\in\mathcal{S}_0$ , the value function of the optimal d-stopping time problem associated with the reward family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is given by

(A.1) \begin{equation} v(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S^d}\mathcal{E}_S[X(\tau)]=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits\{\mathcal{E}_S[X(\tau_1,\cdots,\tau_d)],\tau_1,\cdots,\tau_d\in\mathcal{S}_S\}. \end{equation}

Similarly to the optimal double stopping time case, the family $\{v(S),S\in\mathcal{S}_0\}$ is admissible and is an $\mathcal{E}$ -supermartingale system, as the following proposition shows.

Proposition A.1. Let $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ be a d-admissible family of random variables with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the value function $\{v(S),S\in\mathcal{S}_0\}$ defined by (A.1) satisfies the following properties:

(i) $\{v(S),S\in\mathcal{S}_0\}$ is an admissible family;

(ii) for each $S\in\mathcal{S}_0$ , there exists a sequence of stopping times $\{\tau^n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S^d$ such that the sequence $\{\mathcal{E}_S[X(\tau^n)]\}_{n\in\mathbb{N}}$ is nondecreasing and converges to v(S);

(iii) $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system;

(iv) for each $S\in\mathcal{S}_0$ , we have $\mathcal{E}[v(S)]=\sup_{\tau\in\mathcal{S}_S^d}\mathcal{E}[X(\tau)]$ .

In the following, we will interpret the value function v(S) defined in (A.1) as the value function of an optimal single stopping problem associated with a new reward family. For this purpose, for each $i=1,\cdots,d$ and $\theta\in\mathcal{S}_0$ , consider the following random variable:

(A.2) \begin{equation} u^{(i)}(\theta)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_\theta^{d-1}}\mathcal{E}_\theta\big[X^{(i)}(\tau,\theta)\big], \end{equation}

where

(A.3) \begin{equation} X^{(i)}(\tau_1,\cdots,\tau_{d-1},\theta)=X(\tau_1,\cdots,\tau_{i-1},\theta,\tau_{i},\cdots,\tau_{d-1}). \end{equation}

It is easy to see that $u^{(i)}(\theta)$ is the value function of the optimal $(d-1)$ -stopping problem corresponding to the reward $\big\{X^{(i)}(\tau,\theta),\tau\in\mathcal{S}_\theta^{d-1}\big\}$ . Now we define

(A.4) \begin{equation} \widehat{X}(\theta)=\max\!\big\{u^{(1)}(\theta),\cdots,u^{(d)}(\theta)\big\} \end{equation}

and

(A.5) \begin{equation} u(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[\widehat{X}(\tau)\big]. \end{equation}

The following theorem indicates that the value function v defined by (A.1) coincides with u.

Theorem A.1. Let $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ be a d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then, for any $S\in\mathcal{S}_0$ , we have $v(S)=u(S)$ .

With the above characterization of the value function, we can propose a construction of the optimal multiple stopping times by induction.

Proposition A.2. For any fixed $S\in\mathcal{S}_0$ , suppose the following:

(1) there exists $\theta^*\in\mathcal{S}_S$ such that $u(S)=\mathcal{E}_S\big[\widehat{X}(\theta^*)\big]$ ;

(2) for any $i=1,\cdots,d$ , there exists $\theta^{(i)*}= \left(\theta_1^{(i)*},\cdots,\theta_{i-1}^{(i)*},\theta_{i+1}^{(i)*},\cdots,\theta_d^{(i)*} \right)\in\mathcal{S}_{\theta^*}^{d-1}$ such that $u^{(i)}(\theta^*)=\mathcal{E}_{\theta^*}[X^{(i)}(\theta^{(i)*},\theta^*)]$ .

Let $\{B_i\}_{i=1}^d$ be a partition of $\Omega$ into disjoint $\mathcal{F}_{\theta^*}$ -measurable sets such that $\widehat{X}(\theta^*)=u^{(i)}(\theta^*)$ on the set $B_i$ , $i=1,\cdots,d$ . Set

(A.6) \begin{equation}\tau_j^*=\theta^* I_{B_j}+\sum_{i\neq j,i=1}^d \theta^{(i)*}_j I_{B_i}.\end{equation}

Then $\tau^*=\big(\tau_1^*,\cdots,\tau_d^*\big)$ is optimal for v(S), and $\tau_1^*\wedge\cdots\wedge\tau_d^*=\theta^*$ .

Proposition A.3. For any fixed $S\in\mathcal{S}_0$ , suppose that $\tau^*=\big(\tau_1^*,\cdots,\tau_d^*\big)$ is optimal for v(S). Then we have the following:

(1) $\tau_1^*\wedge\cdots\wedge \tau_d^*$ is optimal for u(S);

(2) for any $i=1,\cdots,d$ , $\big(\tau_1^*,\cdots,\tau_{i-1}^*,\tau^*_{i+1},\cdots,\tau_d^*\big)$ is optimal for $u^{(i)}\big(\tau_i^*\big)$ on the set $\big\{\tau_1^*\wedge\cdots\wedge \tau_d^*=\tau_i^*\big\}$ .

Remark A.1. None of the results stated so far in this appendix requires any regularity assumption on the reward family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ .

The definition of continuity for the reward with d parameters is similar to the one for the double stopping case.

Definition A.2. A d-admissible family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if, for any $\tau\in\mathcal{S}_0^d$ and any sequence $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0^d$ such that $\tau_n\downarrow \tau$ (resp., $\tau_n\uparrow \tau$ ), one has $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ . If the family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is both RC $\mathcal{E}$ and LC $\mathcal{E}$ , it is said to be continuous along stopping times in $\mathcal{E}$ -expectation (C $\mathcal{E}$ ).

Proposition A.4. Suppose that $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is an RC $\mathcal{E}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\{v(S),S\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ .

Remark A.2. As in the analysis of Remark 3.4, suppose that $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ and is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ (i.e., if a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_\sigma^d$ satisfies $\tau_n\downarrow \tau$ , then one has $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ ). Then the family of value functions $\{v(S),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ .

By Theorem A.1 and Proposition A.2, the value function and the optimal multiple stopping times of the optimal d-stopping problem can be constructed from those of the optimal $(d-1)$ -stopping problem. Therefore, by induction, the multiple stopping problem can be reduced to nested single stopping problems. In addition, the existence of the optimal stopping time for the single stopping problem associated with the new reward $\big\{\widehat{X}(S),S\in\mathcal{S}_0\big\}$ is the building block for constructing the optimal stopping time for the original d-stopping problem. According to Theorem 3.1, it remains to investigate the regularity of this new reward family.
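
As a toy illustration of this induction (and only as an illustration: the sketch below replaces the general reward by the additive, swing-type special case $X(\tau_1,\cdots,\tau_d)=Y(\tau_1)+\cdots+Y(\tau_d)$ with exercise dates at least one period apart, and replaces $\mathcal{E}$ by the classical linear expectation), the nested single stopping problems collapse to a backward recursion over the number of remaining rights. The i.i.d. two-point payoff model is an assumption chosen only so that every conditional expectation is a plain average.

\begin{verbatim}
import numpy as np

N, d = 10, 3                          # horizon and number of exercise rights
hi = np.linspace(2.0, 1.0, N + 1)     # Y_t equals hi[t] or lo[t] with
lo = np.zeros(N + 1)                  # probability 1/2, independently in t

# u[k][t] = expected value at time t with k rights left; u[0][.] = 0.
u = np.zeros((d + 1, N + 2))
for k in range(1, d + 1):
    for t in range(N, -1, -1):
        keep = u[k][t + 1]                      # keep all k rights
        use_hi = hi[t] + u[k - 1][t + 1]        # exercise one right now
        use_lo = lo[t] + u[k - 1][t + 1]
        u[k][t] = 0.5 * max(use_hi, keep) + 0.5 * max(use_lo, keep)

print("expected payoff with", d, "exercise rights:", u[d][0])
\end{verbatim}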

Definition A.3. A d-admissible family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be uniformly right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [URC $\mathcal{E}$ (resp., ULC $\mathcal{E}$ )] if for each $i=1,\cdots,d$ , $S\in\mathcal{S}_0$ , and sequence of stopping times $\{S_n\}_{n\in\mathbb{N}}$ such that $S_n\downarrow S$ (resp., $S_n\uparrow S$ ), one has

\begin{align*}\lim_{n\rightarrow\infty}\sup_{\theta\in\mathcal{S}_0^{d-1}}\mathcal{E}[|X^{(i)}(\theta,S_n)-X^{(i)}(\theta,S)|]=0. \end{align*}

Proposition A.5. Let $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ be an $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a URC $\widetilde{\mathcal{E}}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\big\{\widehat{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (A.4) is RC $\mathcal{E}$ .

Since the left-continuity along stopping times in $\mathcal{E}$ -expectation relies on the existence of optimal stopping times, the conditions under which the LC $\mathcal{E}$ property holds are more restrictive than those for the RC $\mathcal{E}$ case, and the proof of LC $\mathcal{E}$ is more complicated, as explained before the statement of Theorem 4.3 in Section 4.

Proposition A.6. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a UC ${\mathcal{E}}$ d-admissible family (i.e., both URC $\mathcal{E}$ and ULC $\mathcal{E}$ ) with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\big\{\widehat{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (A.4) is LC $\mathcal{E}$ .

With the help of Propositions A.2, A.5, and A.6, we can now establish the existence result for the optimal stopping times for the multiple stopping problem.

Theorem A.2. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7) and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a UC ${\mathcal{E}}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then there exists an optimal stopping time $\tau^*\in\mathcal{S}_S^d$ for v(S), that is,

\begin{align*}v(S)=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{sup}}}\limits_{\tau\in\mathcal{S}_S^d}\mathcal{E}_S[X(\tau)]=\mathcal{E}_S[X(\tau^*)]. \end{align*}

In order to characterize the optimal multiple stopping times in a minimal way, we first define a partial order relation $\prec_d$ on $\mathbb{R}^d$ . This relation can be found in [Reference Kobylanski, Quenez and Rouy-Mironescu15]; for the reader’s convenience, we also state it here. For $d=1$ and any $a,b\in\mathbb{R}$ , $a\prec_1 b$ if and only if $a\leq b$ , and for $d>1$ and any $(a_1,\cdots,a_d),(b_1,\cdots,b_d)\in\mathbb{R}^d$ , $(a_1,\cdots,a_d)\prec_d(b_1,\cdots,b_d)$ if and only if either $a_1\wedge\cdots\wedge a_d<b_1\wedge \cdots\wedge b_d$ , or

\begin{align*}\begin{cases}a_1\wedge\cdots\wedge a_d=b_1\wedge \cdots\wedge b_d, \textrm{ and, for } i=1,2,\cdots,d,\\a_i=a_1\wedge\cdots\wedge a_d\Rightarrow\begin{cases}b_i=b_1\wedge \cdots\wedge b_d \textrm{ and }\\\big(a_1,\cdots,a_{i-1},a_{i+1},\cdots,a_d\big)\prec_{d-1}\big(b_1,\cdots,b_{i-1},b_{i+1},\cdots,b_d\big).\end{cases}\end{cases}\end{align*}
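
Since the recursion in this definition is easy to misread, the following small sketch (with tuples of real numbers standing in for the realized stopping times at a fixed $\omega$ , an illustrative device only) checks the relation $\prec_d$ as written above.

\begin{verbatim}
def prec(a, b):
    """Check a prec_d b for two tuples of the same length d."""
    d = len(a)
    if d == 1:
        return a[0] <= b[0]
    ma, mb = min(a), min(b)
    if ma < mb:
        return True
    if ma > mb:
        return False
    # equal minima: every index attaining min(a) must attain min(b), and the
    # reduced (d-1)-tuples must satisfy prec_{d-1}
    for i in range(d):
        if a[i] == ma:
            if b[i] != mb:
                return False
            if not prec(a[:i] + a[i + 1:], b[:i] + b[i + 1:]):
                return False
    return True

print(prec((1, 3, 2), (1, 4, 2)), prec((2, 3, 1), (1, 5, 4)))   # True False
\end{verbatim}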

Definition A.4. For each fixed $S\in\mathcal{S}_0$ , a d-stopping time $(\tau_1,\cdots,\tau_d)\in\mathcal{S}_S^d$ is said to be d-minimal optimal for the value function v(S) defined by (A.1) if it is minimal for the order $\prec_d$ in the set $\big\{\tau\in\mathcal{S}_S^d\,:\,v(S)=\mathcal{E}_S[X(\tau)]\big\}$ , which is the collection of all optimal stopping times.

Proposition A.7. For each fixed $S\in\mathcal{S}_0$ , a d-stopping time $(\tau_1,\cdots,\tau_d)\in\mathcal{S}_S^d$ is d-minimal optimal for the value function v(S) defined by (A.1) if and only if the following hold:

(1) $\theta^*=\tau_1\wedge\cdots\wedge \tau_d$ is the minimal optimal stopping time for u(S) defined by (A.5);

(2) for $i=1,\cdots,d$ , the $(d-1)$ -tuple $\theta^{*(i)}=(\tau_1,\cdots,\tau_{i-1},\tau_{i+1},\cdots,\tau_d)\in\mathcal{S}_S^{d-1}$ is the $(d-1)$ -minimal optimal stopping time for $u^{(i)}(\theta^*)$ defined by (A.2) on the set $\big\{u^{(i)}(\theta^*)\geq \vee_{k\neq i} u^{(k)}(\theta^*)\big\}$ .

Acknowledgements

We thank the associate editor and the anonymous referees for their pertinent comments, which helped to improve this work.

Funding information

The authors gratefully acknowledge financial support from the Qilu Young Scholars Program of Shandong University and the German Research Foundation (DFG) through the Collaborative Research Centre 1283, Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Bayraktar, E. and Yao, S. (2011). Optimal stopping for non-linear expectations—Part I. Stoch. Process. Appl. 121, 185–211.
Bayraktar, E. and Yao, S. (2011). Optimal stopping for non-linear expectations—Part II. Stoch. Process. Appl. 121, 212–264.
Bender, C. and Schoenmakers, J. (2006). An iterative method for multiple stopping: convergence and stability. Adv. Appl. Prob. 38, 729–749.
Carmona, R. and Dayanik, S. (2008). Optimal multiple stopping of linear diffusions. Math. Operat. Res. 33, 446–460.
Carmona, R. and Touzi, N. (2008). Optimal multiple stopping and valuation of swing options. Math. Finance 18, 239–268.
Cheng, X. and Riedel, F. (2013). Optimal stopping under ambiguity in continuous time. Math. Financial Econom. 7, 29–68.
Coquet, F., Hu, Y., Mémin, J. and Peng, S. (2002). Filtration-consistent nonlinear expectations and related g-expectations. Prob. Theory Relat. Fields 123, 1–27.
El Karoui, N. (1981). Les aspects probabilistes du contrôle stochastique. In École d’Été de Probabilités de Saint-Flour IX, Springer, Berlin, pp. 73–238.
El Karoui, N. and Quenez, M. C. (1996). Non-linear pricing theory and backward stochastic differential equations. In Financial Mathematics, Springer, Berlin, Heidelberg, pp. 191–246.
Ferrari, G., Li, H. and Riedel, F. (2022). A Knightian irreversible investment problem. To appear in J. Math. Anal. Appl. 507.
Föllmer, H. and Schied, A. (2016). Stochastic Finance: An Introduction in Discrete Time, 4th edn. Walter de Gruyter, Berlin.
Grigorova, M. et al. (2017). Reflected BSDEs when the obstacle is not right-continuous and optimal stopping. Ann. Appl. Prob. 27, 3153–3188.
Grigorova, M. et al. (2020). Optimal stopping with f-expectations: the irregular case. Stoch. Process. Appl. 130, 1258–1288.
Grigorova, M., Quenez, M. C. and Sulem, A. (2020). European options in a non-linear incomplete market model with default. SIAM J. Financial Math. 11, 849–880.
Kobylanski, M., Quenez, M. C. and Rouy-Mironescu, E. (2011). Optimal multiple stopping time problem. Ann. Appl. Prob. 21, 1364–1399.
Meinshausen, N. and Hambly, B. M. (2004). Monte Carlo methods for the valuation of multiple-exercise options. Math. Finance 14, 557–583.
Peskir, G. and Shiryaev, A. N. (2006). Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel.