1. Introduction
Consider two insurance companies that aim to collaborate so as to maximize their joint survival probability, or equivalently, to minimize the probability that one of the two companies gets ruined. Assume that the two companies can commit themselves to helping each other in case of financial distress. To assess the benefit of such a collaboration, Grandits [Reference Grandits5] has set up a model where the endowment processes, also called surplus processes, of the two companies are given by two independent Brownian motions with drift, and the companies can collaborate by transfer payments. These payments are assumed to be absolutely continuous with respect to the Lebesgue measure and to be bounded in such a way that each company keeps a minimal positive drift rate.
The collaborations considered in [Reference Grandits5] are assumed to have an impact only on the drift rates of the companies’ endowment processes. There are types of collaboration, however, that also entail a change of the diffusion rates; for example, think of mutual reinsurance agreements or agreements to transfer high-risk subsidiaries. In this paper we address the question of how to quantify the maximal benefit if a collaboration also has an impact on the diffusion rate of the two endowment processes.
To measure the benefit of collaboration we introduce a control problem, where an agent can continuously allocate a drift and diffusion rate to two diffusion processes representing the endowment processes of the two companies. The aggregate drift and diffusion rates are assumed to be constant and independent of the allocation plan. Moreover, we assume that the set of implementable drift rates is bounded, and the set of implementable diffusion rates is bounded and bounded away from zero. The agent aims at choosing an allocation plan that maximizes the joint survival probability of the two companies. One can think of the agent as a mediator between the companies suggesting a mutual help contract.
As in [Reference Grandits5], the optimal control turns out to be of bang-bang type: it is optimal for the agent to implement the highest possible risk-adjusted return, defined as the ratio of the drift rate to the squared volatility, in the endowment dynamics of the company that is behind. Moreover, the formula for the value function reveals that the maximal joint survival probability depends only on the maximal implementable risk-adjusted return and on the risk-adjusted return of the aggregate endowment process. Our assumptions entail that the latter does not depend on the allocation strategy.
We solve the control problem via a classical verification technique. To this end we construct an explicit solution of the associated Hamilton–Jacobi–Bellman (HJB) equation. We use the fact that the optimal control can be characterized as a bang-bang feedback function jumping at the line bisecting the first quadrant, where the first quadrant is interpreted as the set of non-negative endowment pairs. Since the optimal control is of bang-bang type, the HJB equation is linear both below and above the bisector. The boundary conditions and a smooth fit condition along the bisector lead to a specific solution of the HJB equation, which can be verified to coincide with the value function. We remark that our construction of the solution of the HJB equation and also the verification bear some similarities to the approach used in [Reference Grandits5].
McKean and Shepp [Reference McKean and Shepp10] and Grandits [Reference Grandits5] both consider the problem of maximizing the joint survival probability of two firms whose endowment processes are given by independent Brownian motions with drift and which are allowed to collaborate by transfer payments. In [Reference McKean and Shepp10] these transfer payments are at most as high as the drift rates, whereas in [Reference Grandits5] each company keeps a given positive minimal drift rate. In both cases the value function is derived and turns out to be a classical solution to the associated HJB equation. We emphasize that we allow for negative drift rates in our model. Grandits and Klein [Reference Grandits and Klein6] extend the model of [Reference Grandits5] and [Reference McKean and Shepp10] to endowment processes driven by Brownian motions that are correlated. In all three articles [Reference Grandits5, Reference Grandits and Klein6, Reference McKean and Shepp10] the derived optimal strategy is of bang-bang type and implements the highest possible risk-adjusted return for the company behind.
Schmidli [Reference Schmidli11] deals with maximizing the survival probability of one company by choosing an optimal dynamic proportional reinsurance strategy in the diffusion model. Also in this model the optimal strategy maximizes the risk-adjusted return among all admissible strategies.
Finally, the literature also comprises several articles analyzing the ruin probability within multidimensional Brownian risk models with non-controllable dynamics; see e.g. [Reference Dębicki, Hashorva and Michna4] and [Reference Ji7].
The paper is organized as follows. In Section 2 we introduce our model and provide the value function and an optimal strategy. We explain how to derive the formula for the value function in Section 3. Finally, we prove our results in Section 4.
2. Model and main results
Let $\underline \sigma, \overline \sigma \in (0, \infty)$ with $\underline{\sigma} \le \overline \sigma$ and $\underline \mu, \overline \mu \in \mathbb{R}$ such that $\underline \mu \le \overline \mu$ . We define
and assume that
Let D be a measurable, non-empty subset of $[\underline \mu, \overline \mu] \times [\underline \sigma, \overline \sigma]$ such that
We interpret an element $(\mu, \sigma) \in D$ as an implementable pair of drift and diffusion rates for the endowment process of each company. Condition (2.2) means that the sets of implementable drift and diffusion rate pairs for the two companies coincide. At the end of the section we provide an explicit example of D.
The set of admissible controls consists of all measurable functions $(\mu, \sigma)\colon \mathbb{R}^2 \to D$ and is denoted by $\mathcal{M}$ .
We denote the endowment processes of the two companies by $X = (X_t)_{t \in [0, \infty)}$ and $Y = (Y_t)_{t \in [0, \infty)}$ , respectively. Given a control $(\mu, \sigma)$ , we assume that the dynamics of the pair (X, Y) satisfy the stochastic differential equation (SDE)
where $W = (W^1, W^2)$ denotes a two-dimensional Brownian motion and $(x,y) \in \mathbb{R}^2$ . Note that in (2.3) we use the notation $\sqrt{\sigma(X_t,Y_t)}$ for the volatility instead of the more common $\sigma(X_t,Y_t)$, as this turns out to be more convenient in later considerations. For every $(\mu, \sigma) \in \mathcal{M}$ and $(x,y) \in \mathbb{R}^2$ , there exists a weak solution of (2.3) satisfying the initial condition $(X_0, Y_0) = (x,y)$ , and we have uniqueness in law for (2.3) (see Theorem 3 and the following comment in [Reference Krylov9]). Recall that a weak solution of (2.3) consists of a tuple $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in [0, \infty)}, \mathbb{P}, W, X,Y)$ , where the first four components form a filtered probability space, W is a two-dimensional Brownian motion with respect to the filtration $(\mathcal{F}_t)$ , and the processes X, Y, W satisfy the SDE (2.3) (see e.g. Section 5.3 in [Reference Karatzas and Shreve8]).
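A minimal simulation sketch of the controlled system may help to fix ideas. It assumes that the dynamics (2.3) take the form $\mathrm{d}X_t=\mu(X_t,Y_t)\,\mathrm{d}t+\sqrt{\sigma(X_t,Y_t)}\,\mathrm{d}W^1_t$ and $\mathrm{d}Y_t=(M-\mu(X_t,Y_t))\,\mathrm{d}t+\sqrt{\Sigma-\sigma(X_t,Y_t)}\,\mathrm{d}W^2_t$, i.e. the control is the pair allocated to the first company while the second company receives the complementary rates; the rectangle D, all parameter values, and the bang-bang feedback rule below are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper): a rectangle
# D = [mu_lo, mu_hi] x [sig_lo, sig_hi] with mu_lo + mu_hi = M and
# sig_lo + sig_hi = Sigma, so that the symmetry condition (2.2) can hold.
mu_lo, mu_hi = -0.5, 1.5
sig_lo, sig_hi = 0.5, 1.5
M, Sigma = mu_lo + mu_hi, sig_lo + sig_hi   # aggregate drift and diffusion rate

def estimate_joint_survival(x0, y0, n_paths=4000, T=50.0, dt=0.01, seed=0):
    """Euler-Maruyama estimate of the probability that neither company is
    ruined before time T under the assumed bang-bang feedback control: the
    company that is behind receives the pair (mu_hi, sig_lo) with maximal
    risk-adjusted return, the other one the complementary pair.  The finite
    horizon T only approximates survival on [0, infinity)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    sqdt = np.sqrt(dt)
    x = np.full(n_paths, float(x0))
    y = np.full(n_paths, float(y0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        x_behind = x <= y
        mu = np.where(x_behind, mu_hi, M - mu_hi)         # drift allocated to X
        sig = np.where(x_behind, sig_lo, Sigma - sig_lo)  # squared volatility of X
        x = x + mu * dt + np.sqrt(sig) * sqdt * rng.standard_normal(n_paths)
        y = y + (M - mu) * dt + np.sqrt(Sigma - sig) * sqdt * rng.standard_normal(n_paths)
        alive &= (x > 0.0) & (y > 0.0)
    return alive.mean()

print("estimated joint survival probability:", estimate_joint_survival(1.0, 1.0))
```

Increasing the assumed value of mu_hi (and hence the maximal implementable risk-adjusted return) should increase the estimate, in line with Remark 2.1(b) below.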
Now let $x,y \in [0, \infty)$ and $(\mu, \sigma) \in \mathcal{M}$ . Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in [0, \infty)}, \mathbb{P}, W, X,Y)$ be a weak solution of (2.3) with initial condition $(X_0, Y_0) = (x,y)$ . The probability that both companies survive is given by
We refer to $J(x,y,\mu, \sigma)$ as the joint survival probability of the two companies, given initial endowments (x, y) and a collaboration control $(\mu, \sigma)$ . The maximal joint survival probability for an initial endowment $(x,y)\in [0,\infty)^2$ is given by
We comment further on the model assumptions. Notice that we allow for Markov controls only. The time-homogeneous dynamics (2.3) entail that there exists an optimal control that is a Markov control. To simplify the outline of the model, we restrict the control set to Markov controls upfront.
Notice that the volatilities of both processes X and Y are bounded away from zero. Hence the probability in (2.4) does not change if we replace $\ge$ with the strict inequality symbol $>$ .
Assumption (2.1) means that the drift rate of the aggregate endowment process $X + Y$ is positive. If M were non-positive, then with probability one the aggregate process would hit zero. This further implies that at least one of the two companies gets ruined, and hence the value function (2.5) would be identically equal to zero. Thus the only interesting case is where (2.1) is satisfied.
The symmetry (2.2) of D facilitates the search for the optimal strategy and for a closed-form formula of the value function, which turns out to be symmetric about the line bisecting the first quadrant.
It turns out that the maximal joint survival probability essentially depends only on the two ratios
Notice that $L\leq {{\overline\mu}/{\underline{\sigma}}}<\infty$ , because $D\subseteq[\underline\mu, \overline\mu]\times[\underline\sigma,\overline\sigma]$ .
Our main result is as follows.
Theorem 2.1. The value function of the optimal control problem (2.5) is given by
If L is attained in D, say by $(\hat\mu,\hat\sigma)$ , then an optimal control is given by
Remark 2.1.
(a) The value function V depends only on the ratio L, the maximal implementable risk-adjusted return, and the ratio S, the risk-adjusted return of the aggregate endowment process.
(b) One can show that the value function V is continuous and strictly increasing in L and S. This fact is supported by the following observations. Since L is the maximal possible risk-adjusted return, which is assigned to the company that is behind, increasing L increases the joint survival probability as well. In addition, the higher S, the smaller the ruin probability of the aggregate endowment process.
Remark 2.2. Observe that we can change the definition of the optimal control $(\mu^*, \sigma^*)$ on the set $\{x=y\}$ and obtain indistinguishable processes $(X_t^*, Y_t^*), t\in [0,\infty)$ , because with probability one the set $\{t \in [0,\infty) \colon X_t^*-Y_t^*=0\}$ has Lebesgue measure zero; for details see Appendix C in [Reference Beneš2].
Since we have an explicit formula for the value function, we can quantify the gain of collaboration. To this end, we assume that in the case of no collaboration both endowment processes have a constant drift rate of ${{M}/{2}}$ and a constant diffusion rate of $\sqrt{{{\Sigma}/{2}}}$ .
The probability that a Brownian motion with drift rate ${{M}/{2}}$ and diffusion rate $\sqrt{{{\Sigma}/{2}}}$ , started at $z \in (0, \infty)$ , ever hits zero is given by ${\textrm{e}}^{-2\,S\,z}$ (see e.g. [Reference Asmussen and Albrecher1, Chapter V.5, equation (5.6)]). Thus, in the case of no collaboration, the joint survival probability is given by
Notice that (2.8) also follows from (2.6) by restricting D to the set containing only the element $({{M}/{2}}, {{\Sigma}/{2}})$ .
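As a numerical illustration of this hitting probability, the following sketch estimates the ruin probability of a single Brownian motion with drift rate ${{M}/{2}}$ and diffusion rate $\sqrt{{{\Sigma}/{2}}}$ by Euler simulation and compares it with ${\textrm{e}}^{-2\,S\,z}$; since the two endowment processes are driven by independent Brownian motions, the joint survival probability without collaboration is the product of the two individual survival probabilities. The aggregate rates, initial endowments, and the finite simulation horizon are illustrative assumptions.

```python
import numpy as np

M, Sigma = 1.0, 2.0            # assumed aggregate rates
S = M / Sigma                  # risk-adjusted return of the aggregate process

def ruin_prob_mc(z, T=50.0, dt=0.01, n_paths=2000, seed=0):
    """Monte Carlo estimate of the probability that a Brownian motion with
    drift M/2 and volatility sqrt(Sigma/2), started at z, hits zero before
    time T.  For large T this approximates the probability of ever being
    ruined; discrete monitoring slightly underestimates it."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    incr = (M / 2) * dt + np.sqrt(Sigma / 2 * dt) * rng.standard_normal((n_paths, n_steps))
    paths = z + np.cumsum(incr, axis=1)
    return float((paths.min(axis=1) <= 0.0).mean())

z = 1.0
print("Monte Carlo ruin probability:", ruin_prob_mc(z))
print("closed form exp(-2*S*z)     :", np.exp(-2 * S * z))

# Without collaboration the two endowment processes are independent, so the
# joint survival probability for initial endowments (x, y) is the product
# (1 - exp(-2*S*x)) * (1 - exp(-2*S*y)).
x, y = 1.0, 2.0
print("joint survival without collaboration:",
      (1 - np.exp(-2 * S * x)) * (1 - np.exp(-2 * S * y)))
```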
In order to quantify the gain of collaboration we introduce
Note that R is the relative increase of the maximal joint survival probability due to a collaboration.
Corollary 2.1. R is non-increasing in both x and y,
Moreover, for every $a>0$ we have
Remark 2.3. For a set D of implementable drift and diffusion rates satisfying (2.2) and $L>S$ , the relative increase of the maximal joint survival probability also depends only on L and S. Corollary 2.1 implies that a risk transfer is of particular interest if one company is (or both companies are) close to ruin.
See Figure 1 for plots of the function R for different values of L and $S=1$ .
Remark 2.4. Observe that $L=L(D)\geq S> 0$ . Moreover, we have $L>S$ if and only if there exists $(\mu,\sigma)\in D$ with ${{\mu}/{\sigma}}\neq S$ . To show the claim we distinguish three cases.
If D contains an element $(\mu, \sigma)$ with ${{\mu}/{\sigma}}>S$ , then also $L=\sup_{(\mu,\sigma)\in D }{{\mu}/{\sigma}}>S.$
If there exists $(\mu,\sigma)\in D$ with ${{\mu}/{\sigma}}<S$ , then $(M-\mu,\Sigma-\sigma)\in D$ by assumption (2.2) and
Finally, if ${{\mu}/{\sigma}}=S$ for all $(\mu,\sigma)\in D$ , then $L=S$ holds true.
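Written out, the middle case rests on the following elementary equivalence, using $S={{M}/{\Sigma}}$ and $\Sigma-\sigma\geq\underline{\sigma}>0$ (the latter because $(M-\mu,\Sigma-\sigma)\in D\subseteq[\underline{\mu},\overline{\mu}]\times[\underline{\sigma},\overline{\sigma}]$):
\[
\frac{\mu}{\sigma}<\frac{M}{\Sigma}
\iff \mu\,\Sigma<M\,\sigma
\iff (M-\mu)\,\Sigma>M\,(\Sigma-\sigma)
\iff \frac{M-\mu}{\Sigma-\sigma}>\frac{M}{\Sigma}=S,
\]
so the complementary pair has a risk-adjusted return strictly greater than S, and hence $L>S$ .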
We close the section with an example of the set D of implementable drift and diffusion rates.
Example 2.1. Suppose that the firms can divide two assets between them, where the first asset has a return with drift rate $\mu_1\in \mathbb{R}$ and diffusion rate $\sigma_1 \in (0, \infty)$ and the second asset has drift rate $\mu_2\in \mathbb{R}$ and diffusion rate $\sigma_2 \in (0, \infty)$ . Suppose that $\mu_1+\mu_2>0$ with $\mu_1 < \mu_2$ and $\sigma_1 < \sigma_2$ , and that each firm wants to possess at least $\delta \in (0, 1)$ asset shares. Then the set of implementable drift and diffusion pairs consists of
Notice that the smallest rectangle containing D is $[\underline \mu, \overline \mu] \times [\underline \sigma, \overline \sigma]$ with
In the case ${{\mu_1}/{\sigma_1}} < {{\mu_2}/{\sigma_2}}$ , the set D is a hexagon (see Figure 2). The bold borderline of the hexagon consists of all drift and diffusion pairs with maximal ratio L. Hence any optimal control assigns to the firm with smaller endowment a pair from the bold line, i.e. a share of the second asset but not of the first one. The example reveals, in particular, that the optimal control is not unique in general.
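A small numerical sketch of this example follows. It uses one possible parametrization consistent with the description above, namely that one firm holds fractions $\lambda_1,\lambda_2\in[0,1]$ of the two assets (the other firm holds the complements) and that the requirement of possessing at least $\delta$ asset shares is read as $\delta\le\lambda_1+\lambda_2\le 2-\delta$; this parametrization, the asset parameters, and $\delta$ are assumptions for illustration only. The sketch locates the pairs in a discretization of D with maximal ratio ${{\mu}/{\sigma}}$ and checks that they all use only the second asset, in line with the non-uniqueness discussed above.

```python
import numpy as np

# Assumed asset parameters with mu_1 < mu_2, sigma_1 < sigma_2, mu_1 + mu_2 > 0,
# and mu_1/sigma_1 < mu_2/sigma_2 (the hexagon case discussed above).
mu1, sig1 = 0.2, 0.5
mu2, sig2 = 0.8, 1.0
delta = 0.25

# Assumed parametrization: a firm holds fractions (l1, l2) of the two assets,
# subject to delta <= l1 + l2 <= 2 - delta.
grid = np.linspace(0.0, 1.0, 201)
l1, l2 = np.meshgrid(grid, grid)
feasible = (l1 + l2 >= delta) & (l1 + l2 <= 2 - delta)

mu = l1 * mu1 + l2 * mu2        # drift rate of the firm's holdings
sigma = l1 * sig1 + l2 * sig2   # diffusion rate (squared volatility) of the holdings
ratio = np.full_like(mu, -np.inf)
ratio[feasible] = mu[feasible] / sigma[feasible]

L = ratio.max()
print("maximal risk-adjusted return L:", L, " mu2/sigma2:", mu2 / sig2)

# Maximizers of mu/sigma: they hold no share of the first asset, which is why
# the optimal control in this example is not unique.
i, j = np.where(np.isclose(ratio, L))
print("asset-1 holdings among maximizers:", np.unique(l1[i, j]))
print("asset-2 holdings among maximizers:", np.unique(l2[i, j]))
```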
3. Deriving the value function
In this section we explain how one can derive a solution of the Hamilton–Jacobi–Bellman (HJB) equation associated to (2.5) and thus obtain a candidate for the value function V. Our approach is based on [Reference Grandits5], where a ruin problem for two independent Brownian motions with controllable drift is considered. In our setting this corresponds to $\underline{\sigma}=\overline{\sigma}=1$ and $\underline{\mu}>0$ .
First observe that the HJB equation associated to (2.5) is given by
with boundary conditions
We now comment on these boundary conditions for the HJB equation (3.1). Condition (3.2) is due to the fact that if one endowment process is already zero, then the joint survival probability equals zero. If the endowment process of one company tends to infinity, then this company is assumed to survive forever. The smaller process then obtains the highest possible risk-adjusted return to maximize its survival probability; this maximal survival probability is given by the right-hand side of equation (3.3) or (3.4), respectively (see e.g. [Reference Asmussen and Albrecher1, Chapter V.5, equation (5.6)]).
We first consider the case where the set of implementable drift and diffusion rate is given by the rectangle $D=[\underline{\mu}, \overline{\mu}]\times[\underline{\sigma},\overline{\sigma}]$ . In this case $L={{\overline{\mu}}/{\underline{\sigma}}}$ . Moreover, the supremum over D in (3.1) can be separated and the HJB equation is given by
on $(0,\infty)\times(0,\infty)$ .
Note that in the HJB equation (3.5) we maximize functions that are linear in $\sigma$ and in $\mu$ , respectively, over compact intervals. Hence each supremum is attained at an endpoint of the corresponding interval. More precisely,
and
In the following we make several assumptions on the solution v of the HJB equation. After obtaining the explicit formula given on the right-hand side of (2.6) we can check that all the assumptions are satisfied. Finally, one has to verify that v is indeed the value function of our problem (2.5).
We assume that v is a classical solution of the HJB equation (3.5), that is,
with boundary conditions (3.2), (3.3), and (3.4). Since our control problem (2.5) is symmetric in the initial values of the endowment processes, every candidate v for the value function should satisfy $v(x,y)=v(y,x)$ . Due to this symmetry and the monotonicity of the maximization problem (2.5), we impose that
Observe that this implies that the smaller endowment process is assigned the lowest possible volatility and the highest possible drift rate to minimize the risk that this firm is ruined. In other words, the agent chooses the maximal implementable risk-adjusted return for the company behind.
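To make this selection step explicit, here is a sketch under the assumption that the supremum in (3.5) splits into two affine maximizations with coefficients $v_x-v_y$ and $\tfrac12(v_{xx}-v_{yy})$ , and that the imposed conditions yield $v_x\geq v_y$ and $v_{xx}\leq v_{yy}$ on the part of the state space where the first company is behind:
\[
\sup_{\mu\in[\underline{\mu},\overline{\mu}]}\mu\,(v_x-v_y)=\overline{\mu}\,(v_x-v_y),
\qquad
\sup_{\sigma\in[\underline{\sigma},\overline{\sigma}]}\frac{\sigma}{2}\,(v_{xx}-v_{yy})=\frac{\underline{\sigma}}{2}\,(v_{xx}-v_{yy}).
\]
The maximizing pair is therefore $(\overline{\mu},\underline{\sigma})$ , whose risk-adjusted return is exactly $L={{\overline{\mu}}/{\underline{\sigma}}}$ .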
Using $v(x,y)=v(y,x)$ , we only focus on the set
In the interior of G it holds – under our assumptions – that v has to satisfy
with
We make the ansatz
The function $(x,y)\mapsto f(x)g(y)$ fulfills (3.6) in the interior of G. More precisely,
with
Note that we do not impose an additional assumption on $(x,y)\mapsto f(x)g(y)$ to guarantee (3.7), because the solution of (3.8) that we construct, satisfying (3.9), (3.10), and (3.11), directly implies that condition (3.7) for v is fulfilled; see (3.18) below.
Provided that $f(x)g(y)\neq 0$ for all (x, y) in the interior of G, equation (3.8) can be reformulated as
The above equation can only hold true for all (x, y) in the interior of G if
for some $\lambda\in\mathbb{R}$ .
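For orientation, here is a sketch of how the separation can look; it assumes that the linear equation on G reads $\tfrac{\underline{\sigma}}{2}v_{xx}+\tfrac{\overline{\sigma}}{2}v_{yy}+\overline{\mu}\,v_x+\underline{\mu}\,v_y=0$ , with the ansatz $v(x,y)=1-{\textrm{e}}^{-2\,L\,x}+f(x)g(y)$ that is mirrored at the bisector further below. Since $L={{\overline{\mu}}/{\underline{\sigma}}}$ , the function $x\mapsto 1-{\textrm{e}}^{-2\,L\,x}$ is annihilated by $\tfrac{\underline{\sigma}}{2}\partial_{xx}+\overline{\mu}\,\partial_x$ , so the equation reduces to
\[
\Bigl(\tfrac{\underline{\sigma}}{2}f''(x)+\overline{\mu}\,f'(x)\Bigr)g(y)
=-f(x)\Bigl(\tfrac{\overline{\sigma}}{2}g''(y)+\underline{\mu}\,g'(y)\Bigr),
\]
and dividing by $f(x)g(y)\neq 0$ separates the variables:
\[
\frac{\tfrac{\underline{\sigma}}{2}f''(x)+\overline{\mu}\,f'(x)}{f(x)}
=-\,\frac{\tfrac{\overline{\sigma}}{2}g''(y)+\underline{\mu}\,g'(y)}{g(y)}=\lambda .
\]
The left-hand side depends on x only and the middle expression on y only, so both must equal a constant $\lambda$ ; the two resulting linear second-order ODEs have two distinct real characteristic roots precisely when $-{{\overline{\mu}^2}/{(2\underline{\sigma})}}<\lambda<{{\underline{\mu}^2}/{(2\overline{\sigma})}}$ , which is the interval appearing in the case distinction below.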
First we consider the case $\boxed{L\neq 2S}$ . This case is a bit more involved than the case $L=2S$ . We assume that
which guarantees real-valued solutions to (3.12) and (3.13). Later on we have to choose $\lambda$ in an appropriate way such that the boundary condition (3.11) is fulfilled. By Theorem 1 and Theorem 5 in [Reference Coddington3, Chapter 2], we obtain
for some $C_1,C_2, C_3,C_4\in \mathbb{R}$ , where
From (3.9) we conclude that $f(0)=0$ and hence $C_2=-C_1$ . Since we are only interested in the product f(x)g(y), we can assume that $C_1=1$ without loss of generality. Note that for $\lambda \in (-{{\overline{\mu}^2}/{(2\underline{\sigma})}},0]$ condition (3.10) yields $C_3=0$ . Unfortunately, for $\lambda\in(0,{{\underline{\mu}^2}/{(2\overline{\sigma})}})$ this does not hold true. Nevertheless, we set $C_3=0$ and hope to obtain a solution. In addition, condition (3.11) on the diagonal results in
which has to be satisfied for all $t\in(0,\infty)$ . Therefore it is necessary that the exponent of one summand coincides with $-2\,L\,t$ . This directly implies that the coefficient of the other summand vanishes. More precisely, we determine $\lambda$ such that
or
Some standard but lengthy computations show that
is the unique
satisfying either (3.15) or (3.16). More precisely, if $L<2S $ then (3.15) holds, and (3.16) is fulfilled if $L>2S$ .
For $\lambda=\lambda^*$ and $L<2S$ equation (3.14) is given by
Thus
Similarly, for $L>2\,S$ we conclude that
To sum up, we have
Now we use the symmetry of our problem and obtain v on $[0,\infty)\times[0,\infty)$ by mirroring $1-{\textrm{e}}^{-2\,L\,x}+f(x)g(y)$ , where $(x,y)\mapsto f(x)g(y)$ is given by (3.17), at the line bisecting the first quadrant; this yields that v is given by the right-hand side of (2.6).
It remains to check that the assumptions made on v are satisfied. Indeed, it holds that $f(x)g(y)\neq 0$ for all $x,y\in(0,\infty)$ , the function given on the right-hand side of (2.6) is in $C^2((0,\infty)\times(0,\infty))\cap C( [0,\infty)\times[0,\infty))$ , and
Hence all assumptions made on v are satisfied.
For the case $\boxed{L=2S}$ we also use
which in this case simplifies to $\lambda^*=-S\,\overline{\mu}$ . Then the solutions of (3.12) and (3.13) are given by
for some constants $C_1,C_2,C_3,C_4\in\mathbb{R}$ ; again see Theorems 1 and 5 in [Reference Coddington3, Chapter 2]. Using (3.9) we conclude that $C_1=0$ and (3.10) implies that $C_3=0$ . Thus
Using (3.11) results in $C_2C_4=-2\,L$ , and mirroring at the line bisecting the first quadrant yields
Finally, one can check that the function in (3.19) satisfies all our assumptions made on v.
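In hindsight, the special role of the case $L=2S$ is easy to see: since $L={{\overline{\mu}}/{\underline{\sigma}}}$ in the rectangle case, we have $-{{\overline{\mu}^2}/{(2\underline{\sigma})}}=-\tfrac{\overline{\mu}}{2}\cdot\tfrac{\overline{\mu}}{\underline{\sigma}}=-\tfrac{L}{2}\,\overline{\mu}$ , which equals $-S\,\overline{\mu}=\lambda^*$ exactly when $L=2S$ . Thus $\lambda^*$ then sits at the boundary point of the interval considered in the case $L\neq 2S$ , i.e. at the double root of the corresponding characteristic equation, which is presumably why the solutions of (3.12) and (3.13) involve a polynomial factor in front of the exponential instead of two distinct exponentials.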
In the next step we explain how to obtain a solution of the HJB equation (3.1) if D is a proper subset of $[\underline{\mu}, \overline{\mu}]\times[\underline{\sigma}, \overline{\sigma}]$ . As a candidate v for the value function we choose the function on the right-hand side of (2.6), which we derived in the case where D is a rectangle, and adjust the maximal risk-adjusted return L to $L=\sup_{(\mu,\sigma)\in D }{{\mu}/{\sigma}}$ . Recall that the risk-adjusted return of the aggregate endowment process equals S and does not have to be changed. Now we want to show that our candidate solves the HJB equation (3.1). To this end, observe that for $(\mu,\sigma)\in D$ we have
Since D satisfies (2.2), we have
Hence, for all $x,y\in(0,\infty)$ ,
For simplicity we assume that L is attained in D, say by $(\hat\mu, \hat\sigma)$ . Then $L={{\hat\mu}/{\hat\sigma}}$ and (3.20) equals zero for $(\hat\mu,\hat\sigma)$ if $x<y$ . If $x>y$ then (3.20) is zero for $(M-\hat\mu, \Sigma-\hat\sigma)$ . Therefore the HJB equation (3.1) is fulfilled and v is a candidate for our value function.
Now it remains to verify that the right-hand side of (2.6) is indeed the value function of the optimal control problem (2.5), i.e. to prove Theorem 2.1.
4. Proofs
First, we prove our main result, Theorem 2.1.
Proof of Theorem 2.1. Let v denote the function given by the right-hand side of (2.6). We first show that v is an upper bound for the joint survival probability. For this purpose let $(\mu, \sigma)\in \mathcal{M}$ be an arbitrary admissible control for the drift and diffusion rate. Denote the ruin time of the controlled process $(X_t,Y_t)=\bigl(X^{x,\mu,\sigma}_t, Y^{y,\mu,\sigma}_t\bigr)$ by
Since $v\in C^2((0,\infty)\times(0,\infty))$ , Itô’s formula implies
Since v solves the HJB equation (3.1), the drift part in (4.1) is non-positive. Hence $(v(X_t,Y_t))_{t\in[0,\infty)}$ and thus $(v(X_{t\wedge \tau},Y_{t\wedge \tau}))_{t\in[0,\infty)}$ are local supermartingales. Moreover, since v is bounded, $(v(X_{t\wedge \tau},Y_{t\wedge \tau}))_{t\in[0,\infty)}$ is a uniformly integrable supermartingale. Therefore the supermartingale convergence theorem yields that
exists $\mathbb{P}$ -a.s. By dominated convergence we conclude that
On $\{\tau<\infty\}$ the boundary conditions (3.2) imply that $v(X_\tau, Y_\tau)=0$ . We claim that on $\{\tau=\infty\}$ we have $\lim_{t\to\infty}v(X_{t},Y_{t})=1$ .
In order to show this, first observe that
where W is a Brownian motion. Thus we know that $\lim_{t \to \infty} (X+Y)_t= \infty$ , $\mathbb{P}$ -a.s. Moreover, the supermartingale convergence theorem guarantees that on $\{\tau = \infty\}$ ,
exists $\mathbb{P}$ -a.s. Combining this with the particular form of v yields that on $\{\tau=\infty\}$ ,
exists, so $\lim_{t \to \infty} \min\{X_t,Y_t\} \in \mathbb{R} \cup \{+\infty\}$ exists $\mathbb{P}$ -a.s. We now show that
By (4.3) and the identity
it follows that on $\{\lim_{t\to\infty}\min\{X_t,Y_t\}<\infty\}$ we have $\lim_{t \to \infty} |X_t-Y_t| =\infty$ . Hence there exists a time point $t_0=t_0(\omega)$ beyond which the paths of X and Y do not intersect. Since the paths are continuous this implies that
Thus we have
Now, to show that $\mathbb{P}[\lim_{t\to\infty} X_t<\infty]=0$ , recall that
Let $A(t) := \int_{0}^{t} \sigma(X_s,Y_s)\,{\textrm{d}} s$ . Notice that A(t) is strictly increasing, so we can introduce the time-changed process
Note that
is a Brownian motion since
Further, a simple substitution in the deterministic integral yields
We know that $({{\mu}/{\sigma}})(X_s,Y_s) \leq L$ for all s by the definition of L. For the Brownian motion B it is well known that
This directly implies
Consequently, $\mathbb{P}[\!\lim_{t \to \infty} \widetilde{X}_t < \infty]=0$ . Moreover, $\underline{\sigma}>0 $ yields $\lim_{t\to\infty} A(t)= \infty$ . Thus
Similarly, one can show that, with probability one, Y does not converge. Hence we see that $ \mathbb{P}[\lim_{t \to \infty} \min\{X_t,Y_t\}< \infty] = 0.$ Therefore it follows that on $\{\tau= \infty\}$ we have
and the particular form of v implies that
Thus, plugging (4.4) into (4.2), we see that
and hence $v\geq V$ .
Now assume that L is attained in D. Then the strategy $(\mu^*, \sigma^*)$ given in (2.7) is admissible; for $(\mu^*, \sigma^*)$ the drift part in (4.1) vanishes, and therefore the process $(v(X_{t\wedge \tau},Y_{t\wedge \tau}))_{t\in[0,\infty)}$ is a uniformly integrable martingale. Hence equality holds in (4.5), which implies that v is the value function of the optimal control problem (2.5) and $(\mu^*, \sigma^*)$ is an optimal control.
So far we have shown that the value function V is given by the right-hand side of (2.6) if L is attained in D. Now we consider the case where L is not attained in D, i.e. $\textrm{arg max}_{(\mu,\sigma)\in D}\, {{\mu}/{\sigma}}=\emptyset$ . Then there exists a sequence $(\mu_n, \sigma_n)_{n\in\mathbb{N}}\subseteq D$ with $L_n\;:\!=\; {{\mu_n}/{\sigma_n}}\nearrow L$ as $n\to\infty$ . Without loss of generality we can assume that $L_n\geq S$ (see Remark 2.4) and that
because $D\subseteq[\underline{\mu},\overline{\mu}]\times[\underline{\sigma},\overline{\sigma}]$ . In particular, we have ${{\tilde\mu}/{\tilde\sigma}}=L$ . Let
Then $\widetilde{D}\subseteq [\underline{\mu},\overline{\mu}]\times[\underline{\sigma},\overline{\sigma}]$ , $\widetilde{D}$ satisfies (2.2) and
where we use $L\geq S={{M}/{\Sigma}} $ by Remark 2.4 and thus ${{(M-\tilde\mu)}/{(\Sigma-\tilde\sigma)}}\leq S\leq L$ . In particular, $(\tilde\mu,\tilde\sigma)\in \textrm{arg max}_{(\mu,\sigma)\in \widetilde{D}}\,{{\mu}/{\sigma}}$ . Hence the value function $V^{L(\widetilde D)}$ for maximizing the joint survival probability over controls taking values in $\widetilde{D}$ is given by (2.6) with $L(\widetilde{D})=L$ . Moreover, $V\leq V^{L(\widetilde D)}=V^L$ .
To derive a lower bound for V, let
By definition $D_n$ satisfies (2.2). Since $(\mu_n,\sigma_n)\in D$ , it holds that $D_n\subseteq D$ . Moreover,
since ${{\mu_n}/{\sigma_n}}=L_n\geq S$ and therefore
In particular, we have $(\mu_n,\sigma_n)\in\textrm{arg max}_{(\mu,\sigma)\in D_n}{{\mu}/{\sigma}}$ . Hence the value function $V^{L_n}$ of (2.5) for controls taking values in $D_n$ is given by (2.6) and $V^{L_n}\leq V$ . Since the function on the right-hand side of (2.6) is continuous in the parameter L, we conclude that for all $x,y\in [0,\infty)$
Therefore, in this case the value function is given by (2.6), too.
Finally, we prove Corollary 2.1.
Proof of Corollary 2.1. We only show that R is non-increasing. The other results follow by straightforward calculations.
Since R is symmetric, we only need to consider the part of the domain where $x\leq y$ . Moreover, we only consider the case $L>2S$ . The cases $L< 2S$ and $L = 2S$ can be proved similarly.
One can show that ${{\partial R}/{\partial x}}$ is non-positive if and only if
Since $L \geq S$ , one can show by using convexity that ${\textrm{e}}^{2\, S\, x} L - {\textrm{e}}^{2\, L\,x } S - (L- S) \leq 0$ . Thus the left-hand side of (4.6) is non-increasing in y. Hence (4.6) is fulfilled for all $y \geq x$ if and only if it is fulfilled for $y=x$ . Thus we need to verify that
Inequality (4.7) is satisfied if and only if the term in the square bracket on the left-hand side is non-positive, which is equivalent to
and hence equivalent to
Inequality (4.8) holds true, because $z \mapsto {{\sinh\!(z)}/{z}}$ is strictly increasing for $z \geq 0$ . To sum up, we have shown that (4.6) is satisfied and thus ${{\partial R }/{\partial x}}$ is non-positive.
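The monotonicity used in this step can be verified directly: for $z>0$ ,
\[
\frac{{\textrm{d}}}{{\textrm{d}} z}\,\frac{\sinh\!(z)}{z}=\frac{z\cosh\!(z)-\sinh\!(z)}{z^{2}}>0,
\]
because $\tanh\!(z)<z$ , i.e. $\sinh\!(z)<z\cosh\!(z)$ , for every $z>0$ .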
The partial derivative ${{\partial R}/{\partial y}}$ can be shown to be non-positive if and only if
The left-hand side coincides with the bracketed term in (4.7) and is thus non-positive.
Acknowledgement
We thank anonymous referees for carefully reading the manuscript and highly appreciate their comments, which helped to improve our paper.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare during the preparation or publication of this article.