
Ruin probabilities in a Markovian shot-noise environment

Published online by Cambridge University Press:  11 November 2022

Simon Pojer*
Affiliation:
Graz University of Technology
Stefan Thonhauser*
Affiliation:
Graz University of Technology
Postal address: Institute of Statistics, University of Technology Graz, Kopernikusgasse 24/III, 8010 Graz, Austria.

Abstract

We consider a risk model with a counting process whose intensity is a Markovian shot-noise process, to resolve one of the disadvantages of the Cramér–Lundberg model, namely the constant intensity of the Poisson process. Due to this structure, we can apply the theory of piecewise deterministic Markov processes to a multivariate process containing the intensity and the reserve process, which allows us to identify a family of martingales. Eventually, we use change of measure techniques to derive an upper bound for the ruin probability in this model. Exploiting a recurrent structure of the shot-noise process, even the asymptotic behaviour of the ruin probability can be determined.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The theory of doubly stochastic Poisson processes described in [Reference Brémaud3] allows the generalization of the well-known Cramér–Lundberg model to the broad class of Cox models, which are discussed, e.g., in [Reference Grandell8]. Members of this family are, for example, the Markov-modulated risk model, where the intensity is modelled by a continuous-time Markov chain ([Reference Asmussen and Albrecher2, Chapter VII] and [Reference Rolski, Schmidli, Schmidt and Teugels13, Chapter 8]), the Björk–Grandell model considered in [Reference Schmidli14], and diffusion-driven models studied in [Reference Grandell and Schmidli9].

In particular, arrivals of claims caused by catastrophic events can be realistically modelled using shot-noise intensity. This was done in [Reference Albrecher and Asmussen1, Reference Dassios, Jang and Zhao6, Reference Macci and Torrisi10], where the asymptotic behaviour of the ruin probability in general shot-noise environments was studied. In these settings, upper and lower bounds could be derived. The idea of applying the theory of piecewise deterministic Markov processes to a Cox model with Markovian shot-noise intensity was used in [Reference Dassios and Jang4, Reference Dassios and Jang5] in the context of pricing reinsurance contracts.

Interested in the behaviour of the ruin probability in this model, we follow the piecewise deterministic Markov process (PDMP) approach to find suitable alternative probability measures. Further, we take advantage of the properties of the process under these measures to obtain an exponentially decreasing upper bound. Exploiting a recurrent behaviour of the shot-noise process and applying the extended renewal theory obtained in [Reference Schmidli14], we eventually derive the exact asymptotic behaviour of the ruin probability.

2. The Markovian shot-noise ruin model

We assume for the rest of this paper the existence of a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ which is big enough to contain all the mentioned stochastic processes and random variables. For some stochastic process Z we denote the right-continuous natural filtration by $\big\lbrace\mathcal{F}^Z_t\big\rbrace_{t \geq 0}$ . For the shot-noise environment we consider the following four objects: a Poisson process $N^\lambda$ with constant intensity $\rho>0$ and jump times $\big\lbrace T^\lambda_i\big\rbrace_{i \in \mathbb{N}}$ , a sequence $\lbrace Y_i\rbrace_{i \in \mathbb{N}}$ of positive independent and identically distributed (i.i.d.) random variables with distribution function $F_Y$ , a non-negative function w, and a positive starting value $\lambda_0$ . With these components we define the multiplicative shot-noise process by $\lambda_t\,:\!=\, \lambda_0w(t) + \sum_{i=1}^{N^\lambda_t}Y_i w(t-T^\lambda_i)$ . Since we want to exploit the theory of PDMPs, it would be preferable if the process $\lambda$ satisfies the Markov property. This is equivalent to the existence of some $\delta>0$ such that $w(t) = \text{e}^{-\delta t}$ . Due to this, we define the Markovian shot-noise process in the following way.

Definition 1. Let $N^\lambda$ be a Poisson process with intensity $\rho >0$ and jump times $\big\lbrace T^\lambda_i\big\rbrace_{i \in \mathbb{N}}$ , $\lbrace Y_i \rbrace _{i \in \mathbb{N}}$ i.i.d. copies of a positive random variable Y with distribution function $F_Y$ and independent of the process $N^\lambda$ , $\lambda_0>0$ , and $\delta >0$ constant. Then, we define the Markovian shot-noise process by $\lambda_t = \lambda_0\text{e}^{-\delta t} + \sum_{i=1}^{N^\lambda_t}Y_i \text{e}^{-\delta(t-T^\lambda_i)}$ .
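For intuition, a path of this process can be simulated directly from the defining sum; the parameter values and the Exp(1) shock law below are arbitrary illustrative choices, not taken from the paper.

```python
import math
import random

def shot_noise_path(lambda0, rho, delta, sample_Y, T, rng):
    """Return jump times, shocks, and an evaluator for
    lambda_t = lambda0*e^(-delta*t) + sum_{T_i <= t} Y_i*e^(-delta*(t-T_i))."""
    jump_times, t = [], 0.0
    while True:
        t += rng.expovariate(rho)      # inter-arrival times of N^lambda
        if t > T:
            break
        jump_times.append(t)
    shocks = [sample_Y(rng) for _ in jump_times]

    def lam(s):
        val = lambda0 * math.exp(-delta * s)
        for ti, yi in zip(jump_times, shocks):
            if ti <= s:
                val += yi * math.exp(-delta * (s - ti))
        return val

    return jump_times, shocks, lam

rng = random.Random(1)
jump_times, shocks, lam = shot_noise_path(
    lambda0=2.0, rho=1.0, delta=0.5,
    sample_Y=lambda r: r.expovariate(1.0), T=10.0, rng=rng)
print(lam(0.0))   # starts at lambda0 = 2.0
```

Between jumps the path decays exponentially at rate $\delta$; at each jump time of $N^\lambda$ it moves up by the corresponding shock.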

As shown in [Reference Dassios and Jang5], the Markovian shot-noise process is a piecewise deterministic Markov process with generator

\begin{equation*} \mathcal{A}^\lambda f(\lambda) = - \delta \lambda \frac{\partial f(\lambda)}{\partial \lambda} + \rho \int_0^\infty \left(f(\lambda + y) - f(\lambda)\right) \, F_Y(\text{d}y). \end{equation*}

Further information about PDMPs can be found in [Reference Davis7] or [Reference Rolski, Schmidli, Schmidt and Teugels13, Chapter 11]. To fully specify our model we will now define the surplus process.

Definition 2. Let $\lambda$ be a Markovian shot-noise process, N a Cox process with intensity $\lambda$ , and $\lbrace U_i\rbrace_{i \in \mathbb{N}}$ a sequence of i.i.d. copies of a positive random variable U with continuous distribution $F_U$ , which are independent of N and $\lambda$ . For some initial capital u and constant premium rate $c>0$ we define the surplus process by $X_t= u + ct - \sum_{i=1}^{N_t} U_i$ .
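The surplus of Definition 2 can be simulated by thinning: between its own jumps the intensity $\lambda$ only decays, so its value at the start of each inter-jump segment dominates it there and serves as the thinning bound. All numerical values below are illustrative choices for this sketch.

```python
import math
import random

def surplus_path(u, c, lambda0, rho, delta, sample_Y, sample_U, T, rng):
    """Simulate X_T = u + c*T - sum of claims, where claims arrive as a Cox
    process with Markovian shot-noise intensity.  Thinning works segment by
    segment because lambda decays between its own jumps."""
    lam_jumps, t = [], 0.0
    while True:
        t += rng.expovariate(rho)
        if t > T:
            break
        lam_jumps.append(t)
    shocks = [sample_Y(rng) for _ in lam_jumps]

    def lam(s):
        v = lambda0 * math.exp(-delta * s)
        for ti, yi in zip(lam_jumps, shocks):
            if ti <= s:
                v += yi * math.exp(-delta * (s - ti))
        return v

    claim_times = []
    for a, b in zip([0.0] + lam_jumps, lam_jumps + [T]):
        m, s = lam(a), a                 # m bounds lambda on [a, b)
        while True:
            s += rng.expovariate(m)
            if s >= b:
                break
            if rng.random() <= lam(s) / m:
                claim_times.append(s)    # accepted arrival of the Cox process
    claims = [sample_U(rng) for _ in claim_times]
    return claim_times, claims, u + c * T - sum(claims)

rng = random.Random(2)
claim_times, claims, X_T = surplus_path(
    u=10.0, c=1.5, lambda0=2.0, rho=1.0, delta=0.5,
    sample_Y=lambda r: r.expovariate(1.0),
    sample_U=lambda r: r.expovariate(2.0), T=20.0, rng=rng)
print(X_T)
```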

Now define $\mathcal{F}_t\,:\!=\, \mathcal{F}^X_t \vee \mathcal{F}^\lambda_t$ ; hence, $\lbrace\mathcal{F}_t\rbrace_{t \geq 0}$ is the combined filtration of the Markovian shot-noise process and the surplus process. Unless stated otherwise, we will from now on consider the filtered probability space $\big(\Omega, \mathcal{F}, \left\lbrace \mathcal{F}_t\right\rbrace_{t \geq 0}, \mathbb{P}_{(u,\lambda_0)}\big)$ , where we define the measure $\mathbb{P}_{(u,\lambda_0)}$ as the measure $\mathbb{P}$ under the conditions that the initial capital of the surplus process is u and the starting intensity is $\lambda_0$ . We will denote the expectation of a random variable Z under this measure by $\mathbb{E}_{(u,\lambda_0)}[Z]$ , or $\mathbb{E}[Z]$ if Z is independent of the initial values.

The multivariate process $(X, \lambda, \cdot) \,:\!=\, ((X_t,\lambda_t,t))_{t \geq 0}$ is a càdlàg PDMP without active boundary and with generator

\begin{align*} \mathcal{A}f(x,\lambda,t) & = c\frac{\partial f(x,\lambda, t)}{\partial x} - \delta \lambda\frac{\partial f(x,\lambda, t)}{\partial \lambda} + \frac{\partial f(x,\lambda, t)}{\partial t}\\[7pt] & \quad + \lambda \int_0^\infty (f(x-u,\lambda,t)-f(x,\lambda,t)) \, F_U(\text{d}u) \\ & \quad + \rho \int_0^\infty (f(x,\lambda+y,t)-f(x,\lambda,t))\, F_Y(\text{d}y).\end{align*}

Its domain consists of all functions f which are absolutely continuous and satisfy the integrability condition

\begin{equation*}\mathbb{E}_{(u,\lambda_0)} \Bigg[ \sum_{i=1}^{\tilde N_t} \vert f(X_{T_i}, \lambda_{T_i}, T_i) - f(X_{T_i-}, \lambda_{T_i-}, T_i-)\vert \Bigg] < \infty\end{equation*}

for all $t\geq 0$ , where $\tilde N$ denotes the process counting the random jumps of the PDMP $(X,\lambda,\cdot)$ . Similar to the Cramér–Lundberg model, we want to state a net profit condition, which is necessary to ensure that ruin does not occur with probability 1.

Lemma 1. The surplus process satisfies

\begin{equation*} \lim_{t \to \infty} \frac{\mathbb{E}_{(u,\lambda_0)}[X_t]}{t} = c- \frac{\rho}{\delta} \mathbb{E} [U]\mathbb{E}[Y].\end{equation*}

Proof. The function $\bar f(x,\lambda,t)\,:\!=\, x$ is in the domain of the generator. Consequently,

\begin{align*} \mathbb{E}_{(u,\lambda_0)}[X_t] = u + \mathbb{E}_{(u,\lambda_0)}\bigg[\int_0^t \mathcal{A}\bar f(X_s,\lambda_s,s) \, \text{d}s\bigg] = u + ct - \mathbb{E}_{(u,\lambda_0)} \bigg[ \int_0^t \lambda_s \mathbb{E}\left[U\right] \, \text{d}s\bigg]. \end{align*}

The process $\lambda$ is positive, so we can apply Tonelli's theorem to interchange expectation and integration, which leads to

(1) \begin{align} \mathbb{E}_{(u,\lambda_0)}[X_t] = u + ct- \mathbb{E}[U]\int_0^t \mathbb{E}_{(u,\lambda_0)} [\lambda_s] \, \text{d}s. \end{align}

Now we use the same procedure to obtain an equation for $\mathbb{E}_{(u,\lambda_0)}[\lambda_s]$ . Defining the function $\tilde f(x,\lambda,t)\,:\!=\, \lambda$ we get

\begin{align*} \mathbb{E}_{(u,\lambda_0)}[\lambda_s] = \lambda_0 - \delta \int_0^s \mathbb{E}_{(u,\lambda_0)}[\lambda_v]\, \text{d}v + \rho s \mathbb{E}[Y]. \end{align*}

Differentiating both sides with respect to s gives us that $\mathbb{E}_{(u,\lambda_0)}[\lambda_s]$ is the solution to the differential equation $g^{\prime}(s) = -\delta g(s) + \rho \, \mathbb{E}[Y]$ , with initial value $g(0)=\lambda_0$ . The solution of the ordinary differential equation is

(2) \begin{align} \mathbb{E}_{(u,\lambda_0)}[\lambda_s] = \lambda_0\text{e}^{-\delta s} + \frac{\rho}{\delta} \mathbb{E}[Y] (1-\text{e}^{-\delta s}). \end{align}

Using (2) in (1) leads to

\begin{align*} \mathbb{E}_{(u,\lambda_0)}[X_t] = u + ct - \mathbb{E}[U]\frac{\rho}{\delta} \mathbb{E}[Y] t - \mathbb{E}[U] \bigg(\frac{\lambda_0}{\delta} - \frac{\rho}{\delta^2} \mathbb{E}[Y] \bigg)(1-\text{e}^{-\delta t}). \end{align*}

Dividing by t and letting t tend to infinity, we obtain

\begin{equation*} \lim_{t \to \infty} \frac{\mathbb{E}_{(u,\lambda_0)}[X_t]}{t} = c - \frac{\rho}{\delta} \mathbb{E}[U]\mathbb{E}[Y]. \end{equation*}
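The closed form (2) for $\mathbb{E}_{(u,\lambda_0)}[\lambda_s]$, and hence the drift above, is easy to validate by Monte Carlo; the parameter values and Exp(1) shocks below are arbitrary choices for this sketch.

```python
import math
import random

def lambda_sample(s, lambda0, rho, delta, rng):
    """One draw of lambda_s, with Exp(1) shocks (an illustrative choice)."""
    val, t = lambda0 * math.exp(-delta * s), 0.0
    while True:
        t += rng.expovariate(rho)          # next jump time of N^lambda
        if t > s:
            return val
        val += rng.expovariate(1.0) * math.exp(-delta * (s - t))

rng = random.Random(42)
lambda0, rho, delta, s, n = 5.0, 1.0, 0.5, 3.0, 100_000
mc = sum(lambda_sample(s, lambda0, rho, delta, rng) for _ in range(n)) / n
# formula (2), with E[Y] = 1 for Exp(1) shocks
exact = lambda0 * math.exp(-delta * s) + (rho / delta) * 1.0 * (1 - math.exp(-delta * s))
print(mc, exact)   # the Monte Carlo mean should match (2) to about two decimals
```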

Motivated by this result, we make the following assumption.

Assumption 1. From now on we assume that the net profit condition $c > ({\rho}/{\delta}) \mathbb{E}[U]\mathbb{E}[Y]$ is satisfied.

3. Martingales and change of measure

To obtain the asymptotic behaviour of the ruin probability in this model, we want to exploit the following result derived in [Reference Schmidli14].

Theorem 1. [Reference Schmidli14, Theorem 2] Assume that z(u) is directly Riemann integrable, that $0\leq p(u,x) \leq 1$ is continuous in u, and that $\int_0^u p(u,y) \, B(\text{d}y)$ is directly Riemann integrable. Denote by Z(u) the solution to $Z(u) = \int_0^u Z(u-y)(1-p(u,y)) \, B(\text{d}y) + z(u)$ , which is bounded on bounded intervals. Then, the limit $\lim_{u \to \infty} Z(u)$ exists and is finite provided B(u) is not arithmetic. If B(u) is arithmetic with span $\gamma$ , then $\lim_{n \to \infty} Z(x+n\gamma)$ exists and is finite for all x fixed.

Unfortunately, we cannot apply this theorem directly to our model because of two problems. The first issue is that the ruin probability depends on the initial intensity level $\lambda_0$ . To bypass this, we have to choose appropriate renewal times such that $\lambda$ always has the same level, which we will do in Section 4. The second problem is that suitable choices of B are defective under the original measure $\mathbb{P}_{(u,\lambda_0)}$ . This is a common issue and can be solved through change of measure techniques.

To do so we have to find martingales of the form $M_t=h(X_t, \lambda_t,t)$. Our ansatz is a function of the form $h(x,\lambda, t) \,:\!=\, \beta \exp\!({-}\theta(r) t - \alpha(r) \lambda - r x)$. To motivate the explicit choice of our parameters, let us assume that h is in the domain of the generator and apply $\mathcal{A}$ to h. This gives us

\begin{align*} \mathcal{A}h(x,\lambda,t) = & -\theta h(x,\lambda,t) - crh(x,\lambda,t) + \delta \lambda \alpha h(x,\lambda,t) \\& + \lambda h(x,\lambda,t) \int_0^\infty (\text{e}^{ru} - 1) \, F_U(\text{d}u) + \rho h(x,\lambda,t) \int_0^\infty (\text{e}^{-\alpha y}-1)\, F_Y(\text{d}y)\overset{!}{=} 0.\end{align*}

Since h is strictly positive, we can reformulate the equation to $\delta \lambda \alpha -cr-\theta +\lambda(M_U(r)-1) + \rho (M_Y({-}\alpha)-1) = 0$ . Here, $M_U(s)$ and $M_Y(s)$ denote the moment-generating functions of the random variables U and Y, which we assume to be finite. This equation has to hold for any $\lambda >0$ ; hence, this is equivalent to

\begin{equation*} \delta \alpha + M_U(r)-1 =0, \qquad -cr-\theta + \rho (M_Y({-}\alpha)-1) =0.\end{equation*}

Solving these equations for some fixed r, we get the unique solutions

\begin{equation*}\alpha(r) = \frac{1-M_U(r)}{\delta}, \qquad \theta(r) = -cr + \rho \bigg(M_Y\bigg(\frac{M_U(r)-1}{\delta}\bigg)-1\bigg).\end{equation*}

Now we still have to show that, for this explicit choice of the parameters, the process $h(X,\lambda,\cdot)$ is indeed a martingale.

Lemma 2. Let r be constant such that $M_U(r)$ is finite, and define $\alpha(r)\,:\!=\, ({1-M_U(r)})/{\delta}$ . Assume further that $M_Y({-}\alpha(r))$ is finite. If $\theta (r) \,:\!=\, -cr + \rho (M_Y({-}\alpha(r))-1)$ and $\beta = \exp\!(ru+\alpha(r)\lambda_0)$ , then $h(X_t,\lambda_t,t)$ is integrable and has expectation 1 for all $t\geq 0$ .

Proof. The expectation can be rewritten as

\begin{multline*} \mathbb{E}_{(u,\lambda_0)}[ \beta \exp\!({-}rX_t -\alpha(r)\lambda_t -\theta(r)t )] \\ = \exp\!({-}rct -\theta(r) t + \alpha(r) \lambda_0) \mathbb{E}_{(u,\lambda_0)}\Bigg[\!\exp\!\Bigg(r \sum_{i=1}^{N_t} U_i - \alpha(r) \lambda_t\Bigg)\Bigg]. \end{multline*}

Conditioned on $\mathcal{F}^\lambda_t$ , the counting process N is an inhomogeneous Poisson process and, as shown in [Reference Albrecher and Asmussen1], its integrated compensator has the form

\begin{equation*} \Lambda_t = \int_0^t \lambda_s \, \text{d}s = \frac{1}{\delta}\Bigg(\lambda_0 + \sum_{j=1}^{N^\lambda_t} Y_j - \lambda_t \Bigg).\end{equation*}

Using this, we get

\begin{align*} \exp\!({-}rct - & \theta(r) t + \alpha(r) \lambda_0)\mathbb{E}_{(u,\lambda_0)}\Bigg[\!\exp\!\Bigg(r \sum_{i=1}^{N_t} U_i - \alpha(r) \lambda_t\Bigg)\Bigg] \\ & = \exp\!({-}rct -\theta(r) t + \alpha(r) \lambda_0)\mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!(( M_U(r)-1)\Lambda_t - \alpha(r) \lambda_t)\big] \\ & = \exp\!({-}rct -\theta(r) t)\mathbb{E}\Bigg[\!\exp\!\Bigg({-}\alpha(r) \sum_{j=1}^{N^\lambda_t} Y_j\Bigg)\Bigg]. \end{align*}

The process $\sum_{j=1}^{N^\lambda_t} Y_j$ is a compound Poisson process, whose moment-generating function is $\exp\!(\rho t (M_Y({-}\alpha(r))-1))$. By this and the definition of $\theta(r)$ we get that $h(X_t,\lambda_t,t)$ has expectation 1.

This result leads immediately to the following theorem.

Theorem 2. Under the assumptions of Lemma 2, the process $M^r_t\,:\!=\,h(X_t,\lambda_t,t)$ is a martingale with expectation 1.

Proof. By Lemma 2, the process is integrable and has constant expectation 1. Consequently, we just have to show that, for all $t>s$ , $\mathbb{E}_{(u,\lambda_0)}[h(X_t,\lambda_t,t) \mid \mathcal{F}_s] = h(X_s,\lambda_s,s)$ . The function h is strictly positive for all values x, $\lambda$ , and t, and hence we can simply expand the conditional expectation above by ${h(X_s,\lambda_s,s)}/{h(X_s,\lambda_s,s)}$ . Consequently, we get

\begin{align*} \mathbb{E}_{(u,\lambda_0)}[h&(X_t,\lambda_t,t) \mid \mathcal{F}_s ] = h(X_s,\lambda_s,s) \mathbb{E}_{(u,\lambda_0)}\bigg[\frac{ h(X_t,\lambda_t,t) }{h(X_s,\lambda_s,s)} \, \big| \, \mathcal{F}_s \bigg] \\ & = h(X_s,\lambda_s,s) \mathbb{E}_{(u,\lambda_0)}[\!\exp\!({-}\theta(r) (t-s) -r(X_t-X_s) - \alpha(r) (\lambda_t-\lambda_s) ) \mid \mathcal{F}_s ] \\ & = h(X_s,\lambda_s,s) \mathbb{E}_{(X_s, \lambda_s)}[ h(X_{t-s}, \lambda_{t-s}, t-s) ] = h(X_s,\lambda_s,s) . \end{align*}

Using these martingales, we can define a family of measures $\mathbb{Q}^{(r)}$ such that

\begin{equation*}\frac{\text{d}\mathbb{Q}^{(r)}}{\text{d}\mathbb{P}_{(u,\lambda_0)}}\bigg\vert_{\mathcal{F}_t} = M^{r}_t.\end{equation*}

The exponential form of the change of measure allows us to exploit the results shown in [Reference Palmowski and Rolski12] to derive the behaviour of the combined process under the new measures $\mathbb{Q}^{(r)}$ .

Lemma 3. Let r be such that $M^r$ is well defined. Then, under the measure $\mathbb{Q}^{(r)}$ , the process $(X, \lambda, \cdot)$ is again a PDMP with generator

\begin{align*} \mathcal{A}^{(r)} f(x,\lambda,t) & = c \frac{\partial f(x,\lambda,t)}{\partial x} - \delta \lambda \frac{\partial f(x,\lambda,t)}{\partial \lambda} + \frac{\partial f(x,\lambda,t)}{\partial t} \\ & \quad + \lambda \int_0^\infty \! (f(x-u,\lambda,t) -f(x,\lambda,t)) \text{e}^{ru} \, F_U(\text{d}u) \\ & \quad + \rho \int_0^\infty \! (f(x, \lambda+y, t) - f(x,\lambda,t) ) \text{e}^{-\alpha(r) y} \, F_Y(\text{d}y). \end{align*}

So far, we have found a new family of measures but we have to identify a measure that fits our needs. Motivated by the definition of the adjustment coefficient in the classical model, we consider the function $\theta(r) = -cr + \rho (M_Y({-}\alpha(r)) -1 )$ .

Lemma 4. The function $\theta(r)$ is convex on $\lbrace r \mid M_U(r) < \infty, \,M_Y({-}\alpha(r)) < \infty\rbrace$ and satisfies $\theta(0)=0$ .

Proof. To show convexity we use the fact that moment-generating functions are log-convex, and therefore convex. Moreover, they are twice differentiable. Consequently, $\theta$ is twice differentiable too and its derivatives are

\begin{align*} \theta^{\prime}(r) & = -c + \frac{\rho}{\delta} M^{\prime}_Y\bigg(\frac{M_U(r)-1}{\delta}\bigg) M^{\prime}_U(r) , \\ \theta^{\prime\prime}(r) & = \frac{\rho}{\delta^2} M^{\prime\prime}_Y\bigg(\frac{M_U(r)-1}{\delta}\bigg) M^{\prime}_U(r)^2 + \frac{\rho}{\delta} M^{\prime}_Y\bigg(\frac{M_U(r)-1}{\delta}\bigg)M^{\prime\prime}_U(r). \end{align*}

By the convexity of the moment-generating functions, we know that their second derivatives are non-negative. To ensure that $\theta$ is convex, we have to check whether the first derivative of the moment-generating function of Y is non-negative too. Equivalently, we show that the moment-generating function of Y is monotone increasing. Now let $r>s$ ; then $\mathbb{E}[\text{e}^{rY}] = \mathbb{E}[\text{e}^{sY} \text{e}^{(r-s)Y}]$ . The random variable Y is almost surely positive, and $r-s$ is positive too. Hence, $\text{e}^{(r-s)Y} > 1$ almost surely. This gives us $M_Y(r) = \mathbb{E}[\text{e}^{sY} \text{e}^{(r-s)Y}] > \mathbb{E}[\text{e}^{sY}] = M_Y(s)$ . Consequently, the first derivative of $M_Y(r)$ is non-negative. Therefore, $\theta$ is convex and, since $M_U(0)=M_Y(0)=1$ , we get that $\theta(0)=0$ .
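Lemma 4 can be checked numerically for an exponential specification of the same form as Example 1 below; all parameter values here are our own illustrative choices, not taken from the paper.

```python
import math

# Illustrative check of Lemma 4 with U ~ Exp(kappa), Y ~ Exp(mu).
kappa, mu, c, rho, delta = 2.0, 1.0, 1.5, 1.0, 0.5
M_U = lambda s: kappa / (kappa - s)          # valid for s < kappa
M_Y = lambda s: mu / (mu - s)                # valid for s < mu
alpha = lambda r: (1.0 - M_U(r)) / delta
theta = lambda r: -c * r + rho * (M_Y(-alpha(r)) - 1.0)

print(theta(0.0))                            # theta(0) = 0
# second differences on a grid: all positive if theta is convex there
h = 1e-3
second_diffs = [(theta(r + h) - 2 * theta(r) + theta(r - h)) / h**2
                for r in (0.1, 0.2, 0.3, 0.4, 0.5)]
print(min(second_diffs))
```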

Lemma 5. Let r be such that the measure $\mathbb{Q}^{(r)}$ is well defined, and assume there is some $\varepsilon >0$ such that $M_U(r+\varepsilon)$ and $M_Y({-}\alpha(r) + \varepsilon)$ are finite. Then,

\begin{equation*} \lim_{t \to \infty} \frac{\mathbb{E}^{\mathbb{Q}^{(r)}}[X_t]}{t} = - \theta^{\prime}(r). \end{equation*}

Proof. To show this property, we can use the ideas of the proof of Lemma 1. The main difference is that we apply the generator $\mathcal{A}^{(r)}$ . Again we obtain

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(r)}}[X_t] = u + ct - M_U(r) \mathbb{E}^{\mathbb{Q}^{(r)}}[U] \int_0^t \! \mathbb{E}^{\mathbb{Q}^{(r)}}[\lambda_s] \, \text{d}s.\end{equation*}

The expectation of $\lambda_t$ under $\mathbb{Q}^{(r)}$ satisfies

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(r)}}[\lambda_t] = \frac{\rho}{\delta} M_Y({-}\alpha(r)) \mathbb{E}^{\mathbb{Q}^{(r)}}[Y] (1- \text{e}^{-\delta t}) + \text{e}^{-\delta t} \lambda_0.\end{equation*}

The expectations $\mathbb{E}^{\mathbb{Q}^{(r)}}[U]$ and $\mathbb{E}^{\mathbb{Q}^{(r)}}[Y]$ can easily be obtained from

\begin{equation*} M^{\mathbb{Q}^{(r)}}_U(s) = \frac{M_U(s+r)}{M_U(r)} , \qquad M^{\mathbb{Q}^{(r)}}_Y(s) = \frac{M_Y(s-\alpha(r))}{M_Y({-}\alpha(r))}. \end{equation*}

Consequently,

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(r)}}[U] = \frac{M^{\prime}_U(r)}{M_U(r)} , \qquad \mathbb{E}^{\mathbb{Q}^{(r)}}[Y] = \frac{M^{\prime}_Y({-}\alpha(r))}{M_Y({-}\alpha(r))}.\end{equation*}

Combining these results, we get

\begin{align*} \lim_{t \to \infty} \frac{\mathbb{E}^{\mathbb{Q}^{(r)}}[X_t]}{t} = c-\frac{\rho}{\delta}M^{\prime}_Y({-}\alpha(r))M^{\prime}_U(r) = - \theta^{\prime}(r). \end{align*}

Assumption 2. From now on we assume that there exists a positive solution R to the equation $\theta(R) =0$ , that $\mathbb{Q}^{(R)}$ is well defined, and that, for some $\varepsilon>0$ , both $M_U(R+\varepsilon)$ and $M_Y(\varepsilon-\alpha(R))$ are finite.

This assumption ensures that the measure $\mathbb{Q}^{(R)}$ is well defined, and that we can express the expectation of Y and U in terms of their original moment-generating functions. One example where this is satisfied is the following.

Example 1. Let $\mu$ and $\kappa$ be positive constants. If $Y \sim \text{Exp}(\mu)$ and $U \sim \text{Exp}(\kappa)$ , the net profit condition simplifies to $c > {\rho}/{\delta \kappa \mu}$ . The moment-generating functions are given by $M_U(r) = {\kappa}/({\kappa-r}) $ and $M_Y({-}\alpha(r)) = {\mu}/({\mu+\alpha(r)})$ , where $r<\kappa$ and $-\alpha(r) < \mu$ . If we fix some $r < {\mu \delta \kappa}/({1+\delta\mu})$ , we can determine the functions $ \alpha(r) = - {r}/{\delta (\kappa -r)}$ and

\begin{equation*}\theta(r) = -cr + \rho\bigg(\frac{r}{\mu\delta(\kappa-r)-r}\bigg).\end{equation*}

Solving the equation $\theta(r) = 0$ gives us the solutions $r_1 = 0$ and

\begin{equation*} R\,:\!=\,r_2 = \frac{\mu\delta\kappa c - \rho}{(1+\mu \delta)c},\end{equation*}

which is positive by the net profit condition. Now we want to show that there is some $\varepsilon>0$ such that $R+\varepsilon < {\mu\delta\kappa}/({1+\mu\delta})$ and $\varepsilon-\alpha(R) < \mu$. The first inequality is equivalent to

\begin{equation*}\varepsilon < \frac{\rho}{(1+\mu \delta)c},\end{equation*}

which is a strictly positive upper bound. The second condition can be rewritten as

\begin{equation*} \varepsilon < \frac{\mu\rho \delta + \rho}{\delta\kappa c + \rho \delta},\end{equation*}

which is positive too. Consequently, Assumption 2 is satisfied.
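The closed form for R in this example is quickly verified numerically; the parameter values below are our own illustrative choices, selected so that the net profit condition holds.

```python
# Numerical check of Example 1 with illustrative parameters (not from the paper).
mu, kappa, c, rho, delta = 1.0, 2.0, 1.5, 1.0, 0.5
assert c > rho / (delta * kappa * mu)        # net profit condition

# closed-form adjustment coefficient from the text
R = (mu * delta * kappa * c - rho) / ((1 + mu * delta) * c)

def theta(r):
    """theta(r) for U ~ Exp(kappa), Y ~ Exp(mu), as given in the example."""
    return -c * r + rho * (r / (mu * delta * (kappa - r) - r))

print(R, theta(R))   # R > 0 and theta(R) = 0 (up to rounding)
```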

Lemma 6. For every $u\geq0$ and $\lambda_0>0$, $\mathbb{Q}^{(R)} [ \tau_u < \infty] =1$, where $\tau_u \,:\!=\, \inf\lbrace t \geq 0 \mid X_t < 0 \rbrace$ denotes the time of ruin.

Proof. We already know that $\lim_{t \to \infty} \big({\mathbb{E}^{\mathbb{Q}^{(R)}}[X_t]}/{t}\big) = - \theta^{\prime}(R)$ . If we can show that $\theta^{\prime}(R) >0$ , then ruin occurs almost surely under the new measure. The function $\theta$ is convex and satisfies $\theta(0)=\theta(R)=0$ . Further, we have that

\begin{equation*}\theta^{\prime}(0)= -c + \frac{\rho}{\delta} \mathbb{E}[Y]\mathbb{E}[U], \end{equation*}

which is smaller than 0 by the net profit condition. Therefore, there exists $0<r<R$ such that $\theta(r) <0$ . Since $\theta(R)>\theta(r)$ , it follows by the mean-value theorem that there is an $\tilde r \in (r,R)$ such that

\begin{equation*} \theta^{\prime}(\tilde r) = \frac{\theta(R)-\theta(r)}{R-r} >0.\end{equation*}

By convexity, we know that $\theta^{\prime}$ is a monotone increasing function and $\theta^{\prime}(R)\geq \theta^{\prime}(\tilde r)>0$ .

Similar to the classical model and the Björk–Grandell model considered in [Reference Schmidli14], we have found a new measure under which ruin occurs almost surely. We can use this to get an upper bound for the ruin probability.

Theorem 3. Under our assumptions, $\psi(u,\lambda_0) \leq \text{e}^{-\alpha(R)\lambda_0} \text{e}^{-Ru}$ .

Proof. The ruin probability can be rewritten as

\begin{align*} \psi(u,\lambda_0) & = \mathbb{E}_{(u,\lambda_0)}\big[\textbf{1}_{\lbrace \tau_u < \infty\rbrace} \big] = \mathbb{E}^{\mathbb{Q}^{(R)}}\big[\textbf{1}_{\lbrace \tau_u < \infty\rbrace} \big(M_{\tau_u}^R\big)^{-1}\big] \\ & = \exp\!({-}Ru-\alpha(R) \lambda_0) \mathbb{E}^{\mathbb{Q}^{(R)}}[\!\exp\!(RX_{\tau_u} + \alpha(R) \lambda_{\tau_u})]. \end{align*}

By the definition of $\tau_u$ the value $X_{\tau_u}$ is negative, and since $R>0$ we have that $M_U(R)>1$ . Consequently, $\alpha(R) <0$ . By this, we get that $\exp\!(RX_{\tau_u} + \alpha(R) \lambda_{\tau_u}) \leq 1$ and $\psi(u,\lambda_0) \leq \exp\!({-}Ru-\alpha(R) \lambda_0)$ .
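The bound of Theorem 3 can be sanity-checked by simulating ruin over a finite horizon. The exponential parameters below have the form of Example 1 but with our own illustrative values satisfying the net profit condition; the finite-horizon ruin frequency should stay below the Lundberg-type bound.

```python
import math
import random

# Illustrative parameters: U ~ Exp(kappa), Y ~ Exp(mu), values our own choice.
mu, kappa, c, rho, delta = 1.0, 2.0, 1.5, 1.0, 0.5
u, lambda0, horizon = 3.0, 2.0, 25.0
R = (mu * delta * kappa * c - rho) / ((1 + mu * delta) * c)
alpha_R = -R / (delta * (kappa - R))
bound = math.exp(-alpha_R * lambda0 - R * u)   # Theorem 3 upper bound

def ruined(rng):
    """One path on [0, horizon]: True if the surplus ever drops below 0."""
    lam_jumps, t = [], 0.0
    while True:
        t += rng.expovariate(rho)
        if t > horizon:
            break
        lam_jumps.append(t)
    shocks = [rng.expovariate(mu) for _ in lam_jumps]

    def lam(s):
        v = lambda0 * math.exp(-delta * s)
        for ti, yi in zip(lam_jumps, shocks):
            if ti <= s:
                v += yi * math.exp(-delta * (s - ti))
        return v

    paid = 0.0
    # thinning segment by segment: lambda decays between its own jumps
    for a, b in zip([0.0] + lam_jumps, lam_jumps + [horizon]):
        m, s = lam(a), a
        while True:
            s += rng.expovariate(m)
            if s >= b:
                break
            if rng.random() <= lam(s) / m:      # accepted claim arrival
                paid += rng.expovariate(kappa)
                if u + c * s - paid < 0:
                    return True
    return False

rng = random.Random(7)
n = 500
est = sum(ruined(rng) for _ in range(n)) / n
print(est, bound)   # finite-horizon ruin frequency vs. the bound
```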

4. The renewal equation

We now want to use Theorem 1 to get information about the asymptotic behaviour of the ruin probability $\psi(u,\lambda_0)$ . Because of the dependence on $\lambda_0$ , we have to choose the renewal times $\lbrace S_+(i)\rbrace_{i \in \mathbb{N}}$ such that $\lambda_{S_+(i)}=\lambda_0$ . To exploit the renewal equation, we have to ensure that there are infinitely many renewal times, and that they are almost surely finite. For this, we will use the ideas from [Reference Orsingher and Battaglia11] to get an intensity for the number of upcrossings of the process $\lambda$ through some level l.

Lemma 7. Let $\lambda$ be the Markovian shot-noise process and $l>0$ be arbitrary. The process counting all upcrossings of $\lambda$ through l has intensity

\begin{equation*}\nu^+_l(t) = \rho \int_0^l (1-F_Y(l-z)) \, F_\lambda(\text{d}z,t),\end{equation*}

where $F_\lambda(z,t)= \mathbb{P}_{(u,\lambda_0)}[\lambda_t \leq z]$ is the cumulative distribution function of $\lambda_t$ .

Proof. Consider, for some small $\Delta t >0$ , the probability $\mathbb{P}_{(u,\lambda_0)}[ \lambda_t \leq l, \lambda_{t+\Delta t} > l ]$ . The jumps of $\lambda$ are governed by a Poisson process with rate $\rho$ ; hence,

\begin{align*} \mathbb{P}_{(u,\lambda_0)} & [ \lambda_t \leq l, \lambda_{t+\Delta t} > l ] \\ & = \mathbb{P}_{(u,\lambda_0)} \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_{t+\Delta t} > l \big] + o(\Delta t) \\ & = \mathbb{P}_{(u,\lambda_0)} \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t \text{e}^{-\delta \Delta t} + Y \text{e}^{-\delta (t+\Delta t -T)} > l \big] + o(\Delta t) \\ & = \mathbb{P}_{(u,\lambda_0)} \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t + Y \text{e}^{-\delta (t -T)} > l\text{e}^{\delta \Delta t} \big] + o(\Delta t). \end{align*}

Here, T denotes the jump time occurring between t and $t+\Delta t$ , and Y is the corresponding shock. The random time $T-t$ can be represented as $\Theta \Delta t$ , where $\Theta$ is a random variable which takes values in the interval (0, 1). Consequently, we have $Y\text{e}^{\delta \Theta \Delta t} \in (Y,Y\text{e}^{\delta \Delta t})$ . Using this, we can bound the above probability by

\begin{align*} \mathbb{P}_{(u,\lambda_0)} & \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t + Y \text{e}^{\delta \Delta t} > l\text{e}^{\delta \Delta t} \big] + o(\Delta t) \\ & \geq \mathbb{P}_{(u,\lambda_0)} \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t + Y \text{e}^{\delta \Theta \Delta t} > l\text{e}^{\delta \Delta t} \big] + o(\Delta t) \\ & \geq \mathbb{P}_{(u,\lambda_0)} \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t + Y > l\text{e}^{\delta \Delta t} \big] + o(\Delta t). \end{align*}

Let us focus on the upper bound. The term $N^\lambda_{t+\Delta t} - N^\lambda_t$ is independent of $\lambda_t$ and Y, so we can rewrite

\begin{align*} \mathbb{P}_{(u,\lambda_0)} & \big[ N^{\lambda}_{t +\Delta t} - N^\lambda_t = 1, \lambda_t \leq l, \lambda_t + Y \text{e}^{\delta \Delta t} > l\text{e}^{\delta \Delta t} \big] + o(\Delta t) \\ & = \rho \Delta t \mathbb{P}_{(u,\lambda_0)} \big[ \lambda_t \leq l, \lambda_t + Y \text{e}^{\delta \Delta t} > l\text{e}^{\delta \Delta t} \big] + o(\Delta t) \\ & = \rho \Delta t\, \mathbb{E}_{(u,\lambda_0)} \big[\mathbb{E}_{(u,\lambda_0)} \big[ \textbf{1}_{\lbrace\lambda_t \leq l\rbrace} \textbf{1}_{\lbrace Y> l- \lambda_t \text{e}^{-\delta \Delta t}\rbrace} \mid \lambda_t \big]\big] + o(\Delta t) \\ & = \rho \Delta t \int_0^l \! (1-F_Y(l-z\text{e}^{-\delta \Delta t})) \, F_\lambda(\text{d}z, t) + o(\Delta t). \end{align*}

Now, let us divide by $\Delta t$ and consider the limit of $\Delta t \to 0$. Since $F_Y(l-z\text{e}^{-\delta \Delta t})$ decreases as $\Delta t$ becomes smaller, we get, by the right continuity of cumulative distribution functions, that this tends to $\rho \int_0^l \! (1-F_Y(l-z)) \, F_\lambda(\text{d}z, t)$. Using the same arguments, we can show that the lower bound divided by $\Delta t$ converges to the same value. Hence, the term $({1}/{\Delta t}) \mathbb{P}_{(u,\lambda_0)} [ \lambda_t \leq l, \lambda_{t+\Delta t} > l ]$ converges to the same limit, namely $\nu^+_l(t)$.

Assumption 3. From now on we assume that

\begin{equation*} \int_0^\infty \int_0^{\lambda_0} \big(1-F_Y^{\mathbb{Q}^{(R)}}(\lambda_0-z)\big) F^{\mathbb{Q}^{(R)}}_\lambda(\text{d} z,t) \, \text{d}t = \infty, \end{equation*}

where $F^{\mathbb{Q}^{(R)}}_\lambda(z,t)\,:\!=\, \mathbb{Q}^{(R)}[\lambda_t \leq z]$ and $F_Y^{\mathbb{Q}^{(R)}}(x) = \mathbb{Q}^{(R)}[Y \leq x]$ .

This assumption guarantees that there are infinitely many upcrossings of the process through $\lambda_0$ under the measure $\mathbb{Q}^{(R)}$, and hence that the intensity process is Harris recurrent. The structure of our Markovian shot-noise process means that upcrossings can only happen through shock events, and downcrossings are due to the continuous drift. Consequently, there have to be infinitely many continuous downcrossings and recurrence times $\lbrace S(i)\rbrace_{i \in \mathbb{N}}$ such that $\lambda_{S(i)} = \lambda_0$ .

One example which satisfies Assumption 3 is the following.

Example 2. Consider the same configuration as in Example 1. Under the new measure $\mathbb{Q}^{(R)}$ , the shocks are again exponentially distributed with parameter $\mu+\alpha(R)$ , and the new intensity of $\lambda$ is

\begin{equation*}\tilde{\rho} =\rho M_Y({-}\alpha(R))= \frac{\mu\delta\kappa c+ \mu \delta \rho}{\mu \delta +1}.\end{equation*}

Assume that ${\tilde \rho}/{\delta} = n \in \mathbb{N}$. As in [Reference Orsingher and Battaglia11], we can determine the distribution of $\lambda(t)$ using its characteristic function,

\begin{equation*} K_t(s) = \mathbb{E}^{\mathbb{Q}^{(R)}}[\!\exp\!(is \lambda(t))] = \bigg(\text{e}^{-\delta t} +(1-\text{e}^{-\delta t})\frac{\mu+\alpha(R)}{(\mu+\alpha(R))-is}\bigg)^n.\end{equation*}

This is the characteristic function of the random variable $\eta = \sum_{i=1}^{B_t} Y_i$ , where $B_t \sim \mathit{B}(n,1-\text{e}^{-\delta t})$ . Consequently, $\lambda(t)$ admits a density of the form

\begin{equation*}f(z,t)= \sum_{j=1}^n \binom{n}{j} \text{e}^{-\delta t (n-j)} (1-\text{e}^{-\delta t})^j (\mu+\alpha(R))^j \text{e}^{-(\mu+\alpha(R)) z}\frac{z^{j-1}}{(j-1)!}.\end{equation*}

Using this, the intensity of the upcrossings is given by

\begin{equation*} \nu^+_{\lambda_0}(t) = \rho \sum_{j=0}^n \binom{n}{j}\text{e}^{-\delta t (n-j)} (1-\text{e}^{-\delta t})^j \frac{(\mu+\alpha(R))^j \lambda_0^j}{j!}\text{e}^{-(\mu+\alpha(R))\lambda_0}. \end{equation*}

Since ${(\mu+\alpha(R))^j \lambda_0^j}/{j!}$ has a positive lower bound $\tilde c$ , we get that

\begin{align*} \int_0^\infty \nu^+_{\lambda_0} (t) \, \text{d}t \geq \int_0^\infty \rho \tilde{c}\text{e}^{-(\mu+\alpha(R))\lambda_0}\, \text{d}t = \infty. \end{align*}
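The binomial-mixture representation used in this example can be checked by simulation: with the transient $\lambda_0\text{e}^{-\delta t}$ set aside (i.e. starting the pure shot part from zero), ${\tilde\rho}/{\delta} = n \in \mathbb{N}$, and a generic shock rate $\beta$ standing in for $\mu+\alpha(R)$, the mean of $\lambda(t)$ and its probability of equalling zero must match those of $\sum_{i=1}^{B_t} Y_i$ with $B_t \sim B(n, 1-\text{e}^{-\delta t})$. All numerical values below are illustrative.

```python
import math
import random

# Illustrative check of the mixture law from Example 2 (values our own choice).
n_exp, delta, beta, t_eval = 2, 0.5, 1.0, 2.0
rho_tilde = n_exp * delta              # enforces rho~/delta = n integer

def pure_shot(rng):
    """One sample of sum_{T_i <= t} Y_i * e^(-delta*(t - T_i)) with lambda0 = 0."""
    s, val = 0.0, 0.0
    while True:
        s += rng.expovariate(rho_tilde)
        if s > t_eval:
            return val
        val += rng.expovariate(beta) * math.exp(-delta * (t_eval - s))

rng = random.Random(3)
N = 100_000
samples = [pure_shot(rng) for _ in range(N)]
mc_mean = sum(samples) / N
mc_zero = sum(1 for v in samples if v == 0.0) / N
p = 1 - math.exp(-delta * t_eval)
mix_mean = n_exp * p / beta            # E[B_t] * E[Y]
mix_zero = (1 - p) ** n_exp            # P(B_t = 0)
print(mc_mean, mix_mean, mc_zero, mix_zero)
```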

Using this, we can even show that there are infinitely many recurrence times if ${\tilde \rho}/{\delta}$ is any real number greater than 1. For this, we consider two auxiliary shot-noise processes,

\begin{equation*}\underline{\lambda}_t = \text{e}^{-\underline{\delta} t} \lambda_0 + \sum_{i=1}^{N^\lambda_t} \text{e}^{-\underline{\delta}(t-T^\lambda_i)}Y_i, \qquad \overline{\lambda}_t= \text{e}^{-\overline{\delta} t} \lambda_0 + \sum_{i=1}^{N^\lambda_t} \text{e}^{-\overline{\delta}(t-T^\lambda_i)}Y_i,\end{equation*}

where $\overline{\delta}$ and $\underline{\delta}$ are chosen such that

\begin{equation*} \frac{\tilde \rho}{\overline{\delta}} =N_1 > \frac{\tilde \rho}{\delta} > \frac{\tilde \rho}{\underline{\delta}} =N_2,\end{equation*}

with $N_1, N_2 \in \mathbb{N}$ . By construction, $ \underline{\lambda}_t \leq \lambda_t \leq \overline{\lambda}_t$ and both auxiliary processes cross $\lambda_0$ infinitely often. Since $\lambda$ exceeds $\lambda_0$ whenever $\underline{\lambda}$ does, and falls below $\lambda_0$ whenever $\overline{\lambda}$ does, $\lambda$ crosses $\lambda_0$ infinitely often too.

If Assumption 3 holds, we have that, under the measure $\mathbb{Q}^{(R)}$ , the surplus process tends to $-\infty$ and $\lambda$ returns to $\lambda_0$ infinitely often. Hence, we can define a sequence of renewal times $\lbrace S_+(i) \rbrace_{i \in \mathbb{N}_0}$ via $S_+(0)=0$ and $S_+(i) = \min\lbrace S(i)>S_+(i-1) \mid X_{S(i)} < X_{S_+(i-1)}\rbrace$ which satisfies $\mathbb{Q}^{(R)}[ S_+(i) < \infty] =1$ for all i. We will use these renewal times similarly to the ladder epochs in the classical ruin model.

Define

\begin{align*} B(x) & = \mathbb{P}_{(u,\lambda_0)}[S_+(1) < \infty,\, u-X_{S_+(1)} \leq x ], \\ p(u,x) & = \mathbb{P}_{(u,\lambda_0)}[ \tau_u \leq S_+(1) \mid S_+(1) < \infty,\, X_{S_+(1)}=u-x ].\end{align*}

Then, the ruin probability satisfies

\begin{equation*} \psi(u,\lambda_0) = \int_0^u \psi(u-x,\lambda_0) (1-p(u,x)) \, B(\text{d}x) + \mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1),\, \tau_u < \infty]. \end{equation*}

This may look like a renewal equation but the distribution B is defective. We solve this problem by multiplying both sides by $\text{e}^{Ru}$ , which is equivalent to a measure change from $\mathbb{P}$ to $\mathbb{Q}^{(R)}$ , and obtain

(3) \begin{align} \psi(u,\lambda_0)\text{e}^{Ru} & = \int_0^u \psi(u-x,\lambda_0)\text{e}^{R(u-x)} (1-p(u,x))\text{e}^{Rx} \, B(\text{d}x) \nonumber \\ & \quad + \mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1),\, \tau_u < \infty]\text{e}^{Ru}.\end{align}

Lemma 8. The distribution $\tilde B$ defined by $\tilde B (\text{d}x) = \text{e}^{Rx} B(\text{d}x)$ is non-defective.

Proof. Using the definition of $\tilde B$ , we get

\begin{equation*}\int_\mathbb{R}\tilde B(\text{d}x) = \int_\mathbb{R} \text{e}^{Rx} B(\text{d}x) = \mathbb{E}_{(u,\lambda_0)} \big[ \text{e}^{R(u-X_{S_+(1)})}\textbf{1}_{\lbrace S_+(1) < \infty\rbrace} \big]. \end{equation*}

Now focus on our martingale $M^R$ at time $S_+(1)$ . Since $\lambda_{S_+(1)} = \lambda_0$ by the definition of the recurrence times, we observe that

\begin{align*} M_{S_+(1)}^R = \exp\!(\alpha(R) \lambda_0 + Ru-\alpha(R) \lambda_{S_+(1)} - RX_{S_+(1)}) = \exp\!(R(u-X_{S_+(1)})). \end{align*}

Using this leads to

\begin{equation*}\int_\mathbb{R} \tilde B(\text{d}x) = \mathbb{E}^{\mathbb{Q}^{(R)}}[\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}] = \mathbb{Q}^{(R)}[S_+(1) < \infty ] =1.\end{equation*}

Consequently, $\tilde B$ is non-defective.

Even though we have found a renewal equation, we still have to show that all the functions appearing in (3) satisfy the assumptions of Theorem 1.

Assumption 4. From now on, we assume that there exists an $\varepsilon>0$ such that, for $r\,:\!=\, (1+\varepsilon)R$ , the measure $\mathbb{Q}^{(r)}$ is well defined and $\mathbb{E}_{(u,\lambda_0)}\big[ \text{e}^{-r(X_{S_+(1)}-u)} \textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] < \infty$ .

Since $S_+(1)$ depends on X and $\lambda$ , this assumption may be hard to check. Alternatively, we can use the following lemma, which allows us to focus on the first recurrence time S(1).

Lemma 9. Let $\varepsilon>0$ be such that, for $r\,:\!=\,(1+\varepsilon)R$ , the measure $\mathbb{Q}^{(r)}$ is well defined. Then, $\mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S_+(1)}-u))\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] <\infty$ if and only if $\mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S(1)}-u))\textbf{1}_{\lbrace S(1)<\infty\rbrace}\big] < \infty$ .

Proof. At first, assume that $\mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S_+(1)}-u))\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] <\infty$ holds. By definition, $S_+(1) \geq S(1)$ and $\theta(r)>0$ . Consequently,

\begin{align*} \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S(1)}-u))\textbf{1}_{\lbrace S(1)<\infty\rbrace}\big] & = \mathbb{E}^{\mathbb{Q}^{(r)}}[\!\exp\!(\theta(r)S(1))] \\ & \leq \mathbb{E}^{\mathbb{Q}^{(r)}}[\!\exp\!(\theta(r)S_+(1))] \\ & = \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S_+(1)}-u))\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] <\infty. \end{align*}

Now assume that $\mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S(1)}-u))\textbf{1}_{\lbrace S(1)<\infty\rbrace }\big]=\!:\,C < \infty$ holds. Then,

\begin{align*} \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S_+(1)}-u))\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] & = \mathbb{E}^{\mathbb{Q}^{(R)}}[\!\exp\!({-}\varepsilon R(X_{S_+(1)}-u))] \\ & = \sum_{i=1}^\infty \mathbb{E}^{\mathbb{Q}^{(R)}}\big[\!\exp\!({-}\varepsilon R(X_{S(i)}-u))\textbf{1}_{ \lbrace S_+(1)=S(i)\rbrace}\big]. \end{align*}

The indicator can be rewritten as

\begin{equation*}\textbf{1}_{\lbrace S_+(1)=S(i)\rbrace} = \textbf{1}_{\lbrace S_+(1) >S(i-1)\rbrace}\textbf{1}_{\lbrace X_{S(i)} < u\rbrace} = \prod_{j=1}^{i-1} \textbf{1}_{\lbrace X_{S(j)} \geq u\rbrace} \textbf{1}_{\lbrace X_{S(i)} < u\rbrace}.\end{equation*}

With $S(0)=0$ we define the i.i.d. sequence of random variables $(\xi_j)_{j \geq 1}\,:\!=\,(X_{S(j)}-X_{S(j-1)})_{j \geq 1}$ . Consequently, $X_{S(i-1)}-u= \sum_{j=1}^{i-1} \xi_j$ holds for all i. Using this, we get

\begin{align*} \mathbb{E}^{\mathbb{Q}^{(R)}}&\big[\!\exp\!({-}\varepsilon R(X_{S(i)}-u))\textbf{1}_{\lbrace S_+(1)=S(i)\rbrace}\big] \\ & \leq \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg({-}\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\textbf{1}_{\left\lbrace \sum_{j=1}^{i-1} \xi_j >0\right\rbrace} \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!({-}\varepsilon R \xi_i) \textbf{1}_{\lbrace X_{S(i)} < u\rbrace }\,\big|\, \sum_{j=1}^{i-1} \xi_j\Bigg]\Bigg]. \end{align*}

Let us focus on the conditional expectation. The indicator is bounded by 1, and $\xi_i$ is independent of the conditioning variable. Hence, applying the same change of measure as before,

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!({-}\varepsilon R \xi_i) \textbf{1}_{\lbrace X_{S(i)} < u\rbrace}\,\big|\,\sum_{j=1}^{i-1} \xi_j\Bigg] \leq \mathbb{E}^{\mathbb{Q}^{(R)}}[ \exp\!({-}\varepsilon R \xi_i)] = C< \infty.\end{equation*}

From this, we get that

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(R)}}\big[\!\exp\!({-}\varepsilon R(X_{S(i)}-u))\textbf{1}_{\lbrace S_+(1)=S(i)\rbrace}\big] \leq C\, \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg({-}\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\textbf{1}_{\left\lbrace \sum_{j=1}^{i-1} \xi_j >0\right\rbrace}\Bigg].\end{equation*}

Now we want to bound the remaining expectation. For this, we observe that, for all $\tilde \varepsilon >0$ ,

\begin{equation*}\mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg({-}\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\textbf{1}_{\left\lbrace \sum_{j=1}^{i-1} \xi_j >0\right\rbrace}\Bigg] \leq \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg(\tilde\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\Bigg].\end{equation*}

To choose $\tilde \varepsilon$ in a suitable way, we focus on the properties of $\theta$ . This function is convex and satisfies $\theta(0)=\theta(R) =0$ and $\theta^{\prime}(0)<0$ . Consequently, there exists an $\tilde r \in (0,R)$ such that $\theta(\tilde r) <0$ . Choosing $\tilde \varepsilon = 1-({\tilde r}/{R}) \in (0, 1)$ we have

\begin{align*} \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg({-}\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\textbf{1}_{\left\lbrace \sum_{j=1}^{i-1} \xi_j >0\right\rbrace}\Bigg] & \leq \mathbb{E}^{\mathbb{Q}^{(R)}}\Bigg[\!\exp\!\Bigg(\tilde\varepsilon R\sum_{j=1}^{i-1} \xi_j\Bigg)\Bigg] \\ & = \mathbb{E}^{\mathbb{Q}^{(R)}}[\!\exp\!(\tilde\varepsilon R\xi_1)]^{i-1} \\ & = \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!((\tilde\varepsilon-1) R(X_{S(1)}-u))\textbf{1}_{\lbrace S(1) < \infty\rbrace}\big]^{i-1} \\ & = \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}\tilde r (X_{S(1)}-u))\textbf{1}_{\lbrace S(1) < \infty\rbrace}\big]^{i-1} \\ & = \mathbb{E}^{\mathbb{Q}^{(\tilde r)}}\big[\!\exp\!(\theta(\tilde r)S(1))\textbf{1}_{\lbrace S(1) < \infty\rbrace}\big]^{i-1}. \end{align*}

By construction, $\theta(\tilde r) < 0$ and $S(1)>0$ ; hence, $\mathbb{E}^{\mathbb{Q}^{(\tilde r)}}\big[\!\exp\!(\theta(\tilde r)S(1))\textbf{1}_{\lbrace S(1) < \infty\rbrace}\big] =p <1$ . Finally, we get

\begin{equation*} \mathbb{E}_{(u,\lambda_0)}\big[\!\exp\!({-}r(X_{S_+(1)}-u))\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big] \leq C \sum_{i=1}^\infty p^{i-1} = \frac{C}{1-p} < \infty . \end{equation*}
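The argument above hinges on $\theta$ being convex with zeros at 0 and R and strictly negative in between. The following numerical illustration uses the parameters of Example 3 below and assumes the explicit form $\theta(r)=\rho(M_Y({-}\alpha(r))-1)-cr$ ; this form is an assumption of the sketch, though it is consistent with the values reported in Example 3 ( $\theta(0)=\theta(3/10)=0$ and $\theta(1/3)=1/4$ ).

```python
# Parameters of Example 3: Y, U ~ Exp(1), rho = 1.5, c = 15/4.
rho, c = 1.5, 15.0 / 4.0

def theta(r):
    # M_U(r) = 1/(1-r), alpha(r) = -r/(1-r), M_Y(-alpha(r)) = (1-r)/(1-2r)
    return rho * ((1.0 - r) / (1.0 - 2.0 * r) - 1.0) - c * r

R = 0.3  # adjustment coefficient, so theta(0) = theta(R) = 0
assert abs(theta(0.0)) < 1e-9 and abs(theta(R)) < 1e-9
# convexity forces theta < 0 strictly between the zeros: any such r_tilde works
assert all(theta(R * k / 10.0) < 0.0 for k in range(1, 10))
# and theta > 0 beyond R, e.g. theta(1/3) = 1/4
assert abs(theta(1.0 / 3.0) - 0.25) < 1e-9
```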

Lemma 10. The function $\mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1) , \tau_u< \infty] \text{e}^{Ru}$ is directly Riemann integrable in u.

Proof. Let r be as in Assumption 4. Observe that $\alpha(r)<0$ and $\theta(r)>0$ since $r>R>0$ . First, we show that $\mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1), \tau_u < \infty]\text{e}^{ru}$ is uniformly bounded. Let $t>0$ be arbitrary but fixed. Then,

\begin{align*} \mathbb{P}_{(u,\lambda_0)}[ \tau_u \leq (S_+(1) \wedge t)] \text{e}^{ru} & = \mathbb{E}^{\mathbb{Q}^{(r)}}\big[\textbf{1}_{\lbrace \tau_u \leq (S_+(1) \wedge t)\rbrace} \text{e}^{\theta(r) \tau_u} \text{e}^{rX_{\tau_u}} \text{e}^{\alpha(r) \lambda_{\tau_u}}\big]\text{e}^{-\alpha(r) \lambda_0} \\ & \leq \mathbb{E}^{\mathbb{Q}^{(r)}}\big[\textbf{1}_{\lbrace \tau_u \leq (S_+(1) \wedge t)\rbrace} \text{e}^{\theta(r) \tau_u}\big] \text{e}^{-\alpha(r) \lambda_0} \\ & \leq \mathbb{E}^{\mathbb{Q}^{(r)}}[ \text{e}^{\theta(r) S_+(1)}] \text{e}^{-\alpha(r) \lambda_0} \\ & = \mathbb{E}_{(u,\lambda_0)} \big[ \text{e}^{-rX_{S_+(1)}+ru-\alpha(r) \lambda_{S_+(1)} + \alpha(r) \lambda_0 }\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\big]\text{e}^{-\alpha(r) \lambda_0} \\ & = \mathbb{E}_{(u,\lambda_0)}\big[\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\text{e}^{-r(X_{S_+(1)} -u)}\big] \text{e}^{-\alpha(r) \lambda_0} < \infty. \end{align*}

The upper bound is independent of t, so by letting t tend to infinity we get

\begin{equation*}\mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1), \tau_u < \infty]\text{e}^{ru} \leq \mathbb{E}_{(u,\lambda_0)}\big[\textbf{1}_{\lbrace S_+(1) < \infty\rbrace}\text{e}^{-r(X_{S_+(1)} -u)}\big] \text{e}^{-\alpha(r) \lambda_0}.\end{equation*}

This bound is even independent of u. To see this, consider the process $R_t = ct- \sum_{i=1}^{N_t} U_i$ and the random time $T_+(1) \,:\!=\, \min\lbrace S(i) \mid R_{S(i)} < 0 \rbrace$ . Neither object depends on u, but under $\mathbb{P}_{(u,\lambda_0)}$ we have, almost surely, $R_t= X_t-u$ and $T_+(1)= S_+(1)$ . From this we see that $X_{S_+(1)}-u = R_{T_+(1)}$ does not depend on u.

Using the derived boundedness we get that there is some $K>0$ such that $\mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1), \tau_u < \infty] \text{e}^{Ru} \leq K \text{e}^{-(r-R)u}$ , which is a directly Riemann integrable upper bound. Consequently, $\mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1), \tau_u < \infty] \text{e}^{Ru}$ is directly Riemann integrable too.

Let us now focus on the properties of p(u, x).

Lemma 11. The function p(u,x) is continuous in u for $u>0$ .

Proof. To prove continuity, we show that $\lim_{\varepsilon \to 0} p(u+\varepsilon,x) = \lim_{\varepsilon \to 0} p(u-\varepsilon,x) = p(u,x)$ . We start with the first limit. To do so, we consider a path of our surplus process X with initial capital u and the same path of the process $X^\varepsilon$ with initial capital $u+\varepsilon$ . The premium rate c, the claim sizes $U_i$ , and the counting process N do not depend on the initial capital; hence, $X^\varepsilon_t = X_t + \varepsilon$ . By the same argument as in the proof of Lemma 10, $S_+(1)$ and the conditioning event in the definition of p do not depend on u.

To be precise, let $\omega \in \Omega$ be an arbitrary outcome and compare the fixed paths of our processes. If $X(\omega)$ is ruined before $S_+(1)(\omega)$ , there is some $\tilde \varepsilon>0$ such that, for all $\varepsilon < \tilde \varepsilon$ , the path $X^\varepsilon(\omega)$ is ruined at the same time. If $X(\omega)$ stays greater than or equal to 0, then $X^\varepsilon$ stays positive for all $\varepsilon>0$ . Consequently, $\lim_{\varepsilon \to 0}\textbf{1}_{\lbrace \tau_{u+\varepsilon}< S_+(1)\rbrace}(\omega) = \textbf{1}_{\lbrace \tau_{u}< S_+(1)\rbrace}(\omega)$ and, by dominated convergence, $p(u+\varepsilon,x) \to p(u,x)$ .

If we can exclude the case that X exactly hits the value 0, then the same arguments hold for $X^{-\varepsilon}_t\,:\!=\, X_t-\varepsilon$ .

The infimum of the surplus process can only be attained at a jump time of our counting process N. Let T be an arbitrary claim time; then, $\mathbb{P}_{(u,\lambda_0)} [ X_T =0 ] = \mathbb{P}_{(u,\lambda_0)} [ X_{T-}-U_{N_T} =0 ] = \mathbb{E}_{(u,\lambda_0)}\big[\mathbb{P}_{(u,\lambda_0)} [ X_{T-}-U_{N_T} =0\mid \mathcal{F}_{T-} ]\big]$ . The random variable $U_{N_T}$ is independent of $\mathcal{F}_{T-}$ and its distribution is continuous; hence, the probability that $U_{N_T}$ hits exactly the value $X_{T-}$ is 0. Consequently, $\mathbb{P}_{(u,\lambda_0)} [ X_T =0 ] = 0$ . Since there are only countably many jump times, the event that the surplus process hits 0 at some jump time has probability 0 as well. Hence, $p(u-\varepsilon,x) \to p(u,x)$ . Combining these results, we get that p(u, x) is continuous in u.

Lemma 12. Under our assumptions, $\int_0^u p(u,x) \text{e}^{Rx} \, B(\text{d}x) $ is directly Riemann integrable.

Proof. Again, let r be as in Assumption 4. Then,

\begin{align*} \int_0^u p(u,x) \text{e}^{Rx} \, B(\text{d}x) & \leq \text{e}^{Ru} \int_0^u p(u,x) \, B(\text{d}x) = \text{e}^{Ru} \mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1)< \infty] \\ & \leq \text{e}^{Ru} \mathbb{P}_{(u,\lambda_0)}[\tau_u \leq S_+(1), \tau_u< \infty] \leq K\text{e}^{-(r-R)u}. \end{align*}

As before (see Lemma 10), we have a directly Riemann integrable upper bound, and therefore $\int_0^u p(u,x) \text{e}^{Rx} \, B(\text{d}x)$ is directly Riemann integrable.

The continuity of the distribution of U implies that B is not arithmetic. Consequently, all the conditions of Theorem 1 are satisfied, and we can apply it to the renewal equation satisfied by $\psi(u,\lambda_0)\text{e}^{Ru}$ to obtain our main result.

Theorem 4. Under our assumptions, $\lim_{u\to \infty}\psi(u,\lambda_0)\text{e}^{Ru}$ exists and is finite.
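Theorem 4 rests on the key renewal theorem (Theorem 1). As a toy illustration of how a proper, non-arithmetic renewal equation forces such a limit, the following sketch solves a discretized renewal equation with a hypothetical kernel and forcing term; it does not use the actual $\tilde B$ or p of the model.

```python
import math

# Toy renewal equation Z(u) = z(u) + int_0^u Z(u - x) f(x) dx with the
# hypothetical kernel f(x) = exp(-x) (a proper Exp(1) distribution, mean 1)
# and forcing term z(u) = exp(-u). The exact solution is Z(u) = 1, which
# equals the key-renewal-theorem limit (integral of z) / (mean of f) = 1.
h, steps = 0.01, 800                     # grid spacing; u runs up to 8
z = [math.exp(-i * h) for i in range(steps + 1)]
f_mid = [math.exp(-(j - 0.5) * h) for j in range(steps + 1)]  # kernel at midpoints

Z = [0.0] * (steps + 1)
Z[0] = z[0]
for i in range(1, steps + 1):
    # midpoint-rule discretization of the convolution integral
    Z[i] = z[i] + h * sum(f_mid[j] * Z[i - j] for j in range(1, i + 1))

print(Z[-1])  # close to the limit 1
```

In the setting of Theorem 4, $\psi(u,\lambda_0)\text{e}^{Ru}$ plays the role of Z; the theorem only asserts existence and finiteness of the limit, and the sketch merely illustrates the mechanism.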

Finally, we consider an example where all our assumptions are satisfied.

Example 3. Let Y and U be exponentially distributed with parameter 1, $\delta =1$ , $\rho = 1.5$ , $\lambda_0 =1$ , and $c=\frac{15}{4}$ . The net profit condition is satisfied since

\begin{equation*}c=\frac{15}{4}> \frac{3}{2} = \frac{\rho}{\delta}\mathbb{E}[Y] \mathbb{E}[U].\end{equation*}

Further, the moment-generating function of U is given by $ M_U(r) = {1}/({1-r})$ and $\alpha(r) = 1-M_U(r) = -{r}/({1-r})$ . Consequently, $M_Y({-}\alpha(r)) = ({1-r})/({1-2r})$ is well defined for all $r< \frac{1}{2}$ and the adjustment coefficient is given by $R= \frac{3}{10}$ . The measure $\mathbb{Q}^{(R)}$ is well defined and the new intensity is $\tilde{\rho}^{(R)} = \frac{21}{8}$ . Since this is greater than 1, we know from Example 2 that there are infinitely many recurrence times S(i) under $\mathbb{Q}^{(R)}$ .
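The constants of this example can be verified in exact rational arithmetic. The transformation rule $\tilde\rho^{(r)} = \rho M_Y({-}\alpha(r))$ used below is inferred from the two values reported in the example rather than stated explicitly here, so it should be read as an assumption of the sketch.

```python
from fractions import Fraction

rho, delta, c = Fraction(3, 2), Fraction(1), Fraction(15, 4)
EY = EU = Fraction(1)  # Y, U ~ Exp(1)

# net profit condition: c > (rho/delta) E[Y] E[U]
assert c > rho / delta * EY * EU

def M_Y_minus_alpha(r):
    # M_U(r) = 1/(1-r), alpha(r) = 1 - M_U(r) = -r/(1-r),
    # M_Y(-alpha(r)) = 1/(1 - r/(1-r)) = (1-r)/(1-2r), valid for r < 1/2
    return (1 - r) / (1 - 2 * r)

# assumed transformation rule: rho_tilde = rho * M_Y(-alpha(r))
assert rho * M_Y_minus_alpha(Fraction(3, 10)) == Fraction(21, 8)  # r = R
assert rho * M_Y_minus_alpha(Fraction(1, 3)) == Fraction(3)       # r = 1/3
```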

Choosing $r=\frac{1}{3}>R$ , we see that $\mathbb{Q}^{(r)}$ is well defined and $\tilde \rho^{(r)} = 3 \in \mathbb{N}$ . Following the results of [11], we know that, under the measure $\mathbb{Q}^{(r)}$ , the recurrence times S(i) have intensity

\begin{equation*}\nu(t) = \frac{1}{2\text{e}}\big(1+3\text{e}^{-t}-3\text{e}^{-2t}-\text{e}^{-3t}\big) \leq \frac{2(\sqrt{2}-1)}{\text{e}}.\end{equation*}
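The stated bound can be checked by maximizing $g(x)=1+3x-3x^2-x^3$ for $x=\text{e}^{-t}\in(0,1]$ : the interior critical point is $x=\sqrt{2}-1$ , where $g(x)=4(\sqrt{2}-1)$ , which yields exactly $2(\sqrt{2}-1)/\text{e}$ after dividing by $2\text{e}$ . Numerically:

```python
import math

# g(x) = 1 + 3x - 3x^2 - x^3 with x = exp(-t); nu(t) = g(exp(-t)) / (2e)
g = lambda x: 1.0 + 3.0 * x - 3.0 * x ** 2 - x ** 3
bound = 2.0 * (math.sqrt(2) - 1.0) / math.e

# the interior critical point solves g'(x) = 3 - 6x - 3x^2 = 0, i.e.
# x = sqrt(2) - 1, where g equals 4(sqrt(2) - 1)
x_star = math.sqrt(2) - 1.0
assert abs(g(x_star) / (2.0 * math.e) - bound) < 1e-12

# a grid search over x in (0, 1] confirms this is indeed the maximum
grid_max = max(g(k / 10 ** 5) / (2.0 * math.e) for k in range(1, 10 ** 5 + 1))
assert grid_max <= bound + 1e-9
```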

Therefore,

\begin{multline*} \mathbb{E}_{(u,\lambda_0)} \big[ \exp\!( r(X_{S(1)} -u) )\textbf{1}_{\lbrace S(1) < \infty\rbrace}\big] \\ = \mathbb{E}^{\mathbb{Q}^{(r)}}[\!\exp\left(\theta(r)S(1)\right)] = \int_0^\infty \exp\!(0.25 s) \nu(s) \exp\bigg({-}\int_0^s \nu(u) \, \text{d}u\bigg)\, \text{d}s.\end{multline*}

For $t\geq 1$ we have that $\nu(t) > 0.26$ , which gives us the existence of some constant $c_0$ such that

\begin{align*} \int_0^\infty \exp\!(0.25 s) \nu(s) \exp\bigg({-}\int_0^s \nu(u) \, \text{d}u\bigg)\, \text{d}s < c_0 \int_0^\infty \text{e}^{0.25s} \text{e}^{-0.26 (s-1)} \,\text{d}s < \infty. \end{align*}

By this, all the assumptions made are satisfied. Hence, there exists some constant C such that $\lim_{u \to \infty} \psi(u,\lambda_0)\text{e}^{0.3 u} =C$ .

Funding information

This research was funded in whole, or in part, by the Austrian Science Fund (FWF) P 33317. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

Competing interests

The authors declare that no competing interests arose during the preparation or publication of this article.

References

[1] Albrecher, H. and Asmussen, S. (2006). Ruin probabilities and aggregate claims distributions for shot noise Cox processes. Scand. Actuarial J. 2006, 86–110.
[2] Asmussen, S. and Albrecher, H. (2010). Ruin Probabilities, 2nd edn (Adv. Ser. Statist. Sci. Appl. Prob. 14). World Scientific, Hackensack, NJ.
[3] Brémaud, P. (1981). Point Processes and Queues. Springer, New York.
[4] Dassios, A. and Jang, J.-W. (2003). Pricing of catastrophe reinsurance and derivatives using the Cox process with shot noise intensity. Finance Stoch. 7, 73–95.
[5] Dassios, A. and Jang, J.-W. (2005). Kalman–Bucy filtering for linear systems driven by the Cox process with shot noise intensity and its application to the pricing of reinsurance contracts. J. Appl. Prob. 42, 93–107.
[6] Dassios, A., Jang, J. and Zhao, H. (2015). A risk model with renewal shot-noise Cox process. Insurance Math. Econom. 65, 55–65.
[7] Davis, M. H. A. (1993). Markov Models and Optimization (Monographs Statist. Appl. Prob. 49). Chapman & Hall, London.
[8] Grandell, J. (1991). Aspects of Risk Theory. Springer, New York.
[9] Grandell, J. and Schmidli, H. (2011). Ruin probabilities in a diffusion environment. J. Appl. Prob. 48, 39–50.
[10] Macci, C. and Torrisi, G. L. (2011). Risk processes with shot noise Cox claim number process and reserve dependent premium rate. Insurance Math. Econom. 48, 134–145.
[11] Orsingher, E. and Battaglia, F. (1982). Probability distributions and level crossings of shot noise models. Stochastics 8, 45–61.
[12] Palmowski, Z. and Rolski, T. (2002). A technique for exponential change of measure for Markov processes. Bernoulli 8, 767–785.
[13] Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999). Stochastic Processes for Insurance and Finance. John Wiley, Chichester.
[14] Schmidli, H. (1997). An extension to the renewal theorem and an application to risk theory. Ann. Appl. Prob. 7, 121–133.