We consider two classes of irreducible Markovian arrival processes specified by the matrices C and D: the Markov-modulated Poisson process (MMPP) and the Markov-switched Poisson process (MSPP). The former exhibits a diagonal matrix D while the latter exhibits a diagonal matrix C. For these two classes we consider the following four statements: (I) the counting process is overdispersed; (II) the hazard rate of the event-stationary interarrival time is nonincreasing; (III) the squared coefficient of variation of the event-stationary process is greater than or equal to one; (IV) there is a stochastic order showing that the time-stationary interarrival time dominates the event-stationary interarrival time. For general MSPPs and order two MMPPs, we show that (I)–(IV) hold. Then for general MMPPs, it is easy to establish (I), while (II) is shown to be false by a counter-example. For general simple point processes, (III) follows from (IV). For MMPPs, we conjecture that (IV) and thus (III) hold. We also carry out some numerical experiments that fail to disprove this conjecture. Importantly, modelling folklore has often treated MMPPs as “bursty”, and implicitly assumed that (III) holds. However, to the best of our knowledge, proving this is still an open problem.
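Statement (III) is easy to probe numerically. The sketch below simulates a two-state MMPP (arrival rates on the diagonal of D, switching rates from C) and estimates the squared coefficient of variation of its interarrival times; the parameter values are invented for the illustration, and a simulation for one parameter choice is evidence, not a proof.

```python
import random
import statistics

# Illustrative two-state MMPP with invented parameters: slow modulation
# between a quiet state and a busy state produces a "bursty" stream.
lam = [0.2, 5.0]   # arrival rate in each modulating state (diagonal of D)
q = [0.1, 0.1]     # switching rate out of each state (off-diagonal of C)

def simulate_interarrivals(n_events, seed=1):
    """Sample interarrival times of the MMPP by racing two exponential
    clocks: next arrival vs. next state switch (both memoryless)."""
    rng = random.Random(seed)
    state, t, t_last, gaps = 0, 0.0, 0.0, []
    while len(gaps) < n_events:
        t_switch = rng.expovariate(q[state])
        t_arrival = rng.expovariate(lam[state])
        if t_arrival < t_switch:
            t += t_arrival
            gaps.append(t - t_last)
            t_last = t
        else:
            t += t_switch
            state = 1 - state
    return gaps

gaps = simulate_interarrivals(100_000)
mean = statistics.fmean(gaps)
scv = statistics.variance(gaps) / mean**2  # squared coefficient of variation
print(round(scv, 2))
```

With these slowly switching, very different rates the empirical SCV comes out well above one, consistent with (III).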
An iterated perturbed random walk is a sequence of point processes defined by the birth times of individuals in subsequent generations of a general branching process, provided that the birth times of the first-generation individuals are given by a perturbed random walk. We prove counterparts of the classical renewal-theoretic results (the elementary renewal theorem, Blackwell’s theorem, and the key renewal theorem) for the number of jth-generation individuals with birth times $\leq t$, when $j,t\to\infty$ and $j(t)={\textrm{o}}\big(t^{2/3}\big)$. According to our terminology, such generations form a subset of the set of intermediate generations.
We present a new and straightforward algorithm that simulates exact sample paths of a generalized stress-release process. The exact joint law of the interarrival times is computed and used to derive this algorithm. Furthermore, the martingale generator of the process is derived; it yields theoretical moments which generalize some results of [3] and are used to validate our simulation algorithm.
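For intuition, the same inversion idea can be carried out by hand for a classical (non-generalized) stress-release process with exponential hazard rate, where the cumulative hazard between events has a closed form; this is an illustrative sketch, not the paper's algorithm, and all parameter values are invented.

```python
import math
import random

def simulate(T, x0=0.0, rho=1.0, alpha=0.0, beta=0.5, seed=0):
    """Exact sample path of a classical stress-release process on [0, T]:
    stress x loads linearly at rate rho, the conditional intensity is
    exp(alpha + beta * x), and each event relieves a random amount of stress.
    Between events the cumulative hazard is
        Lambda(u) = exp(alpha + beta*x) * (exp(beta*rho*u) - 1) / (beta*rho),
    which is inverted against a unit exponential (requires beta, rho > 0)."""
    rng = random.Random(seed)
    t, x, events = 0.0, x0, []
    while True:
        e = rng.expovariate(1.0)
        u = math.log1p(beta * rho * e * math.exp(-(alpha + beta * x))) / (beta * rho)
        if t + u > T:
            return events
        t += u
        x += rho * u - rng.expovariate(1.0)  # load up to the event, then a random drop
        events.append(t)

events = simulate(T=50.0)
print(len(events))
```

Because the interarrival time is drawn by exact inversion of the cumulative hazard, no thinning or discretization error is involved.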
We consider the near-critical Erdős–Rényi random graph G(n, p) and provide a new probabilistic proof of an upper bound on the probability that the largest connected component $\mathcal{C}_{\max}$ of the graph exceeds $An^{2/3}$ in size, when p is of the form $p=p(n)=1/n+\lambda/n^{4/3}$ and A is large. Our result allows A and $\lambda$ to depend on n. While this result is already known, our proof relies only on conceptual and adaptable tools such as ballot theorems, whereas the existing proof relies on a combinatorial formula specific to Erdős–Rényi graphs, together with analytic estimates.
We consider a continuous Gaussian random field living on a compact set $T\subset \mathbb{R}^{d}$. We are interested in designing an asymptotically efficient estimator of the probability that the integral of the exponential of the Gaussian process over T exceeds a large threshold u. We propose an Asmussen–Kroese conditional Monte Carlo type estimator and discuss its asymptotic properties according to the assumptions on the first and second moments of the Gaussian random field. We also provide a simulation study to illustrate its effectiveness and compare its performance with the importance sampling type estimator of Liu and Xu (2014a).
Log-concavity of a joint survival function is proposed as a model for bivariate increasing failure rate (BIFR) distributions. Its connections with or distinctness from other notions of BIFR are discussed. A necessary and sufficient condition for a bivariate survival function to be log-concave (BIFR-LCC) is given that elucidates the impact of dependence between lifetimes on ageing. Illustrative examples are provided to explain BIFR-LCC for both positive and negative dependence.
Asymptotics of deviation probabilities of the sum $S_n=X_1+\dots+X_n$ of independent and identically distributed real-valued random variables have been extensively investigated, in particular when $X_1$ is not exponentially integrable. For instance, Nagaev (1969a, 1969b) formulated exact asymptotic results for $\mathbb{P}(S_n>x_n)$ with $x_n\to \infty$ when $X_1$ has a semiexponential distribution. In the same setting, Brosset et al. (2020) derived deviation results at logarithmic scale with shorter proofs relying on classical tools of large-deviation theory and making the rate function at the transition explicit. In this paper we exhibit the same asymptotic behavior for triangular arrays of semiexponentially distributed random variables.
We study an open discrete-time queueing network. We assume data is generated at nodes of the network as a discrete-time Bernoulli process. All nodes in the network maintain a queue and relay data, which is to be finally collected by a designated sink. We prove that the resulting multidimensional Markov chain representing the queue size of nodes has two behavior regimes depending on the value of the rate of data generation. In particular, we show that there is a nontrivial critical value of the data rate below which the chain is ergodic and converges to a stationary distribution and above which it is non-ergodic, i.e., the queues at the nodes grow in an unbounded manner. We show that the rate of convergence to stationarity is geometric in the subcritical regime.
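The dichotomy can be seen in a toy relay line (a deliberately simplified stand-in for the network of the paper, with invented parameters): each node generates a packet per slot with probability p and forwards one queued packet per slot toward the sink.

```python
import random

def avg_total_queue(N=5, p=0.1, slots=10_000, seed=7):
    """Time-averaged total queue length in a line of N relay nodes:
    per slot, every nonempty node forwards one packet downstream
    (the last node delivers to the sink), then Bernoulli(p) arrivals occur."""
    rng = random.Random(seed)
    q = [0] * N
    total = 0
    for _ in range(slots):
        send = [1 if qi > 0 else 0 for qi in q]
        for i in range(N):
            q[i] -= send[i]
            if i + 1 < N:
                q[i + 1] += send[i]   # relayed packet; the last node's exits to the sink
        for i in range(N):
            q[i] += rng.random() < p  # exogenous Bernoulli arrival
        total += sum(q)
    return total / slots

print(avg_total_queue(p=0.1), avg_total_queue(p=0.5))
```

In this toy line the last node carries the aggregate load Np, so data rates below 1/N keep the queues bounded, while rates above it make them grow without bound (here load 0.5 versus 2.5).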
Latouche and Nguyen (2015b) constructed a sequence of stochastic fluid processes and showed that it converges weakly to a Markov-modulated Brownian motion (MMBM). Here, we construct a different sequence of stochastic fluid processes and show that it converges strongly to an MMBM. To the best of our knowledge, this is the first result on strong convergence to a Markov-modulated Brownian motion. Besides implying weak convergence, such a strong approximation constitutes a powerful tool for developing deep results for sophisticated models. Additionally, we prove that the rate of this almost sure convergence is $o(n^{-1/2} \log n)$. When reduced to the special case of standard Brownian motion, our convergence rate is an improvement over that obtained by a different approximation in Gorostiza and Griego (1980), which is $o(n^{-1/2}(\log n)^{5/2})$.
For a non-negative separable random field Z(t), $t\in \mathbb{R}^d$, satisfying some mild assumptions, we show that $H_Z^\delta =\lim_{T \to \infty} ({1}/{T^d}) \mathbb{E}\{{\sup_{ t\in [0,T]^d \cap \delta \mathbb{Z}^d } Z(t) }\} <\infty$ for $\delta \ge 0$, where $0 \mathbb{Z}^d := \mathbb{R}^d$, and prove that $H_Z^0$ can be approximated by $H_Z^\delta$ as $\delta$ tends to 0. These results extend the classical findings for Pickands constants $H_{Z}^\delta$, defined for $Z(t)= \exp( \sqrt{2} B_\alpha (t)- \lvert {t} \rvert^{2\alpha })$, $t\in \mathbb{R}$, with $B_\alpha$ a standard fractional Brownian motion with Hurst parameter $\alpha \in (0,1]$. The continuity of $H_{Z}^\delta$ at $\delta=0$ is additionally shown for two particular extensions of Pickands constants.
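For the one case that is easy to simulate exactly on a grid, $\alpha = 1/2$ (so $B_\alpha$ is a standard Brownian motion and the classical constant is $H_1 = 1$), a Monte Carlo sketch of the discrete quantity $\mathbb{E}\{\sup Z\}/T$ looks as follows; the estimate carries finite-horizon bias and the supremum is heavy-tailed, so treat it as qualitative only.

```python
import math
import random
import statistics

def estimate_H(delta=0.1, T=10.0, n_paths=2000, seed=11):
    """Monte Carlo estimate of E[ sup_{t in [0,T] cap delta*Z} Z(t) ] / T
    for Z(t) = exp(sqrt(2) * B(t) - t), B a standard Brownian motion."""
    rng = random.Random(seed)
    n = int(T / delta)
    sups = []
    for _ in range(n_paths):
        b, sup = 0.0, 1.0  # Z(0) = 1
        for k in range(1, n + 1):
            b += rng.gauss(0.0, math.sqrt(delta))          # Brownian increment
            sup = max(sup, math.exp(math.sqrt(2.0) * b - k * delta))
        sups.append(sup)
    return statistics.fmean(sups) / T

print(estimate_H())  # compare qualitatively with the known value H_1 = 1
```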
Network dynamics with point-process-based interactions are of paramount modeling interest. Unfortunately, most relevant dynamics involve complex graphs of interactions for which an exact computational treatment is impossible. To circumvent this difficulty, the replica-mean-field approach focuses on randomly interacting replicas of the networks of interest. In the limit of an infinite number of replicas, these networks become analytically tractable under the so-called ‘Poisson hypothesis’. However, in most applications this hypothesis is only conjectured. In this paper we establish the Poisson hypothesis for a general class of discrete-time, point-process-based dynamics that we introduce here and propose to call fragmentation-interaction-aggregation processes. These processes feature a network of nodes, each endowed with a state governing their random activation. Each activation triggers the fragmentation of the activated node's state and the transmission of interaction signals to downstream nodes. In turn, the signals received by a node are aggregated into its state. Our main contribution is a proof of the Poisson hypothesis for the replica-mean-field version of any network in this class. The proof is obtained by establishing the propagation of asymptotic independence for state variables in the limit of an infinite number of replicas. Discrete-time Galves–Löcherbach neural networks are used as a basic instance and illustration of our analysis.
We consider fragmentation processes with values in the space of marked partitions of $\mathbb{N}$, i.e. partitions where each block is decorated with a nonnegative real number. Assuming that the marks on distinct blocks evolve as independent positive self-similar Markov processes and determine the speed at which their blocks fragment, we get a natural generalization of the self-similar fragmentations of Bertoin (Ann. Inst. H. Poincaré Prob. Statist.38, 2002). Our main result is the characterization of these generalized fragmentation processes: a Lévy–Khinchin representation is obtained, using techniques from positive self-similar Markov processes and from classical fragmentation processes. We then give sufficient conditions for their absorption in finite time to a frozen state, and for the genealogical tree of the process to have finite total length.
Drawdown/regret times feature prominently in optimal stopping problems, in statistics (CUSUM procedure), and in mathematical finance (Russian options). Recently it was discovered that a first-passage theory with more general drawdown times, which generalize classic ruin times, may be explicitly developed for spectrally negative Lévy processes [9, 20]. In this paper we further examine general drawdown-related quantities for (upward skip-free) time-homogeneous Markov processes, and then for their (general) tax processes, by exploiting the pathwise connection between general drawdown and the tax process.
Consider the strong subordination of a multivariate Lévy process with a multivariate subordinator. If the subordinate is a stack of independent Lévy processes and the components of the subordinator are indistinguishable within each stack, then strong subordination produces a Lévy process; otherwise it may not. Weak subordination was introduced to extend strong subordination, always producing a Lévy process even when strong subordination does not. Here we prove that strong and weak subordination are equal in law under the aforementioned condition. In addition, we prove that if strong subordination is a Lévy process then it is necessarily equal in law to weak subordination in two cases: firstly when the subordinator is deterministic, and secondly when it is pure-jump with finite activity.
Consider a Lamperti–Kiu Markov additive process $(J, \xi)$ on $\{+, -\}\times\mathbb R\cup \{-\infty\}$, where J is the modulating Markov chain component. First we study the finiteness of the exponential functional and then consider its moments and tail asymptotics under Cramér’s condition. In the strong subexponential case we determine the subexponential tails of the exponential functional under some further assumptions.
Regular variation provides a convenient theoretical framework for studying large events. In the multivariate setting, the spectral measure characterizes the dependence structure of the extremes. This measure gathers information on the localization of extreme events and often has sparse support since severe events do not simultaneously occur in all directions. However, it is defined through weak convergence, which does not provide a natural way to capture this sparsity structure. In this paper, we introduce the notion of sparse regular variation, which makes it possible to better learn the dependence structure of extreme events. This concept is based on the Euclidean projection onto the simplex, for which efficient algorithms are known. We prove that under mild assumptions sparse regular variation and regular variation are equivalent notions, and we establish several results for sparsely regularly varying random vectors.
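The Euclidean projection onto the probability simplex that underlies this construction has a well-known O(n log n) sort-based algorithm; the sketch below is that standard algorithm, not code from the paper.

```python
def project_simplex(v):
    """Euclidean projection of a real vector v onto the simplex
    {x : x_i >= 0, sum_i x_i = 1}: find the threshold theta such that
    x_i = max(v_i - theta, 0) sums to one, via a single sort."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:      # index i is still in the support
            theta = t
    return [max(x - theta, 0.0) for x in v]

print(project_simplex([0.9, 0.8, -0.5]))  # components are >= 0 and sum to 1
```

Note how the projection zeroes out small coordinates, which is precisely the sparsity-revealing behavior exploited by sparse regular variation.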
In the collector’s problem with group drawings, s out of n different types of coupon are sampled with replacement. In the uniform case, each s-subset of the types has the same probability of being sampled. For this case, we derive a Poisson limit theorem for the number of types that are sampled at most $c-1$ times, where $c \ge 1$ is fixed. In a specified approximate nonuniform setting, we prove a Poisson limit theorem for the special case $c=1$. As corollaries, we obtain limit distributions for the waiting time for c complete series of types in the uniform case and a single complete series in the approximate nonuniform case.
We study shot noise processes with cluster arrivals, in which entities in each cluster may experience random delays (possibly correlated), and noises within each cluster may be correlated. We prove functional limit theorems for the process in the large-intensity asymptotic regime, where the arrival rate gets large while the shot shape function, cluster sizes, delays, and noises are unscaled. In the functional central limit theorem, the limit process is a continuous Gaussian process (assuming the arrival process satisfies a functional central limit theorem with a Brownian motion limit). We discuss the impact of the dependence among the random delays and among the noises within each cluster using several examples of dependent structures. We also study infinite-server queues with cluster/batch arrivals where customers in each batch may experience random delays before receiving service, with similar dependence structures.
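A minimal construction sketch (with invented distributions for the cluster sizes, the delays, and the shot shape) shows where the within-cluster dependence enters: delays in a cluster share a common random component.

```python
import math
import random

def shot_noise(t_grid, rate=2.0, horizon=20.0, seed=9):
    """Shot noise with cluster arrivals: clusters arrive as a Poisson
    process on [0, horizon]; each cluster triggers a random number of
    shots whose delays share a common component (hence are positively
    correlated within the cluster). Shot shape H(t) = exp(-t)."""
    rng = random.Random(seed)
    shots = []  # activation times of individual shots
    t = 0.0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            break
        size = 1 + rng.randrange(3)        # cluster size in {1, 2, 3}
        common = rng.expovariate(1.0)      # delay component shared by the cluster
        for _ in range(size):
            shots.append(t + common + rng.expovariate(1.0))  # correlated delays
    return [sum(math.exp(-(u - s)) for s in shots if s <= u) for u in t_grid]

print(shot_noise([5.0, 10.0, 15.0]))
```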
In this paper, we study some properties of the generalized Fokker–Planck equation induced by the time-changed fractional Ornstein–Uhlenbeck process. First of all, we exploit some sufficient conditions to show that a mild solution of such equation is actually a classical solution. Then, we discuss an isolation result for mild solutions. Finally, we prove the weak maximum principle for strong solutions of the aforementioned equation and then a uniqueness result.
In this paper we consider the one-dimensional, biased, randomly trapped random walk with infinite-variance trapping times. We prove sufficient conditions for the suitably scaled walk to converge to a transformation of a stable Lévy process. As our main motivation, we apply subsequential versions of our results to biased walks on subcritical Galton–Watson trees conditioned to survive. This confirms the correct order of the fluctuations of the walk around its speed for values of the bias that yield a non-Gaussian regime.