1. Understanding whether a species still persists, or the timing of its extinction, is challenging; however, such knowledge is fundamental for effective species management.
2. For the vast majority of species, our understanding of their existence is based solely on sighting data, which can range from museum specimens and clear photographs, through vocalisations, to markings and oral accounts.
3. Here we review the methods that have been developed to infer the extinction of species from a sighting record, providing an understanding of their assumptions and applications. We have also produced an R Shiny application that can be used to implement some of the methods presented in the article.
4. While there are a number of potential areas that could be further developed, the methods reviewed provide a useful tool for inferring species extinction.
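One widely used sighting-record method of the kind reviewed above is Solow's (1993) stationary-Poisson test. The sketch below is purely illustrative and is not taken from the article or its R Shiny application; the function name and example data are invented:

```python
def solow_p_value(sightings, T):
    """Solow's (1993) stationary-Poisson test for extinction.

    Under the null hypothesis that the species persists and sightings
    follow a stationary Poisson process on (0, T], the n sighting times
    are i.i.d. uniform given n, so the p-value attached to the observed
    last sighting t_n is (t_n / T) ** n.  A small p-value suggests the
    species went extinct before the end of the observation window T.
    """
    n = len(sightings)
    t_n = max(sightings)
    if not (0 < t_n <= T):
        raise ValueError("sightings must lie in (0, T]")
    return (t_n / T) ** n

# Five sightings, the last in year 10, over a 20-year observation window.
p = solow_p_value([2, 4, 6, 8, 10], T=20)
print(p)  # (10/20)**5 = 0.03125
```

At the conventional 0.05 level this record would be taken as evidence of extinction, illustrating how sparse sighting data alone can drive the inference.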
The concurrency of edges, quantified by the number of edges that share a common node at a given time point, may be an important determinant of epidemic processes in temporal networks. We propose theoretically tractable Markovian temporal network models in which each edge flips between the active and inactive states in continuous time. The different models have different amounts of concurrency, while we can tune the models to share the same statistics of edge activation and deactivation (and hence the fraction of time for which each edge is active) and the structure of the aggregate (i.e., static) network. We analytically calculate the amount of concurrency of edges sharing a node for each model. We then numerically study the effects of concurrency on epidemic spreading in the stochastic susceptible-infectious-susceptible and susceptible-infectious-recovered dynamics on the proposed temporal network models. We find that concurrency enhances epidemic spreading near the epidemic threshold, although this effect is small in many cases. Furthermore, when the infection rate is substantially larger than the epidemic threshold, concurrency suppresses epidemic spreading in a majority of cases. In sum, our numerical simulations suggest that the impact of concurrency on enhancing epidemic spreading within our model is consistently present near the epidemic threshold but modest. The proposed temporal network models are expected to be useful for investigating the effects of concurrency on various collective dynamics on networks, including both infectious-disease dynamics and other processes.
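For a single edge, the dynamics described in the abstract reduce to a two-state continuous-time Markov chain alternating between inactive and active periods. As a minimal sketch (not the authors' implementation; the rates `a` and `d` and all names are illustrative assumptions), the stationary fraction of time an edge is active, a/(a+d), can be checked by simulation:

```python
import random

def active_fraction(a, d, n_cycles=200_000, seed=1):
    """Estimate the long-run fraction of time a single edge is active.

    The edge alternates between exponentially distributed inactive
    periods (activation rate a) and active periods (deactivation
    rate d); the stationary active fraction is a / (a + d).
    """
    rng = random.Random(seed)
    t_active = t_inactive = 0.0
    for _ in range(n_cycles):
        t_inactive += rng.expovariate(a)  # waiting time until activation
        t_active += rng.expovariate(d)    # active period until deactivation
    return t_active / (t_active + t_inactive)

est = active_fraction(a=2.0, d=3.0)
print(est)  # close to 2 / (2 + 3) = 0.4
```

Tuning all models to share these per-edge on/off statistics, as the abstract describes, fixes this active fraction while leaving the concurrency across edges free to vary.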
In this paper, a new point process is introduced. It combines the nonhomogeneous Poisson process with the generalized Pólya process (GPP) studied in the recent literature. In the reliability interpretation, each event (failure) from this process is minimally repaired with a given probability and GPP-repaired with the complementary probability. A characterization of the new process via the corresponding bivariate point process is presented. The mean numbers of events for the marginal processes are obtained via the corresponding rates, which are then used to study an optimal replacement problem as an application.
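The mixed minimal/GPP repair process above depends on the GPP intensity and is beyond a short sketch, but its nonhomogeneous Poisson building block is easy to simulate by standard Lewis–Shedler thinning. All names and parameters below are illustrative assumptions, not the paper's construction:

```python
import random

def thinning_nhpp(rate, rate_max, T, rng):
    """Lewis-Shedler thinning: simulate a nonhomogeneous Poisson
    process with intensity rate(t) <= rate_max on [0, T]."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)         # candidate from dominating HPP
        if t > T:
            return times
        if rng.random() < rate(t) / rate_max:  # accept w.p. rate(t)/rate_max
            times.append(t)

rng = random.Random(42)
# rate(t) = t on [0, 2]; the expected number of events is the integral, 2.
counts = [len(thinning_nhpp(lambda t: t, 2.0, 2.0, rng))
          for _ in range(20_000)]
print(sum(counts) / len(counts))  # close to 2
```

A repair model of the kind described would then, at each simulated failure epoch, flip a p-coin to decide between a minimal repair and a GPP repair.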
We investigate expansions for connectedness functions in the random connection model of continuum percolation in powers of the intensity. Precisely, we study the pair-connectedness and the direct-connectedness functions, related to each other via the Ornstein–Zernike equation. We exhibit the fact that the coefficients of the expansions consist of sums over connected and 2-connected graphs. In the physics literature, this is known to be the case more generally for percolation models based on Gibbs point processes and stands in analogy to the formalism developed for correlation functions in liquid-state statistical mechanics.
We find a representation of the direct-connectedness function and bounds on the intensity which allow us to pass to the thermodynamic limit. In some cases (e.g., in high dimensions), the results are valid in almost the entire subcritical regime. Moreover, we relate these expansions to the physics literature and we show how they coincide with the expression provided by the lace expansion.
We study an N-player game where a pure action of each player is to select a nonnegative function on a Polish space supporting a finite diffuse measure, subject to a finite constraint on the integral of the function. This function is used to define the intensity of a Poisson point process on the Polish space. The processes are independent over the players, and the value to a player is the measure of the union of her open Voronoi cells in the superposition point process. Under randomized strategies, the process of points of a player is thus a Cox process, and the nature of competition between the players is akin to that in Hotelling competition games. We characterize when such a game admits Nash equilibria and prove that when a Nash equilibrium exists, it is unique and consists of pure strategies that are proportional in the same proportions as the total intensities. We give examples of such games where Nash equilibria do not exist. A better understanding of the criterion for the existence of Nash equilibria remains an intriguing open problem.
We consider a variant of a classical coverage process, the Boolean model in $\mathbb{R}^d$. Previous efforts have focused on convergence of the unoccupied region containing the origin to a well-studied limit C. We study the intersection of sets centered at points of a Poisson point process confined to the unit ball. Using a coupling between the intersection model and the original Boolean model, we show that the scaled intersection converges weakly to the same limit C. Along the way, we present some tools for studying statistics of a class of intersection models.
Chapter 11 addresses time- and/or space-variant structural reliability problems. It begins with a description of problem types as encroaching or outcrossing, subject to the type of dependence on the time or space variable. A brief review of essentials from the random process theory is presented, including second-moment characterization of the process in terms of mean and auto-covariance functions and the power spectral density. Special attention is given to Gaussian and Poisson processes as building blocks for stochastic load modeling. Bounds to the failure probability are developed in terms of mean crossing rates or using a series system representation through parameter discretization. A Poisson-based approximation for rare failure events is also presented. Next, the Poisson process is used to build idealized stochastic load models that describe macro-level load changes or intermittent occurrences with random magnitudes and durations. The chapter concludes with the development of the load-coincidence method for combination of stochastic loads. The probability distribution of the maximum combined load effect is derived and used to estimate the failure probability.
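The crossing-rate bounds and the Poisson-based approximation for rare failure events described above can be made concrete for a zero-mean stationary Gaussian process, where Rice's formula gives the mean upcrossing rate in closed form. The numbers below are illustrative assumptions, not taken from the chapter:

```python
import math

def upcrossing_rate(b, sigma_x, sigma_xdot):
    """Rice's formula: mean rate of upcrossings of level b by a
    zero-mean stationary Gaussian process with standard deviation
    sigma_x and derivative standard deviation sigma_xdot."""
    return (sigma_xdot / (2 * math.pi * sigma_x)) * \
        math.exp(-b * b / (2 * sigma_x ** 2))

def poisson_failure_prob(b, sigma_x, sigma_xdot, T):
    """Poisson approximation for rare first-excursion failure over [0, T]:
    upcrossings of a high level are treated as a Poisson stream, so
    P_f(T) ~ 1 - exp(-nu_plus * T)."""
    return 1.0 - math.exp(-upcrossing_rate(b, sigma_x, sigma_xdot) * T)

# Illustrative numbers: unit-variance process, derivative sd 2*pi
# (i.e., one zero-upcrossing per unit time), threshold at 3 sigma.
nu = upcrossing_rate(b=3.0, sigma_x=1.0, sigma_xdot=2 * math.pi)
pf = poisson_failure_prob(3.0, 1.0, 2 * math.pi, T=100.0)
print(nu, pf)
```

The Poisson approximation is accurate only when upcrossings are rare and nearly independent, which is exactly the rare-event regime the chapter targets.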
We study, under mild conditions, the weak approximation constructed from a standard Poisson process for a class of Gaussian processes, and establish its sample path moderate deviations. The techniques consist of a good asymptotic exponential approximation in moderate deviations, the Besov–Lévy modulus embedding, and an exponential martingale technique. Moreover, our results are applied to the weak approximations associated with the moving average of Brownian motion, fractional Brownian motion, and an Ornstein–Uhlenbeck process.
In this paper, a novel statistical application of the large deviation principle (LDP) to the robot trajectory tracking problem is presented. The exit probability of the trajectory from the stability zone is evaluated in the presence of small-amplitude Gaussian and Poisson noise. Afterward, the limit of the partition function for the average tracking error energy is derived by solving a fourth-order system of Euler–Lagrange equations. The stability and computational complexity of the proposed approach are investigated to show its superiority over the Lyapunov method. Finally, the proposed algorithm is validated by Monte Carlo simulations and on the commercially available Omni Bundle™ robot.
We study an ergodic singular control problem with constraint of a regular one-dimensional linear diffusion. The constraint allows the agent to control the diffusion only at the jump times of an independent Poisson process. Under relatively weak assumptions, we characterize the optimal solution as an impulse-type control policy, where it is optimal to exert the exact amount of control needed to push the process to a unique threshold. Moreover, we discuss the connection of the present problem to ergodic singular control problems, and illustrate the results with different well-known cost and diffusion structures.
We prove an almost sure central limit theorem on the Poisson space, which is perfectly tailored for stabilizing functionals arising in stochastic geometry. As a consequence, we provide almost sure central limit theorems for (i) the total edge length of the k-nearest neighbors random graph, (ii) the clique count in random geometric graphs, and (iii) the volume of the set approximation via the Poisson–Voronoi tessellation.
Log-concavity [log-convexity] and their various properties play an increasingly important role in probability, statistics, operations research, and other fields. In this paper, we first establish general preservation theorems of log-concavity and log-convexity under the operator $\phi \longmapsto T(\phi , \theta )=\mathbb {E}[\phi (X_\theta )]$, θ ∈ Θ, where Θ is an interval of real numbers or an interval of integers, and the random variable $X_\theta$ has a distribution function belonging to the family $\{F_\theta , \theta \in \Theta \}$ possessing the semi-group property. The proofs are based on the theory of stochastic comparisons and weighted distributions. The main results are applied to some special operators, for example, operators occurring in reliability, Bernstein-type operators, and Beta-type operators. Several known results in the literature are recovered.
The longest gap $L(t)$ up to time $t$ in a homogeneous Poisson process is the maximal time subinterval between epochs of arrival times up to time $t$; it has applications in the theory of reliability. We study the Laplace transform asymptotics for $L(t)$ as $t\rightarrow \infty$ and derive two natural and different large-deviation principles for $L(t)$ with two distinct rate functions and speeds.
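A quick Monte Carlo sketch makes the slow, logarithmic growth of the longest gap $L(t)$ visible. The code is illustrative and not from the paper; whether the boundary intervals (before the first arrival and after the last) count as gaps is a modelling choice, made explicit in the comments:

```python
import random

def longest_gap(rate, t, rng):
    """Longest inter-epoch gap of a homogeneous Poisson process with
    the given rate, observed on [0, t].  Here the endpoints 0 and t
    are included as epochs, so boundary intervals count as gaps."""
    arrivals, s = [0.0], 0.0
    while True:
        s += rng.expovariate(rate)
        if s > t:
            break
        arrivals.append(s)
    arrivals.append(t)
    return max(b - a for a, b in zip(arrivals, arrivals[1:]))

rng = random.Random(7)
t = 1000.0
gaps = [longest_gap(1.0, t, rng) for _ in range(2000)]
# For unit rate, L(t) grows like log t; here log(1000) is about 6.9.
print(sum(gaps) / len(gaps))
```

The heavy concentration of $L(t)$ around $(\log t)/\lambda$ is what makes its large deviations, the subject of the paper, a genuinely rare-event question.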
The absence of quantitative in vitro cell–extracellular matrix models represents an important bottleneck for basic research and human health. The randomness of cellular distributions provides an opportunity for the development of a quantitative in vitro model; however, quantification of the randomness of random cell distributions is still lacking. In this paper, we have imaged cellular distributions in an alginate matrix using a multiview light sheet microscope and developed quantification metrics of randomness by modeling it as a Poisson process, a process with a constant probability of occurrence per unit space or time. We imaged fluorescently labeled human mesenchymal stem cells embedded in an alginate matrix of thickness greater than 5 mm with $\sim 2.9 \pm 0.4\,\mu\mathrm{m}$ axial resolution, the mean full width at half maximum of the axial intensity profiles of fluorescent particles. The simulated randomness agrees well with the experiments. Quantification of distributions and validation by simulations will enable the quantitative study of cell–matrix interactions in tissue models.
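Under the Poisson null model used above, the counts of points falling in disjoint, equal-volume cells of the matrix are independent Poisson random variables, so the variance-to-mean ratio of per-cell counts is a simple diagnostic of complete spatial randomness. The following sketch uses an assumed per-cell mean and invented names; it is not the paper's pipeline:

```python
import math
import random

def poisson_count(mean, rng):
    """Knuth's method for drawing a Poisson random count
    (adequate for small means such as a per-cell average)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

# Simulated per-cell counts for a homogeneous Poisson point process:
# 4000 disjoint cells with an assumed mean of 10 points each.
rng = random.Random(0)
counts = [poisson_count(10.0, rng) for _ in range(4000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
print(v / m)  # near 1 for complete spatial randomness
```

A ratio well above 1 would indicate clustering of cells, and well below 1 an overly regular (repulsive) arrangement, either of which would reject the Poisson randomness model.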
In Weil (2001) formulae were proved for stationary Boolean models Z in ℝd with convex or polyconvex grains, which express the densities (specific mean values) of mixed volumes of Z in terms of related mean values of the underlying Poisson particle process X. These formulae were then used to show that in dimensions 2 and 3 the densities of mixed volumes of Z determine the intensity γ of X. For d = 4, a corresponding result was also stated, but the proof given was incomplete, since in the formula for the density of the Euler characteristic $\overline{V}_0(Z)$ of Z a term $\overline V^{(0)}_{2,2}(X,X)$ was missing. This was pointed out in Goodey and Weil (2002), where it was also explained that a new decomposition result for mixed volumes and mixed translative functionals would be needed to complete the proof. Such a general decomposition result has recently been proved by Hug, Rataj, and Weil (2013), (2018) and is based on flag measures of the convex bodies involved. Here, we show that such flag representations not only lead to a correct derivation of the four-dimensional result, but even yield a corresponding uniqueness theorem in all dimensions. In the proof of the latter we make use of Alesker’s representation theorem for translation invariant valuations. We also discuss which shape information can be obtained in this way and comment on the situation in the nonstationary case.
The fractional nonhomogeneous Poisson process was introduced by a time change of the nonhomogeneous Poisson process with the inverse α-stable subordinator. We propose a similar definition for the (nonhomogeneous) fractional compound Poisson process. We give both finite-dimensional and functional limit theorems for the fractional nonhomogeneous Poisson process and the fractional compound Poisson process. The results are derived by using martingale methods, regular variation properties, and Anscombe’s theorem. Finally, some of the limit results are verified in a Monte Carlo simulation.
In the first part of this paper we consider a general stationary subcritical cluster model in ℝd. The associated pair-connectedness function can be defined in terms of two-point Palm probabilities of the underlying point process. Using Palm calculus and Fourier theory we solve the Ornstein–Zernike equation (OZE) under quite general distributional assumptions. In the second part of the paper we discuss the analytic and combinatorial properties of the OZE solution in the special case of a Poisson-driven random connection model.
We consider a general class of epidemic models, obtained by applying the random time changes of Ethier and Kurtz (2005) to a collection of Poisson processes, and we establish a large deviation principle for such models. We generalise the approach followed by Dolgoarshinnykh (2009) in the case of the SIR epidemic model. Thanks to an additional assumption, which is satisfied in many examples, we simplify the recent work of Kratz and Pardoux (2017).