The empty space function of a stationary point process in ℝ^d is the function that assigns to each r > 0 the probability that there is no point of the process within distance r of the origin O. In a recent paper van Lieshout and Baddeley study the so-called J-function, which is defined as the ratio of the empty space function of a stationary point process to that of its corresponding reduced Palm process. They advocate the use of the J-function as a characterization of the type of spatial interaction.
Therefore it is natural to ask whether J ≡ 1 implies that the point process is Poisson. We restrict our analysis to the one-dimensional case and show that a classical construction by Szász provides an immediate counterexample. In this example the interpoint distances are still exponentially distributed. This raises the question of whether it is possible to have J ≡ 1 with non-exponentially distributed interpoint distances. We construct a point process with J ≡ 1 in which the interpoint distances are bounded.
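As an illustrative sanity check (not the construction above), the defining identity J ≡ 1 can be verified by simulation for a homogeneous Poisson process on the line: writing J(r) = (1 − G(r))/(1 − F(r)), where F is the empty space function and G the nearest-neighbour distance distribution, Slivnyak's theorem gives F = G and hence J ≡ 1. The intensity, window length and test radius below are arbitrary choices.

```python
import bisect
import random

random.seed(0)
lam, length, r = 1.0, 20000.0, 0.5  # illustrative intensity, window, radius

# Homogeneous Poisson process on [0, length] via i.i.d. exponential gaps.
pts, t = [], random.expovariate(lam)
while t < length:
    pts.append(t)
    t += random.expovariate(lam)

def dist_to_nearest(x):
    # Distance from location x to the nearest process point (pts is sorted).
    i = bisect.bisect_left(pts, x)
    cands = []
    if i > 0:
        cands.append(x - pts[i - 1])
    if i < len(pts):
        cands.append(pts[i] - x)
    return min(cands)

# Empty space function F(r): distance from a uniform test location to the process.
m = 20000
hits = sum(dist_to_nearest(random.uniform(0.1 * length, 0.9 * length)) <= r
           for _ in range(m))
F = hits / m

# Nearest-neighbour function G(r): distance from a typical point to the rest.
inner = range(1, len(pts) - 1)  # skip the two edge points
G = sum(min(pts[i] - pts[i - 1], pts[i + 1] - pts[i]) <= r for i in inner) / (len(pts) - 2)

J = (1.0 - G) / (1.0 - F)
```

For the Poisson process both estimates should agree with 1 − e^(−2λr), so J comes out close to 1; for a process with interaction the two estimators would differ.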
This paper considers a large class of non-stationary random fields which have fractal characteristics and may exhibit long-range dependence. Its motivation comes from a Lipschitz–Hölder-type condition in the spectral domain.
The paper develops a spectral theory for the random fields, including a spectral decomposition, a covariance representation and a fractal index. From the covariance representation, the covariance function and spectral density of these fields are defined. These concepts are useful in multiscaling analysis of random fields with long-range dependence.
In this article we study stochastic perturbations of partial differential equations describing forced-damped vibrations of a string. Two models of such stochastic disturbances are considered: one is triggered by an initial white noise, and the other is in the form of non-Gaussian random forcing. Let u_ε(t, x) be the displacement at time t ≧ 0 of a point x on the string. The small parameter ε controls the intensity of the random fluctuations. The random fields u_ε(t, x) are shown to satisfy a large deviations principle, and the random deviations from the unperturbed displacement function are analyzed as the noise parameter ε tends to zero.
A random vibration model is investigated in this paper. The model is formulated as a cosine function with a constant frequency and a random walk phase. We show that this model is second-order stationary and can be rewritten as a vector-valued AR(1) model as well as a scalar ARMA(2, 1) model. The linear innovation sequence of the AR(1) model is shown to be a martingale difference sequence, while the linear innovation sequence of the ARMA(2, 1) model is only an uncorrelated sequence. A non-linear predictor is derived from the AR(1) model, while a linear predictor is derived from the ARMA(2, 1) model. We deduce that the non-linear predictor of this model has smaller mean square error than the linear predictor. This has significance, for example, for predicting seasonal phenomena with this model. In addition, the limit distributions of the sample mean, the finite Fourier transforms and the autocovariance functions are derived using a martingale approach. The limit distribution of the autocovariance functions differs from the classical result given by Bartlett's formula.
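The second-order stationarity of such a model is easy to check numerically. The sketch below makes two illustrative assumptions not fixed by the abstract: Gaussian phase increments and a uniform initial phase. Under these assumptions a direct calculation gives the lag-h autocovariance ½ e^(−hσ²/2) cos(ω₀h), which the sample autocovariance can be compared against.

```python
import math
import random

random.seed(1)
omega0, sigma, n = 0.7, 0.3, 200000  # illustrative frequency, step s.d., length

# X_t = cos(omega0 * t + U + S_t): uniform initial phase U, random-walk phase S_t.
phase = random.uniform(0.0, 2.0 * math.pi)
x = []
for t in range(n):
    x.append(math.cos(omega0 * t + phase))
    phase += random.gauss(0.0, sigma)  # Gaussian phase increment (assumption)

h = 1
mean = sum(x) / n
acov = sum((x[t] - mean) * (x[t + h] - mean) for t in range(n - h)) / (n - h)

# With Gaussian increments and uniform initial phase the lag-h autocovariance is
# 0.5 * exp(-h * sigma**2 / 2) * cos(omega0 * h), independent of t.
theory = 0.5 * math.exp(-h * sigma ** 2 / 2) * math.cos(omega0 * h)
```

Because the autocovariance depends on the lag only, the same estimate computed over any sub-window of the series should agree with `theory` up to Monte-Carlo error.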
A disaster occurs in a queue when a negative arrival causes all the work (and therefore customers) to leave the system instantaneously. Recent papers have addressed several issues pertaining to queueing networks with negative arrivals under the i.i.d. exponential service times assumption. Here we relax this assumption and derive a Pollaczek–Khintchine-like formula for M/G/1 queues with disasters by making use of the preemptive LIFO discipline. As a byproduct, the stationary distribution of the remaining service time process is obtained for queues operating under this discipline. Finally, as an application, we obtain the Laplace transform of the stationary remaining service time of the customer in service for unstable preemptive LIFO M/G/1 queues.
We prove a monotonicity property for a function of general square integrable pairs of martingales which is useful in fractal-based algorithms for compression of image data.
Prediction for autoregressive sequences with finite second moment and of general order is considered. It is shown that the best predictor with time reversed is linear if and only if the innovations are Gaussian. The connection to time reversibility is also discussed.
We consider two independent homogeneous Poisson processes Π_0 and Π_1 in the plane with intensities λ_0 and λ_1, respectively. We study additive functionals of the set of Π_0-particles within a typical Voronoi Π_1-cell. We find the first and the second moments of these variables as well as upper and lower bounds on their distribution functions, implying an exponential asymptotic behavior of their tails. Explicit formulae are given for the number and the sum of distances from Π_0-particles to the nucleus within a typical Voronoi Π_1-cell.
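The first moment of the particle count is easy to check by simulation. By Slivnyak's theorem the typical Π_1-cell can be realised as the Voronoi cell of an extra nucleus placed at the origin, and since each Π_0-particle lies in exactly one cell of mean area 1/λ_1, the expected count is λ_0/λ_1. The window size, intensities and cut-off radius below are illustrative choices, with edge effects ignored.

```python
import random

random.seed(2)
lam0, lam1, half = 5.0, 1.0, 5.0  # intensities and half-width of the square window

def poisson_points(lam, half):
    # Poisson(lam * area) uniform points in the square [-half, half]^2;
    # the count is sampled by counting rate-1 exponential arrivals before lam * area.
    area = (2 * half) ** 2
    n, t = 0, random.expovariate(1.0)
    while t < lam * area:
        n += 1
        t += random.expovariate(1.0)
    return [(random.uniform(-half, half), random.uniform(-half, half))
            for _ in range(n)]

reps, total = 500, 0
for _ in range(reps):
    nuclei = poisson_points(lam1, half)           # Pi_1 nuclei
    for (x, y) in poisson_points(lam0, half):     # Pi_0 particles
        d0 = x * x + y * y                        # squared distance to origin-nucleus
        if d0 > 9.0:
            continue  # with lam1 = 1 the origin's cell essentially never reaches radius 3
        # Count the particle if the origin is its nearest nucleus.
        if all(d0 <= (x - a) ** 2 + (y - b) ** 2 for (a, b) in nuclei):
            total += 1

mean_count = total / reps  # should be near lam0 / lam1 = 5
```

The count per cell is over-dispersed relative to Poisson (its variance picks up a λ_0² Var(cell area) term), consistent with the exponential tail bounds described above.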
We investigate the ‘clumping versus local finiteness' behavior in the infinite backward tree for a class of branching particle systems in ℝ^d with symmetric stable migration and critical ‘genuine multitype' branching. Under mild assumptions on the branching we establish, by analysing certain ergodic properties of the individual ancestral process, a critical dimension d_c such that the (measure-valued) tree-top is almost surely locally finite if and only if d > d_c. This result is used to obtain L^1-norm asymptotics of a corresponding class of systems of non-linear partial differential equations.
Let θ(a) be the first time at which the range (R_n; n ≧ 0) equals a, where R_n is the difference between the maximum and the minimum, taken at time n, of a simple random walk on ℤ. We compute the generating function of θ(a); this allows us to compute the distributions of θ(a) and R_n. We also investigate the asymptotic behaviour of θ(n) as n → ∞.
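As a small worked check of these definitions: for a = 2 one can compute E[θ(2)] = 3 directly, since the first step always brings the range to 1 (say the walk is at +1 with min 0, max 1), after which the range reaches 2 when the walk hits −1 or 2, an exit problem with expected duration (1 − (−1))(2 − 1) = 2. The sketch below estimates this by simulation.

```python
import random

random.seed(3)

def theta(a):
    """First time the range max - min of a simple random walk on Z equals a."""
    pos, lo, hi, t = 0, 0, 0, 0
    while hi - lo < a:
        pos += random.choice((-1, 1))
        lo, hi = min(lo, pos), max(hi, pos)
        t += 1
    return t

reps = 20000
mean = sum(theta(2) for _ in range(reps)) / reps
# Direct computation: E[theta(2)] = 1 + 2 = 3 (one step to range 1, then an
# expected exit time of 2 from the interior of the interval {-1, ..., 2}).
```

The same routine, run for growing a, gives a quick empirical handle on the asymptotics of θ(n) studied above.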
Kallenberg [2] introduced the concept of F-exchangeable sequences of random variables and produced some characterizations of F-exchangeability in terms of stopping times. In this paper, ways of extending the concept of F-exchangeability to doubly indexed arrays of random variables are explored, and characterizations are obtained for row and column exchangeable arrays, weakly exchangeable arrays and separately exchangeable continuous processes.
We study a classical stochastic control problem arising in financial economics: to maximize expected logarithmic utility from terminal wealth and/or consumption. The novel feature of our work is that the portfolio is allowed to anticipate the future, i.e. the terminal values of the prices, or of the driving Brownian motion, are known to the investor, either exactly or with some uncertainty. Results on the finiteness of the value of the control problem are obtained in various setups, using techniques from the so-called enlargement of filtrations. When the value of the problem is finite, we compute it explicitly and exhibit an optimal portfolio in closed form.
We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem.
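The state-dependent discrete-time strategy described here can be illustrated with a minimal sketch, under assumed illustrative parameters: a coin of unknown bias with a Beta(1, 1) prior, and a bet of the fraction 2·(posterior mean) − 1 of current wealth whenever that is positive (the Kelly fraction with the unknown success probability replaced by its posterior mean). With known bias p this reduces to the classical Kelly fraction 2p − 1.

```python
import math
import random

random.seed(4)
p_true, alpha, beta = 0.6, 1.0, 1.0  # true coin bias and Beta(1, 1) prior (assumptions)
log_wealth, wins, losses = 0.0, 0, 0

for _ in range(5000):
    # Posterior is Beta(alpha + wins, beta + losses); bet on its mean.
    post_mean = (alpha + wins) / (alpha + beta + wins + losses)
    f = max(0.0, 2.0 * post_mean - 1.0)  # Kelly-type fraction; no bet if unfavourable
    heads = random.random() < p_true
    log_wealth += math.log1p(f if heads else -f)
    wins += heads
    losses += not heads

# As data accumulate, f approaches 2 * 0.6 - 1 = 0.2, whose growth rate
# 0.6 * log(1.2) + 0.4 * log(0.8) is positive, so log-wealth drifts upward.
```

The gap between this Bayesian controller's growth rate and that of the full-information Kelly bettor is one way to read the "financial cost of learning" quantified above.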
We establish stability, monotonicity, concavity and subadditivity properties for open stochastic storage networks in which the driving process has stationary increments. A principal example is a stochastic fluid network in which the external inputs are random but all internal flows are deterministic. For the general model, the multi-dimensional content process is tight under the natural stability condition. The multi-dimensional content process is also stochastically increasing when the process starts at the origin, implying convergence to a proper limit under the natural stability condition. In addition, the content process is monotone in its initial conditions. Hence, when any content process with non-zero initial conditions hits the origin, it couples with the content process starting at the origin. However, in general, a tight content process need not hit the origin.
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
In 1979, Melamed proved that, in an open migration process, the absence of ‘loops' is necessary and sufficient for the equilibrium flow along a link to be a Poisson process. In this paper, we prove approximation theorems with the same flavour: the difference between the equilibrium flow along a link and a Poisson process with the same rate is bounded in terms of expected numbers of loops. The proofs are based on Stein's method, as adapted for bounds on the distance of the distribution of a point process from a Poisson process in Barbour and Brown (1992b). Three different distances are considered, and illustrated with an example consisting of a system of tandem queues with feedback. The upper bound on the total variation distance of the process grows linearly with time, and a lower bound shows that this can be the correct order of approximation.
The paper introduces an approach to modelling the dynamics of financial markets. It is based on three principles: market clearing, exclusion of instantaneous arbitrage, and minimization of the increase of arbitrage information. The last principle is equivalent to minimizing the difference between the risk-neutral and the real-world probability measures. Applying these principles allows us to identify various market parameters, e.g. the risk-free rate of return. The approach is demonstrated on a simple financial market model, for which the dynamics of a virtual risk-free rate of return can be explicitly computed.
The so-called ‘Swiss Army formula', derived by Brémaud, seems to be a general purpose relation which includes all known relations of Palm calculus for stationary stochastic systems driven by point processes. The purpose of this article is to present a short, and rather intuitive, proof of the formula. The proof is based on the Ryll–Nardzewski definition of the Palm probability as a Radon-Nikodym derivative, which, in a stationary context, is equivalent to the Mecke definition.
As well as having complete knowledge of the future, a superprophet can also alter the order in which the observations are presented to a player without foresight, whose strategy is known to the prophet. It is shown that, when the underlying random sequence is independent, a superprophet can do at most twice as well as his counterpart.
Consider the optimal control problem of leaving an interval (−a, a) in a limited playing time. In the discrete-time problem, a is a positive integer and the player's position is given by a simple random walk on the integers with initial position x. At each time instant, the player chooses a coin from a control set where the probability of returning heads depends on the current position and the remaining amount of playing time, and the player is betting a unit value on the toss of the coin: heads returning +1 and tails −1. We discuss the optimal strategy for this discrete-time game. In the continuous-time problem the player chooses infinitesimal mean and infinitesimal variance parameters from a control set which may depend upon the player's position. The problem is to find optimal mean and variance parameters that maximize the probability of leaving the interval [−a, a] within a finite time T > 0.
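The uncontrolled baseline of the discrete-time game (a single fair coin) is a useful sanity check, computable by backward recursion on the value function V(x, t) = P(leave (−a, a) within t remaining steps), with V(x, 0) = 0 inside the interval, V(±a, ·) = 1, and V(x, t) = ½V(x − 1, t − 1) + ½V(x + 1, t − 1). The sketch below implements only this fixed-coin case; the controlled game would replace the average by a maximum over the coins in the control set.

```python
def exit_prob(a, T, x0=0, p=0.5):
    # V[x]: probability of leaving (-a, a) within the remaining steps, for a
    # walk stepping +1 with probability p (p = 1/2 is the fair, uncontrolled coin).
    V = {x: (1.0 if abs(x) >= a else 0.0) for x in range(-a, a + 1)}
    for _ in range(T):
        V = {x: (1.0 if abs(x) >= a
                 else p * V[x + 1] + (1 - p) * V[x - 1])
             for x in range(-a, a + 1)}
    return V[x0]

# From 0 with a = 2 and two steps, the walk exits iff both steps agree: prob 1/2.
print(exit_prob(2, 2))  # 0.5
```

For a = 1 any first step leaves the interval, so `exit_prob(1, 1)` returns 1.0, which provides a second quick check on the recursion.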