We study Gibbs measures with log-correlated base Gaussian fields on the d-dimensional torus. In the defocusing case, the construction of such Gibbs measures follows from Nelson’s argument. In this paper, we consider the focusing case with a quartic interaction. Using the variational formulation, we prove nonnormalizability of the Gibbs measure. When $d = 2$, our argument provides an alternative proof of the nonnormalizability result for the focusing $\Phi ^4_2$-measure by Brydges and Slade (1996). Furthermore, we provide a precise rate of divergence, where the constant is characterized by the optimal constant for a certain Bernstein’s inequality on $\mathbb R^d$. We also discuss the construction of the focusing Gibbs measure with a cubic interaction. In the appendices, we present (a) nonnormalizability of the Gibbs measure for the two-dimensional Zakharov system and (b) the construction of focusing quartic Gibbs measures with smoother base Gaussian measures, showing the critical nature of the log-correlated Gibbs measure with a focusing quartic interaction.
We consider the super-replication problem for a class of exotic options known as life-contingent options within the framework of the Black–Scholes market model. The option is allowed to be exercised if the death of the option holder occurs before the expiry date; otherwise, there is a compensation payoff at the expiry date. We show that there exists a minimal super-replication portfolio and determine the associated initial investment. We then give a characterisation of when replication of the option is possible. Finally, we give an example of an explicit super-replicating hedge for a simple life-contingent option.
This paper analyzes the training process of generative adversarial networks (GANs) via stochastic differential equations (SDEs). It first establishes SDE approximations for the training of GANs under stochastic gradient algorithms, with precise error bound analysis. It then describes the long-run behavior of GAN training via the invariant measures of its SDE approximations under proper conditions. This work builds a theoretical foundation for GAN training and provides analytical tools to study its evolution and stability.
In this paper we obtain a duality result for the exponential utility maximization problem where trading is subject to quadratic transaction costs and the investor is required to liquidate her position at the maturity date. As an application of the duality, we treat utility-based hedging in the Bachelier model. For European contingent claims with a quadratic payoff, we compute the optimal trading strategy explicitly.
We consider an insurance company modelling its surplus process by a Brownian motion with drift. Our target is to maximise the expected exponential utility of discounted dividend payments, given that the dividend rates are bounded by some constant. The utility function destroys the linearity and the time-homogeneity of the problem considered. The value function depends not only on the surplus, but also on time. Numerical considerations suggest that the optimal strategy, if it exists, is of a barrier type with a nonlinear barrier. In the related article of Grandits et al. (Scand. Actuarial J. 2, 2007), it was observed that standard numerical methods break down in certain parameter cases, and no closed-form solution has been found. For these reasons, we offer a new method that allows one to estimate the distance from an arbitrary smooth-enough function to the value function. Applying this method, we investigate the goodness of the most obvious suboptimal strategies (payout at the maximal rate, and constant barrier strategies) by measuring the distance from their performance functions to the value function.
In this paper we study a class of optimal stopping problems under g-expectation, that is, the cost function is described by the solution of backward stochastic differential equations (BSDEs). Primarily, we assume that the reward process is $L\exp\bigl(\mu\sqrt{2\log(1+L)}\bigr)$-integrable with $\mu>\mu_0$ for some critical value $\mu_0$. This integrability is weaker than $L^p$-integrability for any $p>1$, so it covers a comparatively wide class of optimal stopping problems. To reach our goal, we introduce a class of reflected backward stochastic differential equations (RBSDEs) with $L\exp\bigl(\mu\sqrt{2\log(1+L)}\bigr)$-integrable parameters. We prove the existence, uniqueness, and comparison theorem for these RBSDEs under Lipschitz-type assumptions on the coefficients. This allows us to characterize the value function of our optimal stopping problem as the unique solution of such RBSDEs.
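The claimed comparison with $L^p$-integrability can be checked by a standard growth-rate argument (our own one-line verification, not taken from the paper): for any $p>1$,
$$\frac{L\exp\bigl(\mu\sqrt{2\log(1+L)}\bigr)}{L^{p}}=\exp\bigl(\mu\sqrt{2\log(1+L)}-(p-1)\log L\bigr)\longrightarrow 0\quad\text{as }L\to\infty,$$
since $\sqrt{\log(1+L)}=o(\log L)$. Hence every $L^p$-integrable reward with $p>1$ is $L\exp\bigl(\mu\sqrt{2\log(1+L)}\bigr)$-integrable, but not conversely.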
In this paper, we study the optimal multiple stopping problem under the filtration-consistent nonlinear expectations. The reward is given by a set of random variables satisfying some appropriate assumptions, rather than a process that is right-continuous with left limits. We first construct the optimal stopping time for the single stopping problem, which is no longer given by the first hitting time of processes. We then prove by induction that the value function of the multiple stopping problem can be interpreted as the one for the single stopping problem associated with a new reward family, which allows us to construct the optimal multiple stopping times. If the reward family satisfies some strong regularity conditions, we show that the reward family and the value functions can be aggregated by some progressive processes. Hence, the optimal stopping times can be represented as hitting times.
We prove a rate of convergence for the $N$-particle approximation of a second-order partial differential equation in the space of probability measures, such as the master equation or Bellman equation of the mean-field control problem under common noise. The rate is of order $1/N$ for the pathwise error on the solution $v$ and of order $1/\sqrt{N}$ for the $L^2$-error on its L-derivative $\partial_\mu v$. The proof relies on backward stochastic differential equation techniques.
In this paper we propose a general framework for modeling an insurance liability cash flow in continuous time, by generalizing the reduced-form framework for credit risk and life insurance. In particular, we assume a nontrivial dependence structure between the reference filtration and the insurance internal filtration. We apply these results to pricing and hedging non-life insurance liabilities in hybrid financial and insurance markets, while taking into account the role of inflation under the benchmarked risk-minimization approach. This framework offers at the same time a general and flexible structure, and an explicit and tractable pricing–hedging formula.
We study an intertemporal consumption and portfolio choice problem under Knightian uncertainty in which the agent’s preferences exhibit local intertemporal substitution. We also allow for market frictions in the sense that the pricing functional is nonlinear. We prove existence and uniqueness of the optimal consumption plan, and we derive a set of sufficient first-order conditions for optimality. With the help of a backward equation, we are able to determine the structure of optimal consumption plans. We obtain explicit solutions in a stationary setting in which the financial market has different risk premia for short and long positions.
A set of data with positive values follows a Pareto distribution if the log–log plot of value versus rank is approximately a straight line. A Pareto distribution satisfies Zipf’s law if the log–log plot has a slope of $-1$. Since many types of ranked data follow Zipf’s law, it is considered a form of universality. We propose a mathematical explanation for this phenomenon based on Atlas models and first-order models, systems of strictly positive continuous semimartingales with parameters that depend only on rank. We show that the stationary distribution of an Atlas model will follow Zipf’s law if and only if two natural conditions, conservation and completeness, are satisfied. Since Atlas models and first-order models can be constructed to approximate systems of time-dependent rank-based data, our results can explain the universality of Zipf’s law for such systems. However, ranked data generated by other means may follow non-Zipfian Pareto distributions. Hence, our results explain why Zipf’s law holds for word frequency, firm size, household wealth, and city size, while it does not hold for earthquake magnitude, cumulative book sales, and the intensity of wars, all of which follow non-Zipfian Pareto distributions.
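The defining property above can be illustrated numerically. As a toy check (our own illustration, not the Atlas-model machinery of the paper), data of the exact Zipf form value $\propto$ 1/rank produce a log–log regression slope of $-1$:

```python
import math

# Exact Zipf data: value proportional to 1/rank.
ranks = list(range(1, 1001))
values = [1000.0 / r for r in ranks]

# Least-squares slope of log(value) against log(rank).
xs = [math.log(r) for r in ranks]
ys = [math.log(v) for v in values]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(round(slope, 6))  # → -1.0
```

A non-Zipfian Pareto distribution would instead yield a straight log–log plot with some slope other than $-1$.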
It is well understood that a supercritical superprocess is equal in law to a discrete Markov branching process whose genealogy is dressed in a Poissonian way with immigration which initiates subcritical superprocesses. The Markov branching process corresponds to the genealogical description of prolific individuals, that is, individuals who produce eternal genealogical lines of descent, and is often referred to as the skeleton or backbone of the original superprocess. The Poissonian dressing along the skeleton may be considered to be the remaining non-prolific genealogical mass in the superprocess. Such skeletal decompositions are equally well understood for continuous-state branching processes (CSBP).
In a previous article [16] we developed an SDE approach to study the skeletal representation of CSBPs, which provided a common framework for the skeletal decompositions of supercritical and (sub)critical CSBPs. It also helped us to understand how the skeleton thins down onto one infinite line of descent when conditioning on survival until larger and larger times, and eventually forever.
Here our main motivation is to show the robustness of the SDE approach by expanding it to the spatial setting of superprocesses. The current article only considers supercritical superprocesses, leaving the subcritical case open.
We construct global-in-time singular dynamics for the (renormalized) cubic fourth-order nonlinear Schrödinger equation on the circle, having the white noise measure as an invariant measure. For this purpose, we introduce the ‘random-resonant / nonlinear decomposition’, which allows us to single out the singular component of the solution. Unlike the classical McKean, Bourgain, Da Prato-Debussche type argument, this singular component is nonlinear, consisting of arbitrarily high powers of the random initial data. We also employ a random gauge transform, leading to random Fourier restriction norm spaces. For this problem, a contraction argument does not work, and we instead establish the convergence of smooth approximating solutions by studying the partially iterated Duhamel formulation under the random gauge transform. We reduce the crucial nonlinear estimates to boundedness properties of certain random multilinear functionals of the white noise.
Let (Y, Z) denote the solution to a forward-backward stochastic differential equation (FBSDE). If one constructs a random walk $B^n$ from the underlying Brownian motion B by Skorokhod embedding, one can show $L_2$-convergence of the corresponding solutions $(Y^n,Z^n)$ to $(Y, Z).$ We estimate the rate of convergence based on smoothness properties, especially for a terminal condition function in $C^{2,\alpha}$. The proof relies on an approximative representation of $Z^n$ and uses the concept of discretized Malliavin calculus. Moreover, we use growth and smoothness properties of the partial differential equation associated to the FBSDE, as well as of the finite difference equations associated to the approximating stochastic equations. We derive these properties by probabilistic methods.
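The construction of the random walk $B^n$ by Skorokhod embedding can be sketched as follows (a toy illustration under our own assumptions: a discretised Brownian path stands in for exact hitting times, and the step size $h$ and all names are ours, not the paper's):

```python
import random

def embedded_walk(brownian_path, h):
    """Record the Brownian path at the successive exit times of
    intervals of half-width h around the current walk value; the
    resulting walk moves by exactly +/- h at each recorded time."""
    walk = [0.0]
    level = 0.0
    for b in brownian_path:
        if abs(b - level) >= h:
            level += h if b > level else -h
            walk.append(level)
    return walk

# Simulate a Brownian path on a fine grid.
random.seed(1)
dt = 1e-4
b, path = 0.0, []
for _ in range(200_000):
    b += random.gauss(0.0, dt ** 0.5)
    path.append(b)

walk = embedded_walk(path, h=0.1)
# Every increment of the embedded walk has magnitude h.
assert all(
    abs(abs(walk[i + 1] - walk[i]) - 0.1) < 1e-9
    for i in range(len(walk) - 1)
)
```

The grid-based exit detection only approximates the true hitting times; in the paper the embedding is exact and the convergence rate of $(Y^n, Z^n)$ is quantified in $L_2$.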
We give a dynamic extension result of the (static) notion of a deviation measure. We also study distribution-invariant deviation measures and show that the only dynamic deviation measure which is law invariant and recursive is the variance.
It is well understood that a supercritical continuous-state branching process (CSBP) is equal in law to a discrete continuous-time Galton–Watson process (the skeleton of prolific individuals) whose edges are dressed in a Poissonian way with immigration which initiates subcritical CSBPs (non-prolific mass). Equally well understood in the setting of CSBPs and superprocesses is the notion of a spine or immortal particle dressed in a Poissonian way with immigration which initiates copies of the original CSBP, which emerges when conditioning the process to survive eternally. In this article we revisit these notions for CSBPs and put them in a common framework using the well-established language of (coupled) stochastic differential equations (SDEs). In this way we are able to deal simultaneously with all types of CSBPs (supercritical, critical, and subcritical) as well as understanding how the skeletal representation becomes, in the sense of weak convergence, a spinal decomposition when conditioning on survival. We have two principal motivations. The first is to prepare the way to expand the SDE approach to the spatial setting of superprocesses, where recent results have increasingly sought the use of skeletal decompositions to transfer results from the branching particle setting to the setting of measure valued processes. The second is to provide a pathwise decomposition of CSBPs in the spirit of genealogical coding of CSBPs via Lévy excursions, albeit precisely where the aforesaid coding fails to work because the underlying CSBP is supercritical.
We introduce variance-optimal semi-static hedging strategies for a given contingent claim. To obtain a tractable formula for the expected squared hedging error and the optimal hedging strategy we use a Fourier approach in a multidimensional factor model. We apply the theory to set up a variance-optimal semi-static hedging strategy for a variance swap in the Heston model, which is affine, in the 3/2 model, which is not, and in a market model including jumps.
We study the stochastic cubic nonlinear Schrödinger equation (SNLS) with an additive noise on the one-dimensional torus. In particular, we prove local well-posedness of the (renormalized) SNLS when the noise is almost space–time white noise. We also discuss a notion of criticality in this stochastic context, comparing the situation with the stochastic cubic heat equation (also known as the stochastic quantization equation).
In large storage systems, files are often coded across several servers to improve reliability and retrieval speed. We study load balancing under the batch sampling routeing scheme for a network of n servers storing a set of files using a maximum distance separable (MDS) code (cf. Li (2016)). Specifically, each file is stored in equally sized pieces across L servers such that any k pieces can reconstruct the original file. When a request for a file is received, the dispatcher routes the job to the k shortest queues among the L whose servers contain a piece of the requested file. We establish a law of large numbers and a central limit theorem as the system becomes large (i.e. n → ∞), for the setting where all interarrival and service times are exponentially distributed. For the central limit theorem, the limit process takes values in ℓ2, the space of square-summable sequences. Due to the large size of such systems, a direct analysis of the n-server system is frequently intractable. The law of large numbers and diffusion approximations established in this work provide practical tools with which to perform such analysis. The power-of-d routeing scheme, also known as the supermarket model, is a special case of the model considered here.
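The routeing rule itself is simple to state in code. The following is a minimal sketch (our own toy implementation with assumed names such as `route_request`; the paper's contribution is the n → ∞ asymptotics, not this finite simulation):

```python
import random

def route_request(queue_lengths, file_servers, k):
    """Route one request: among the L servers holding a piece of the
    requested file, pick the k with the shortest queues and enqueue
    one job at each. Returns the chosen server indices."""
    chosen = sorted(file_servers, key=lambda i: queue_lengths[i])[:k]
    for i in chosen:
        queue_lengths[i] += 1
    return chosen

random.seed(0)
n, L, k = 10, 4, 2               # toy sizes for illustration
queues = [0] * n                  # current queue length at each server
servers_for_file = random.sample(range(n), L)  # servers storing the pieces

chosen = route_request(queues, servers_for_file, k)
# Exactly k jobs were placed, one at each chosen server.
assert len(chosen) == k and all(queues[i] == 1 for i in chosen)
```

With L equal to the number of sampled servers and k = 1, this reduces to the power-of-d (supermarket) rule mentioned above.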