It is well known and readily seen that the maximum of n independent random variables uniformly distributed on [0, 1], suitably standardised, converges in total variation distance, as n increases, to the standard negative exponential distribution. We extend this result to higher dimensions by considering copulas. We show that the strong convergence result holds for copulas that are in a differential neighbourhood of a multivariate generalised Pareto copula. Sklar’s theorem then implies convergence in variational distance of the maximum of n independent and identically distributed random vectors with arbitrary common distribution function and (under conditions on the marginals) of its appropriately normalised version. We illustrate how these convergence results can be exploited to establish the almost-sure consistency of some estimation procedures for max-stable models, using sample maxima.
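As a quick numerical check on the one-dimensional statement, the sketch below computes the total variation distance between the exact density of n*(M_n - 1), where M_n is the maximum of n i.i.d. Uniform[0, 1] variables, and the limiting density e^x on (-inf, 0]. The function name and discretisation are illustrative choices, not from the paper.

```python
import math

def tv_to_neg_exponential(n, steps=100000):
    """TV distance between the density of n*(M_n - 1), where M_n is the
    maximum of n i.i.d. Uniform[0,1] variables, and the standard negative
    exponential density e^x on (-inf, 0]."""
    # exact density of n*(M_n - 1) on [-n, 0]: f_n(x) = (1 + x/n)**(n-1)
    a, b = -float(n), 0.0
    h = (b - a) / steps
    integral = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h          # midpoint rule
        f = (1.0 + x / n) ** (n - 1)
        g = math.exp(x)
        integral += abs(f - g) * h
    tail = math.exp(-n)                # mass of e^x below -n, where f_n is 0
    return 0.5 * (integral + tail)

t10 = tv_to_neg_exponential(10)
t100 = tv_to_neg_exponential(100)
# the distance shrinks as n grows, in line with the convergence result
```

The distance decays at rate O(1/n) here, which is why the higher-dimensional extension via copulas is the interesting part of the paper.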
In this paper we integrate two strands of the literature on stability of general state Markov chains: conventional, total-variation-based results and more recent order-theoretic results. First we introduce a complete metric over Borel probability measures based on ‘partial’ stochastic dominance. We then show that many conventional results framed in the setting of total variation distance have natural generalizations to the partially ordered setting when this metric is adopted.
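For reference, the total variation distance in which the conventional results are framed can be computed directly for finitely supported measures; the dict representation below is an illustrative choice.

```python
def tv_distance(p, q):
    """Total variation distance between two probability mass functions,
    represented as dicts mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.2, "b": 0.3, "c": 0.5}
r = {"a": 1.0}

# metric axioms on this toy example: identity, symmetry, triangle inequality
assert tv_distance(p, p) == 0.0
assert tv_distance(p, q) == tv_distance(q, p)
assert tv_distance(p, r) <= tv_distance(p, q) + tv_distance(q, r)
```

The paper's 'partial'-stochastic-dominance metric replaces this symmetric mass comparison with an order-theoretic one, while keeping completeness of the resulting metric space.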
We consider a hybrid variational model for the restoration of texture images corrupted by blur and Gaussian noise, which combines total variation regularisation with a fractional-order regularisation and is solved by an alternating direction minimisation algorithm. Numerical experiments demonstrate the advantage of this model over the adaptive fractional-order variational model in both image quality and computational time.
Image segmentation is a fundamental problem in both image processing and computer vision, with numerous applications. In this paper, we propose a two-stage image segmentation scheme based on an inexact alternating direction method. Specifically, we first solve the convex variant of the Mumford-Shah model to obtain a smooth solution; the segmentation is then obtained by applying the K-means clustering method to this solution. Numerical comparisons are presented to show the effectiveness of our proposed scheme on many kinds of images, such as artificial images, natural images, and brain MRI images.
This paper introduces a two-stage model for multi-channel image segmentation, motivated by minimal surface theory. In the first stage, we obtain a smooth solution u from a convex variational model related to the minimal surface property, under various data fidelity terms; this minimization problem is solved efficiently by the classical primal-dual approach. In the second stage, we segment the smoothed image u by thresholding. Here, instead of using K-means to determine the thresholds, we propose a more stable hill-climbing procedure that locates the peaks of the 3D histogram of u to serve as thresholds; this procedure also detects the number of segments. Finally, numerical results demonstrate that the proposed method is very robust against noise and superior to other image segmentation approaches.
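The two-stage idea (smooth first, then threshold) can be caricatured in a few lines. The sketch below is not the paper's variational model or hill-climbing procedure: it substitutes a crude box blur for stage one and a scalar 2-means clustering for stage two, purely to illustrate the pipeline.

```python
def box_blur(img, passes=2):
    """Stage 1 stand-in: crude smoothing by repeated 3x3 box averaging.
    The papers use convex variational models; this only mimics 'smooth first'."""
    h, w = len(img), len(img[0])
    for _ in range(passes):
        out = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                vals = [img[x][y]
                        for x in range(max(0, i - 1), min(h, i + 2))
                        for y in range(max(0, j - 1), min(w, j + 2))]
                out[i][j] = sum(vals) / len(vals)
        img = out
    return img

def two_means_threshold(values, iters=20):
    """Stage 2 stand-in: Lloyd's algorithm on scalar intensities (k = 2);
    the decision boundary is the midpoint of the two centroids."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0: c0 = sum(g0) / len(g0)
        if g1: c1 = sum(g1) / len(g1)
    return 0.5 * (c0 + c1)

# noisy two-region test image: left half ~0.2, right half ~0.8
img = [[0.2 + 0.05 * ((i * 7 + j * 3) % 5 - 2) if j < 4
        else 0.8 + 0.05 * ((i * 5 + j * 7) % 5 - 2)
        for j in range(8)] for i in range(8)]
u = box_blur(img)
t = two_means_threshold([v for row in u for v in row])
seg = [[1 if v > t else 0 for v in row] for row in u]
```

Even with these crude stand-ins, the two regions are recovered away from the boundary, which is the point of separating smoothing from thresholding.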
Optical projection tomography (OPT) is a computed tomography technique at optical frequencies for samples of 0.5–15 mm in size, which fills an important “imaging gap” between confocal microscopy (for smaller samples) and large-sample methods such as fluorescence molecular tomography or micro-magnetic resonance imaging. OPT operates in either fluorescence or transmission mode. Two-dimensional (2D) projections are taken over 360° with a fixed rotational increment around the vertical axis. Standard 3D reconstruction from 2D OPT uses the filtered backprojection (FBP) algorithm based on the Radon transform. FBP approximates the inverse Radon transform using a ramp filter that spreads reconstructed pixel values to neighboring pixels, thus producing streak and other types of artifacts, as well as noise. Artifacts increase the variation of grayscale values in the reconstructed images. We present an algorithm that improves the quality of reconstruction even for a low number of projections by simultaneously minimizing the sum of absolute brightness changes in the reconstructed volume (the total variation) and the error between measured and reconstructed data. We demonstrate the efficiency of the method on real biological data acquired on a dedicated OPT device.
Denoising of images corrupted by multiplicative noise is an important task in various
applications, such as laser imaging, synthetic aperture radar and ultrasound imaging.
We propose a combined first-order and second-order variational model for removal of
multiplicative noise. Our model substantially reduces the staircase effects while
preserving edges in the restored images, since it combines advantages of the
first-order and second-order total variation. The issues of existence and uniqueness
of a minimizer for this variational model are analysed. Moreover, a gradient descent
method is employed to solve the associated Euler–Lagrange equation, and
several numerical experiments are given to show the efficiency of our model. In
particular, a comparison with an existing model in terms of peak signal-to-noise
ratio and structural similarity index is provided.
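A minimal one-dimensional sketch of combining first- and second-order total variation, assuming a smoothed absolute value and plain gradient descent rather than the Euler–Lagrange approach of the paper; all parameter values are illustrative.

```python
import math

def energy(u, f, alpha, beta, eps=1e-2):
    """0.5*||u-f||^2 + alpha*TV1(u) + beta*TV2(u) in 1D, with |t|
    smoothed to sqrt(t^2 + eps) so plain gradient descent applies."""
    phi = lambda t: math.sqrt(t * t + eps)
    n = len(u)
    fid = 0.5 * sum((u[i] - f[i]) ** 2 for i in range(n))
    tv1 = sum(phi(u[i + 1] - u[i]) for i in range(n - 1))
    tv2 = sum(phi(u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, n - 1))
    return fid + alpha * tv1 + beta * tv2

def denoise(f, alpha=0.3, beta=0.1, step=0.02, iters=500, eps=1e-2):
    u = list(f)
    n = len(u)
    dphi = lambda t: t / math.sqrt(t * t + eps)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]            # fidelity term
        for i in range(n - 1):                          # first-order TV
            d = alpha * dphi(u[i + 1] - u[i])
            g[i + 1] += d; g[i] -= d
        for i in range(1, n - 1):                       # second-order TV
            d = beta * dphi(u[i + 1] - 2 * u[i] + u[i - 1])
            g[i + 1] += d; g[i] -= 2 * d; g[i - 1] += d
        u = [u[i] - step * g[i] for i in range(n)]
    return u

# smooth ramp with an alternating perturbation as deterministic "noise"
f = [i / 19 + 0.15 * (-1) ** i for i in range(20)]
u = denoise(f)
```

The second-order term is what counteracts the staircase effect on the ramp, which pure first-order TV would flatten into steps.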
In this paper, we propose a fast proximity point algorithm and apply it to total variation (TV) based image restoration. The novel method is derived from the idea of establishing a general proximity point operator framework, from which new first-order schemes for TV-based image restoration are obtained. Many current algorithms for TV-based image restoration, such as Chambolle’s projection algorithm, the split Bregman algorithm, the Bermúdez-Moreno algorithm, the Jia-Zhao denoising algorithm, and the fixed point algorithm, can be viewed as special cases of the new first-order schemes. Moreover, the convergence of the new algorithm is analyzed at length. Finally, we make comparisons with the split Bregman algorithm, currently one of the best algorithms for TV-based image restoration. Numerical experiments illustrate the efficiency of the proposed algorithms.
A new hybrid variational model for recovering blurred images in the presence of multiplicative noise is proposed. Inspired by previous work on multiplicative noise removal, an I-divergence technique is used to build a strictly convex model under a condition that ensures the uniqueness of the solution and the stability of the algorithm. A split-Bregman algorithm is adopted to solve the constrained minimisation problem in the new hybrid model efficiently. Numerical tests for simultaneous deblurring and denoising of the images subject to multiplicative noise are then reported. Comparison with other methods clearly demonstrates the good performance of our new approach.
We discuss a numerical formulation for the cell problem arising in a homogenization approach to the study of wetting on micro-rough surfaces. Regularity properties of the solution are described in detail, and the problem is shown to be convex. Stability of the solution with respect to small changes of the cell bottom surface allows for an estimate of the numerical error, at least in two dimensions. Several benchmark experiments are presented, and the reliability of the numerical solution is assessed, whenever possible, by comparison with the analytical one. Realistic three-dimensional simulations confirm several interesting features of the solution, improving on classical models for the study of wetting on rough surfaces.
One of the classical optimization models for image segmentation is the well-known Markov Random Fields (MRF) model. This model is a discrete optimization problem, which is shown here to formulate many continuous models used in image segmentation. In spite of the long presence of MRF in the literature, the dominant perception has been that the model is not effective for image segmentation. We show here that this perceived ineffectiveness is due to the lack of access to the optimal solution: instead of solving optimally, heuristics have been employed. Those heuristic methods can guarantee neither the quality of the solution nor the running time of the algorithm. Worse still, heuristics do not link the input functions and parameters directly to the output, thus obscuring what would be ideal choices of the parameters and functions that users must select in each particular application context.
We describe here how MRF can model and solve efficiently several known continuous models for image segmentation, and describe briefly a very efficient polynomial-time algorithm, which is provably the fastest possible, to solve the MRF problem optimally. The MRF algorithm is enhanced here compared to the algorithm in Hochbaum (2001) by allowing the set of assigned labels to be any discrete set. Other enhancements include dynamic features that permit adjustments to the input parameters and solve optimally for these changes with minimal computation time. Several new theoretical results on the properties of the algorithm are proved here and are demonstrated for images in the context of medical and biological imaging. An interactive implementation tool for MRF is described, and its performance and flexibility in practice are demonstrated via computational experiments.
We conclude that many continuous models common in image segmentation have discrete analogs among various special cases of MRF and, as such, are solved optimally and efficiently, rather than with continuous techniques, such as PDE methods, that restrict the type of functions used and, furthermore, can only guarantee convergence to a local minimum.
In this paper we analyze the consistency, the accuracy and some entropy properties of particle methods with remeshing in the case of a scalar one-dimensional conservation law. As in [G.-H. Cottet and L. Weynans, C. R. Acad. Sci. Paris, Ser. I 343 (2006) 51–56] we rewrite particle methods with remeshing in the finite-difference formalism. This allows us to prove the consistency of these methods, and accuracy properties related to the accuracy of interpolation kernels. Cottet and Magni recently devised TVD remeshing schemes for particle methods in [G.-H. Cottet and A. Magni, C. R. Acad. Sci. Paris, Ser. I 347 (2009) 1367–1372] and [A. Magni and G.-H. Cottet, J. Comput. Phys. 231 (2012) 152–172]. We extend these results to the nonlinear case with arbitrary velocity sign. We present numerical results obtained with these new TVD particle methods for the Euler equations in the case of the Sod shock tube. Then we prove that with these new TVD remeshing schemes the particle methods converge toward the entropy solution of the scalar conservation law.
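The TVD (total variation diminishing) property itself is easy to observe numerically. The sketch below uses a plain first-order upwind scheme for linear advection, not the particle/remeshing methods of the paper, and checks that the discrete total variation never increases from step to step.

```python
def total_variation(u):
    """Discrete total variation on a periodic 1D grid."""
    n = len(u)
    return sum(abs(u[(i + 1) % n] - u[i]) for i in range(n))

def upwind_step(u, cfl=0.8):
    """One step of the first-order upwind scheme for u_t + a*u_x = 0
    (a > 0), periodic boundary; the scheme is TVD for 0 <= cfl <= 1."""
    n = len(u)
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(n)]

# square pulse: initial total variation is exactly 2
u = [1.0 if 5 <= i < 10 else 0.0 for i in range(30)]
tvs = [total_variation(u)]
for _ in range(20):
    u = upwind_step(u)
    tvs.append(total_variation(u))
# tvs is a non-increasing sequence: no spurious oscillations are created
```

This monotone behaviour is exactly what the TVD remeshing schemes transfer to the particle setting.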
We propose a numerical procedure to extend to full aperture the acoustic far-field pattern (FFP) when measured in only few observation angles. The reconstruction procedure is a multi-step technique that combines a total variation regularized iterative method with the standard Tikhonov regularized pseudo-inversion. The proposed approach distinguishes itself from existing solution methodologies by using an exact representation of the total variation which is crucial for the stability and robustness of Newton algorithms. We present numerical results in the case of two-dimensional acoustic scattering problems to illustrate the potential of the proposed procedure for reconstructing the full aperture of the FFP from very few noisy data such as backscattering synthetic measurements.
When biological specimens are cut into physical sections for three-dimensional (3D) imaging by confocal laser scanning microscopy, the slices may get distorted or ruptured. For subsequent 3D reconstruction, images from different physical sections need to be spatially aligned by optimization of a function composed of a data fidelity term evaluating similarity between the reference and target images, and a regularization term enforcing transformation smoothness. A regularization term evaluating the total variation (TV), which enables the registration algorithm to account for discontinuities in slice deformation (ruptures), while enforcing smoothness on continuously deformed regions, was proposed previously. The function with TV regularization was optimized using a graph-cut (GC) based iterative solution. However, GC may generate visible registration artifacts, which impair the 3D reconstruction. We present an alternative, multilabel TV optimization algorithm, which in the examined samples prevents the artifacts produced by GC. The algorithm is slower than GC but can be sped up several times when implemented in a multiprocessor computing environment. For image pairs with uneven brightness distribution, we introduce a reformulation of the TV-based registration, in which intensity-based data terms are replaced by comparison of salient features in the reference and target images quantified by local image entropies.
We consider a class of discrete convex functionals which satisfy a (generalized) coarea formula. These functionals, based on submodular interactions, arise in discrete optimization and are known as a large class of problems which can be solved in polynomial time. In particular, some of them can be solved very efficiently by maximal flow algorithms and are quite popular in the image processing community. We study the limit in the continuum of these functionals, show that they always converge to some “crystalline” perimeter/total variation, and provide an almost explicit formula for the limiting functional.
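The (generalized) coarea formula is easy to verify in the simplest discrete setting: for an integer-valued function on a 1D chain, the total variation equals the summed perimeters of its superlevel sets. A minimal sketch:

```python
def tv(u):
    """Discrete total variation of a 1D integer-valued function."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def level_set_perimeter(u, t):
    """Boundary size of the superlevel set {i : u[i] > t} on the 1D chain."""
    ind = [1 if v > t else 0 for v in u]
    return sum(abs(ind[i + 1] - ind[i]) for i in range(len(ind) - 1))

u = [0, 2, 2, 5, 1, 1, 4]
coarea = sum(level_set_perimeter(u, t) for t in range(min(u), max(u)))
# coarea formula: TV(u) equals the summed perimeters of its level sets
```

It is this decomposition into binary level-set problems that lets such functionals be minimized by maximal flow, and the continuum limit studied in the paper is the limit of these perimeters.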
Sums of independent random variables concentrated on a discrete, not necessarily lattice, set of points are approximated by infinitely divisible distributions and signed compound Poisson measures. A version of Kolmogorov's first uniform theorem is proved. Second-order asymptotic expansions are constructed for distributions with pseudo-lattice supports.
For most repairable systems, the number N(t) of failed components at time t appears to be a good quality parameter, so it is critical to study this random function. Here the components are assumed to be independent and both their lifetime and their repair time are exponentially distributed. Moreover, the system is considered new at time 0. Our aim is to compare the random variable N(t) with N(∞), especially in terms of total variation distance. This analysis is used to prove a cut-off phenomenon in the same way as Ycart (1999) but without the assumption of identical components.
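For identical components (the special case the paper goes beyond), N(t) is binomial and its comparison with N(∞) in total variation can be computed exactly; the rates and time horizon below are illustrative.

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def tv_binom(n, p, q):
    """Total variation distance between Binomial(n, p) and Binomial(n, q)."""
    return 0.5 * sum(abs(binom_pmf(n, p, k) - binom_pmf(n, q, k))
                     for k in range(n + 1))

def p_failed(t, lam, mu):
    """P(a component, new at time 0, is down at time t), for exponential
    lifetimes (rate lam) and repair times (rate mu)."""
    return lam / (lam + mu) * (1 - math.exp(-(lam + mu) * t))

n, lam, mu = 50, 1.0, 2.0
p_inf = lam / (lam + mu)            # stationary failure probability
d = [tv_binom(n, p_failed(t, lam, mu), p_inf) for t in (0.5, 1.0, 2.0, 4.0)]
# d decreases towards 0 as N(t) approaches its equilibrium N(infinity)
```

The cut-off phenomenon studied in the paper concerns how abruptly this distance drops from near 1 to near 0 as n grows, for suitably scaled times.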
This paper deals with two complementary methods in noisy image deblurring: a nonlinear shrinkage of wavelet-packet coefficients called FCNR, and Rudin-Osher-Fatemi's variational method. The objective of FCNR is to obtain a restored image with white noise. It proves very efficient at restoring an image after an invertible blur, but is limited in the opposite situation. The total variation based method, by contrast, with its ability to reconstruct lost frequencies by interpolation, is very well adapted to non-invertible blur, but tends to erase low-contrast textures. This complementarity is highlighted when the methods are applied to the restoration of SPOT satellite images.
Stein's method is used to prove approximations in total variation to the distributions of integer valued random variables by (possibly signed) compound Poisson measures. For sums of independent random variables, the results obtained are very explicit, and improve upon earlier work of Kruopis (1983) and Čekanavičius (1997); coupling methods are used to derive concrete expressions for the error bounds. An example is given to illustrate the potential for application to sums of dependent random variables.
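As a simpler baseline than the signed compound Poisson refinements above, Le Cam's classical bound states that the total variation distance between a sum of independent Bernoulli(p_i) variables and a Poisson law with the same mean is at most the sum of the p_i squared; the sketch below checks this on an illustrative example.

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables, by
    dynamic programming over the components."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            new[k] += mass * (1 - p)
            new[k + 1] += mass * p
        pmf = new
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

ps = [0.1, 0.05, 0.2, 0.02, 0.08]
lam = sum(ps)
pb = poisson_binomial_pmf(ps)
K = 40  # truncation point for the (here negligible) Poisson tail
tv = 0.5 * (sum(abs(pb[k] - poisson_pmf(lam, k)) for k in range(len(pb)))
            + sum(poisson_pmf(lam, k) for k in range(len(pb), K)))
le_cam = sum(p * p for p in ps)   # Le Cam's bound: tv <= sum of p_i^2
```

Stein's method and the compound Poisson corrections in the paper sharpen this kind of bound, in particular when the p_i are not all small.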