On the discrepancy principle and generalised maximum likelihood for regularisation
Published online by Cambridge University Press: 17 April 2009
Abstract
Let $f_{n\lambda}$ be the regularised solution of a general linear operator equation $Kf_0 = g$ from discrete, noisy data $y_i = g(x_i) + \epsilon_i$, $i = 1, \ldots, n$, where the $\epsilon_i$ are uncorrelated random errors with variance $\sigma^2$. In this paper we consider two well-known methods for choosing the crucial regularisation parameter $\lambda$: the discrepancy principle and generalised maximum likelihood (GML). We investigate the asymptotic properties as $n \to \infty$ of the "expected" estimates $\lambda_D$ and $\lambda_M$ corresponding to these two methods respectively. It is shown that if $f_0$ is sufficiently smooth, then $\lambda_D$ is weakly asymptotically optimal (ao) with respect to the risk and an $L_2$ norm on the output error. However, $\lambda_D$ oversmooths for all sufficiently large $n$ and also for all sufficiently small $\sigma^2$. If $f_0$ is not too smooth relative to the regularisation space $W$, then $\lambda_D$ can also be weakly ao with respect to a whole class of loss functions involving stronger norms on the input error. For the GML method, we show that if $f_0$ is smooth relative to $W$ (for example $f_0 \in W^{\theta,2}$ with $\theta > m$ if $W = W^{m,2}$), then $\lambda_M$ is asymptotically sub-optimal and undersmoothing with respect to all of the loss functions above.
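The two parameter choices can be made concrete on a discretised version of the problem. Below is a minimal numerical sketch, assuming a Tikhonov-regularised solution $f_{n\lambda} = (K^{\top}K + n\lambda I)^{-1}K^{\top}y$ with smoother matrix $A(\lambda) = K(K^{\top}K + n\lambda I)^{-1}K^{\top}$; the kernel, the true $f_0$, the noise level, and all variable names are illustrative assumptions, not taken from the paper. The discrepancy principle picks $\lambda_D$ so that the residual $\|y - Kf_{n\lambda}\|^2$ equals the expected noise level $n\sigma^2$; for GML we use one common form of the criterion, $y^{\top}(I - A(\lambda))y / \det(I - A(\lambda))^{1/n}$, and minimise it over a grid.

```python
# A minimal sketch, not the paper's setup: Tikhonov regularisation of a
# hypothetical discretised problem y = K f0 + eps, with lambda chosen by
# (a) the discrepancy principle and (b) a common form of the GML criterion.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

n = 200
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) / n  # illustrative kernel
f0 = np.sin(2 * np.pi * x)                                # smooth "true" input
sigma = 0.01
y = K @ f0 + sigma * rng.standard_normal(n)

# SVD of K makes both criteria cheap: the smoother
# A(lam) = K (K'K + n*lam*I)^{-1} K' has eigenvalues s_i^2 / (s_i^2 + n*lam).
U, s, Vt = np.linalg.svd(K)
b = U.T @ y

def residual_sq(lam):
    """||y - K f_lam||^2 = ||(I - A(lam)) y||^2."""
    shrink = n * lam / (s**2 + n * lam)   # eigenvalues of I - A(lam)
    return float(np.sum((shrink * b) ** 2))

def gml(lam):
    """One common form of the GML criterion:
    y'(I - A(lam)) y / det(I - A(lam))^(1/n)."""
    shrink = n * lam / (s**2 + n * lam)
    quad = float(np.sum(shrink * b**2))       # quadratic form y'(I - A)y
    logdet = float(np.sum(np.log(shrink)))    # log det(I - A(lam))
    return quad * np.exp(-logdet / n)

# (a) Discrepancy principle: residual matches the noise level n*sigma^2.
lam_D = 10 ** brentq(lambda t: residual_sq(10**t) - n * sigma**2, -16, 2)

# (b) GML: minimise the criterion over a logarithmic grid (robust, if crude).
grid = np.logspace(-14, 0, 200)
lam_M = grid[np.argmin([gml(l) for l in grid])]

f_D = Vt.T @ ((s / (s**2 + n * lam_D)) * b)   # regularised solution at lam_D
print(f"lam_D = {lam_D:.3e}, lam_M = {lam_M:.3e}")
```

On examples of this kind one can compare $\lambda_D$ and $\lambda_M$ against the risk-minimising $\lambda$; the results summarised above predict that, for smooth $f_0$, $\lambda_D$ errs on the large (oversmoothing) side while $\lambda_M$ errs on the small (undersmoothing) side.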
- Type: Research Article
- Information: Bulletin of the Australian Mathematical Society, Volume 52, Issue 3, December 1995, pp. 399-424
- Copyright: © Australian Mathematical Society 1995