
Risk bounds for mixture density estimation

Published online by Cambridge University Press:  15 November 2005

Alexander Rakhlin
Affiliation:
Center for Biological and Computational Learning, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; [email protected]
Dmitry Panchenko
Affiliation:
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02143, USA.
Sayan Mukherjee
Affiliation:
Institute of Statistics and Decision Sciences, Institute for Genome Sciences and Policy, Duke University, Durham, NC 27708, USA.

Abstract

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron (1999) under the additional assumption of boundedness of densities. We prove an $O(\frac{1}{\sqrt{n}})$ bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron by removing the $\log n$ factor and also generalizes it to base classes with a converging Dudley integral.
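The greedy procedure referred to in the abstract builds the mixture one component at a time, at each step mixing the current estimate with the base density that most increases the sample log-likelihood. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a Gaussian base class indexed by a grid of candidate means, and uses the fixed mixing weight $\alpha_k = 2/(k+1)$ familiar from the greedy approximation literature (Jones 1992); the function and parameter names are illustrative.

```python
import numpy as np

def greedy_mixture_fit(data, candidate_means, sigma=1.0, n_components=10):
    """Greedy mixture density estimation in the spirit of Li and Barron (1999).

    At step k the estimate is updated as f_k = (1 - a_k) f_{k-1} + a_k * phi,
    where phi is the candidate base density maximizing the sample
    log-likelihood of the resulting mixture, and a_k = 2 / (k + 1).
    Returns the fitted density evaluated at the data points and the list of
    means chosen after the first step.
    """
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    # Values of every candidate density at every data point, shape (m, n).
    cand = np.array([gauss(data, mu) for mu in candidate_means])

    # Step 1: start from the single base density with the highest log-likelihood.
    f = cand[np.argmax(np.log(cand).sum(axis=1))].copy()

    chosen = []
    for k in range(2, n_components + 1):
        a = 2.0 / (k + 1)                       # fixed greedy mixing weight
        mixes = (1.0 - a) * f + a * cand        # broadcast over all candidates
        j = np.argmax(np.log(mixes).sum(axis=1))
        f = mixes[j]
        chosen.append(candidate_means[j])
    return f, chosen
```

In practice the weight $\alpha_k$ can also be optimized by a line search at each step; the fixed schedule above is the one whose rate of approximation is analyzed via Jones' lemma.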

Type
Research Article
Copyright
© EDP Sciences, SMAI, 2005


References

Barron, A.R., Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 39 (1993) 930–945.
Barron, A.R., Approximation and estimation bounds for artificial neural networks. Machine Learning 14 (1994) 115–133.
Birgé, L. and Massart, P., Rates of convergence for minimum contrast estimators. Probab. Theory Related Fields 97 (1993) 113–150.
R.M. Dudley, Uniform Central Limit Theorems. Cambridge University Press (1999).
Jones, L.K., A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Stat. 20 (1992) 608–613.
M. Ledoux and M. Talagrand, Probability in Banach Spaces. Springer-Verlag, New York (1991).
J. Li and A. Barron, Mixture density estimation, in Advances in Neural Information Processing Systems 12, S.A. Solla, T.K. Leen and K.-R. Müller, Eds. Morgan Kaufmann Publishers, San Mateo, CA (1999).
J. Li, Estimation of Mixture Models. Ph.D. Thesis, Department of Statistics, Yale University (1999).
C. McDiarmid, On the method of bounded differences. Surveys in Combinatorics (1989) 148–188.
Mendelson, S., On the size of convex hulls of small sets. J. Machine Learning Research 2 (2001) 1–18.
Niyogi, P. and Girosi, F., Generalization bounds for function approximation from scattered noisy data. Adv. Comput. Math. 10 (1999) 51–80.
van de Geer, S.A., Rates of convergence for the maximum likelihood estimator in mixture models. Nonparametric Statistics 6 (1996) 293–310.
S.A. van de Geer, Empirical Processes in M-Estimation. Cambridge University Press (2000).
A.W. van der Vaart and J.A. Wellner, Weak Convergence and Empirical Processes with Applications to Statistics. Springer-Verlag, New York (1996).
Wong, W.H. and Shen, X., Probability inequalities for likelihood ratios and convergence rates for sieve MLEs. Ann. Stat. 23 (1995) 339–362.