Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
Part VII - Kernel methods and statistical estimation theory
Published online by Cambridge University Press: 05 July 2014
Summary
Linear prediction and system identification have long been well-established fields in the information sciences. Kernel methods, by contrast, have gained popularity only during the past two decades and thus have a relatively short history. It is therefore vital to establish a theoretical foundation linking kernel methods to the rich theory of estimation, prediction, and system identification. This part contains two chapters addressing this important issue. Chapter 14 focuses on statistical analysis employing knowledge of the (joint) density functions of all the input and output variables and their respective noises. Chapter 15 focuses on estimation, prediction, and system identification using observed samples and prior knowledge of the first- and second-order statistics of the system parameters.
In Chapter 14, the classical formulation for (linear) ridge regression will be extended to (nonlinear) kernel ridge regression (KRR). Using the notion of orthogonal polynomials (Theorem 14.1), closed-form results for the error analysis of KRR can be derived (Theorem 14.2); a brief numerical sketch of the linear-to-kernel extension is given after the list below. The regression analysis can be further generalized to errors-in-variables models, in which the measurements of the input variable are no longer perfect. This leads to the development of perturbation-regulated regressors (PRRs), for which two major theoretical fronts will be explored:
- Under the Gaussian distribution assumption, the analysis of PRRs benefits greatly from exact knowledge of the joint statistical properties of the ideal and perturbed input variables. The property of conventional orthogonal polynomials (Theorem 14.1) is extended to harness the cross-orthogonality between the original and perturbed input variables.
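As a rough illustration of the linear-to-kernel extension described above, the sketch below contrasts the closed-form linear ridge solution with the dual, kernel-based KRR prediction. The notation (ridge parameter rho, Gaussian RBF kernel) is generic and chosen for concreteness; it is not necessarily the book's formulation.

```python
# Minimal numerical sketch of the (linear) ridge -> (nonlinear) kernel ridge
# regression extension. Generic notation; illustrative only.
import numpy as np

def linear_ridge_fit(X, y, rho):
    """Linear ridge regression: w = (X^T X + rho*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + rho * np.eye(d), X.T @ y)

def kernel_ridge_predict(X, y, X_test, rho, gamma=1.0):
    """Kernel ridge regression (KRR) with a Gaussian RBF kernel:
    f(x) = k(x)^T (K + rho*I)^(-1) y, where K_ij = k(x_i, x_j)."""
    def rbf(A, B):
        # Pairwise squared distances, then Gaussian kernel values.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + rho * np.eye(len(X)), y)
    return rbf(X_test, X) @ alpha

# Example: noisy samples of a nonlinear function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(kernel_ridge_predict(X, y, X_test, rho=0.1))
```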
- Type: Chapter
- Information: Kernel Methods and Machine Learning, pp. 457-458. Publisher: Cambridge University Press. Print publication year: 2014