Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- 14 Statistical regression analysis and errors-in-variables models
- 15 Kernel methods for estimation, prediction, and system identification
- Part VIII Appendices
- References
- Index
14 - Statistical regression analysis and errors-in-variables models
from Part VII - Kernel methods and statistical estimation theory
Published online by Cambridge University Press: 05 July 2014
Summary
Introduction
Regression analysis has been a major theoretical pillar of supervised machine learning, since it is applicable to a broad range of identification, prediction, and classification problems. There are two major approaches to the design of robust regressors. The first category involves a variety of regularization techniques, whose principle lies in incorporating both an error term and a penalty term into the cost function; it is represented by the ridge regressor. The second category is based on the premise that the robustness of the regressor can be enhanced by accounting for potential measurement errors in the learning phase. These techniques are known as errors-in-variables models in statistics and are relatively new to the machine learning community. In our discussion, such errors in variables are viewed as additive input perturbation.
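To make the contrast between the two categories concrete, the following minimal sketch compares a ridge regressor with a linear regressor trained under an errors-in-variables assumption. The synthetic data, the regularization parameter `rho`, and the perturbation variance `sigma2` are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

# Illustrative synthetic data (not from the chapter): y = X w + output noise.
rng = np.random.default_rng(0)
n, d = 200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Category 1: ridge regressor -- a penalty term rho * ||w||^2 is added to the
# squared-error cost, giving the closed-form solution below.
rho = 1.0  # assumed regularization parameter
w_ridge = np.linalg.solve(X.T @ X + rho * np.eye(d), X.T @ y)

# Category 2: errors-in-variables viewpoint -- the inputs are assumed to be
# corrupted by zero-mean additive perturbation with variance sigma2.  Averaging
# the squared error over an isotropic perturbation yields the same normal
# equations with n * sigma2 added to the diagonal, so the assumed input
# perturbation acts as a built-in regularizer.
sigma2 = 0.05  # assumed input-perturbation variance
w_eiv = np.linalg.solve(X.T @ X + n * sigma2 * np.eye(d), X.T @ y)

print("ridge estimate:                 ", w_ridge)
print("perturbation-regulated estimate:", w_eiv)
```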
This chapter aims at enhancing the robustness of estimators by incorporating input perturbation into conventional regression analysis. It develops a kernel perturbation-regulated regressor (PRR) based on the errors-in-variables model. The PRR offers a strong smoothing capability that is critical to the robustness of regression or classification results. For Gaussian cases, the notion of orthogonal polynomials is instrumental to optimal estimation and its error analysis. More precisely, the regressor may be expressed as a linear combination of many simple Hermite regressors, each focusing on one (and only one) orthogonal polynomial.
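A minimal sketch of such a Hermite decomposition for a scalar standard-Gaussian input is given below; the target function, sample size, and truncation order are assumptions made purely for illustration. Because the probabilists' Hermite polynomials He_k are orthogonal under N(0, 1) with E[He_k(x)^2] = k!, each coefficient can be estimated by a separate simple Hermite regressor.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Illustrative setup (not from the chapter): scalar input x ~ N(0, 1),
# noisy observations of an assumed target function.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = np.sin(x) + 0.5 * x**2 + 0.1 * rng.normal(size=n)

# Orthogonality under N(0, 1), E[He_j(x) He_k(x)] = k! * delta_jk, lets each
# order be fitted independently: c_k = E[y He_k(x)] / k!, i.e. one simple
# Hermite regressor per orthogonal polynomial.
order = 6  # assumed truncation order
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis = np.zeros(order + 1)
    basis[k] = 1.0
    He_k = hermeval(x, basis)  # He_k evaluated at the samples
    coeffs[k] = np.mean(y * He_k) / math.factorial(k)

# The overall regressor is the linear combination of the per-order terms.
x_test = np.linspace(-2.0, 2.0, 5)
print(hermeval(x_test, coeffs))
```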
This chapter will also cover the fundamental theory of linear regression and regularization analysis. The analysis leads to a closed-form error formula that is critical for the order-error tradeoff.
- Type: Chapter
- Information: Kernel Methods and Machine Learning, pp. 459-493
- Publisher: Cambridge University Press
- Print publication year: 2014