Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
Part V - Support vector machines and variants
Published online by Cambridge University Press: 05 July 2014
Summary
This part contains three chapters focused on support vector machines (SVMs). The SVM learning model lies at the heart of kernel methods and has been a major driving force behind modern machine learning technologies.
Chapter 10 focuses on basic SVM learning theory, which relies on the identification of a set of “support vectors” via the well-known Karush–Kuhn–Tucker (KKT) conditions. These support vectors are solely responsible for the formation of the decision boundary. The learning subspace property (LSP) clearly holds for SVM learning models, so the kernelized SVM learning models have exactly the same form for linear and nonlinear problems, as evidenced by Algorithm 10.1.
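As a hedged illustration of the support-vector idea (using scikit-learn's SVC rather than the book's Algorithm 10.1), the sketch below trains a Gaussian-kernel SVM and then retrains it on the support vectors alone; because non-support vectors carry zero Lagrange multipliers under the KKT conditions, the decision boundary is essentially unchanged.

```python
# Hedged sketch (scikit-learn's SVC, not the book's Algorithm 10.1): only the
# support vectors, identified via the KKT conditions, shape the decision boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.6, (50, 2)),   # class -1 samples
               rng.normal(+1.0, 0.6, (50, 2))])  # class +1 samples
y = np.array([-1] * 50 + [+1] * 50)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

# Retrain on the support vectors only; samples with zero Lagrange multipliers
# do not affect the solution, so the decision function should barely change.
sv = clf.support_
clf_sv = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X[sv], y[sv])

grid = rng.normal(0.0, 1.0, (20, 2))
diff = np.abs(clf.decision_function(grid) - clf_sv.decision_function(grid))
print("max decision-function change:", diff.max())   # expected to be near zero
```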
Chapter 11 covers support-vector-based learning models aimed at outlier detection. Support vector regression (SVR), see Algorithm 11.1, aims to find an approximating function that fits the training data under the guidance of the teacher values. The chapter also explores several SVM-based learning models for outlier detection, including the hyperplane OCSVM (Algorithm 11.2), the hypersphere OCSVM (Algorithm 11.3), and support vector clustering (SVC). For all these learning models, the fraction of outliers can be analytically estimated, in sharp contrast to the other SVM learning models. In fact, for Gaussian kernels, the three algorithms can be shown to coincide. When polynomial kernels are adopted, however, the lack of translation invariance becomes a legitimate concern for the hyperplane-OCSVM learning models.
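To make the remark about the fraction of outliers concrete, the following hedged sketch uses scikit-learn's OneClassSVM (a hyperplane OCSVM with a Gaussian kernel, standing in for Algorithm 11.2); its nu parameter bounds the fraction of training samples flagged as outliers, which is the analytically estimable fraction mentioned above.

```python
# Hedged sketch of a hyperplane OCSVM with a Gaussian kernel (scikit-learn's
# OneClassSVM standing in for Algorithm 11.2). The nu parameter upper-bounds
# the fraction of training samples declared outliers.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (500, 2))   # nominal samples
X[:25] += 6.0                        # shift 5% of them far away (outliers)

ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X)
labels = ocsvm.predict(X)            # +1 for inliers, -1 for flagged outliers
print("flagged fraction:", np.mean(labels == -1))   # close to nu = 0.05
```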
Chapter 12 introduces the notion of the weight–error curve (WEC) for characterizing kernelized supervised learning models, including KDA, KRR, SVM, and Ridge-SVM.
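As one plausible reading of the WEC (an illustrative sketch under assumed definitions, not Chapter 12's exact construction), the snippet below pairs each training sample's weight in the kernel expansion with its training error for a trained SVM; plotting the pairs traces the curve.

```python
# Hedged WEC sketch: pair each training sample's expansion weight with its
# training error (taken here as e_i = y_i - f(x_i)); this is an illustrative
# reading of the WEC, not a reproduction of Chapter 12's exact definition.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 0.8, (60, 2)),
               rng.normal(+1.0, 0.8, (60, 2))])
y = np.array([-1] * 60 + [+1] * 60)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

weights = np.zeros(len(X))
weights[clf.support_] = clf.dual_coef_.ravel()   # alpha_i * y_i on the support vectors
errors = y - clf.decision_function(X)            # per-sample training error

# Scatter-plotting (errors, weights), e.g. plt.scatter(errors, weights), traces
# the WEC; for a C-SVM the weights are confined to the interval [-C, +C].
print("weight range: [%.3f, %.3f]" % (weights.min(), weights.max()))
```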
- Kernel Methods and Machine Learning, pp. 341–342. Publisher: Cambridge University Press. Print publication year: 2014.