Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- 10 Support vector machines
- 11 Support vector learning models for outlier detection
- 12 Ridge-SVM learning models
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
10 - Support vector machines
from Part V - Support vector machines and variants
Published online by Cambridge University Press: 05 July 2014
Summary
Introduction
Chapter 8 showed that the kernel ridge regressor (KRR) offers a unified treatment of over-determined and under-determined systems. Another way of unifying these two types of linear systems is the support vector machine (SVM) learning model proposed by Vapnik [41, 280, 281].
Just like FDA, SVM aims at the separation of two classes. FDA focuses on separating the positive and negative centroids, taking the total data distribution into account. In contrast, SVM bases the separation only on the so-called support vectors, i.e. those training vectors deemed critical for class separation.
Just like ridge regression, the objective of the SVM classifier also involves minimizing the two-norm of the decision vector; a standard form of this objective is sketched below.
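For reference, one common way to write this objective (the standard hard-margin linear SVM primal, given here as a textbook form rather than quoted from this chapter) is the following, where w is the decision vector and b the bias:

```latex
% Hard-margin linear SVM primal (standard textbook form):
% minimize the squared two-norm of the decision vector w,
% subject to every training pair (x_i, y_i), with y_i in {-1, +1},
% lying on the correct side of the unit margin.
\min_{\mathbf{w},\, b} \ \frac{1}{2}\,\lVert \mathbf{w} \rVert^{2}
\quad \text{subject to} \quad
y_i \bigl( \mathbf{w}^{\mathsf{T}} \mathbf{x}_i + b \bigr) \ge 1,
\qquad i = 1, \dots, N.
```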
The key component of SVM learning is identifying the set of representative training vectors deemed most useful for shaping the (linear or nonlinear) decision boundary. These training vectors are called "support vectors"; the remaining training vectors are called non-support vectors. Only the support vectors take a direct part in characterizing the SVM's decision boundary, as the code sketch below illustrates.
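As an illustration (a minimal sketch using scikit-learn; the dataset and parameter choices are assumptions for this example, not taken from the book), a trained SVM exposes its support vectors directly, and only these vectors carry nonzero dual coefficients and hence shape the decision boundary:

```python
# Minimal sketch (assumes scikit-learn is available); the toy data and
# C value below are illustrative, not from the original text.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two Gaussian clouds as a toy two-class problem.
X = np.vstack([rng.normal(-1.0, 0.6, size=(50, 2)),
               rng.normal(+1.0, 0.6, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the support vectors enter the decision function; the remaining
# (non-support) training vectors have zero dual coefficients.
print("support vectors per class:", clf.n_support_)
print("support-vector indices:   ", clf.support_)

# The decision boundary is characterized by the support vectors alone:
# f(x) = sum_i alpha_i y_i k(x_i, x) + b, with alpha_i nonzero only for
# support vectors (the products alpha_i * y_i are stored in dual_coef_).
print("dual coefficients (alpha_i * y_i):", clf.dual_coef_)
```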
SVM has been applied successfully across an enormously broad spectrum of application domains, including signal processing and classification, image retrieval, multimedia, fault detection, communication, computer vision, security/authentication, time-series prediction, biomedical prediction, and bioinformatics.
Kernel Methods and Machine Learning, pp. 343-379. Cambridge University Press, 2014.