Appendix B - kNN, PNN, and Bayes classifiers
from Part VIII - Appendices
Published online by Cambridge University Press: 05 July 2014
Summary
There are two main learning strategies, namely inductive learning and transductive learning, which differ in how they treat the (distribution of the) test data. The former adopts off-line learning models, see Figure B.1(a), whereas the latter usually adopts online learning models, see Figure B.1(b).
Inductive learning strategies. A decision rule trained under an inductive setting must cover all possible data in the entire vector space. More explicitly, the discriminant function f(w, x) can be trained by inductive learning. This approach effectively distills the information inherent in the training dataset, off-line, into a simple set of decision parameters w, and thus enjoys the advantage of a low classification complexity. As shown in Figure B.1(a), the approach comprises two stages: (1) an off-line learning phase and (2) an on-field prediction phase. During the learning phase, the training dataset is used to learn the decision parameter w, which dictates the decision boundary f(w, x) = 0. In the prediction phase, no further learning is required, so decisions can be made on-the-fly with minimal latency.
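To make the two phases concrete, here is a minimal sketch in Python (not from the book): a linear discriminant f(w, x) = wᵀx + b is fitted off-line by ordinary least squares on a hypothetical two-class dataset, after which each prediction reduces to a single inner product. The toy data, the least-squares fit, and the predict helper are illustrative assumptions, not the book's specific algorithm.

```python
import numpy as np

# --- Off-line learning phase: distill the training set into w ---
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),   # class -1 (hypothetical data)
                     rng.normal(+1.0, 0.5, size=(50, 2))])  # class +1
y_train = np.r_[-np.ones(50), np.ones(50)]

# Least-squares fit of a linear discriminant f(w, x) = w^T x + b.
A = np.hstack([X_train, np.ones((len(X_train), 1))])  # append bias column
w_b, *_ = np.linalg.lstsq(A, y_train, rcond=None)
w, b = w_b[:-1], w_b[-1]

# --- On-field prediction phase: no further learning required ---
def predict(x):
    """Classify x by the sign of the trained discriminant f(w, x)."""
    return 1 if w @ x + b > 0 else -1

print(predict(np.array([1.0, 1.0])))    # lies in the +1 cluster
print(predict(np.array([-1.0, -1.0])))  # lies in the -1 cluster
```

Note that the training set can be discarded after the learning phase; only the small parameter set (w, b) is carried into deployment, which is the source of the low classification complexity mentioned above.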
Transductive learning strategies. In this case, the learner may explicitly make use of the structure and/or location of the putative test dataset in the decision process [281]. Hence, the discriminant function can be tailored to the specific test sample once it is known, presumably improving the prediction accuracy.
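The kNN classifier treated in this appendix already has this test-time flavour: no parameters are distilled off-line, and the decision rule is assembled around each test sample, from its nearest training points, only after the sample arrives. Below is a minimal sketch with hypothetical toy data and k = 3; a fully transductive learner may additionally exploit the distribution of the entire test set, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),   # class -1 (hypothetical data)
                     rng.normal(+1.0, 0.5, size=(50, 2))])  # class +1
y_train = np.r_[-np.ones(50), np.ones(50)]

def knn_classify(x_test, X_train, y_train, k=3):
    """Label x_test by majority vote among its k nearest training samples.

    The decision is formed only after x_test is known, in the region
    of the vector space where x_test actually lies.
    """
    dists = np.linalg.norm(X_train - x_test, axis=1)  # distance to each training point
    nearest = np.argsort(dists)[:k]                   # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # majority vote

print(knn_classify(np.array([0.8, 1.1]), X_train, y_train))  # expected: 1.0
```

The trade-off relative to the inductive sketch is visible in the code: there is no learning phase to amortize, but every prediction must touch the whole training set, so classification complexity grows with the number of training samples.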
Kernel Methods and Machine Learning, pp. 549–560. Publisher: Cambridge University Press. Print publication year: 2014.