Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
Part VI - Kernel methods for green machine learning technologies
Published online by Cambridge University Press: 05 July 2014
Summary
The traditional curse of dimensionality is usually focused on the extreme dimensionality of the feature space, i.e. M. However, for kernelized learning models for big data analysis, the concern naturally shifts to the extreme dimensionality of the kernel matrix, N, which is dictated by the size of the training dataset. For example, in some biomedical applications the dataset sizes may reach hundreds of thousands, and in social media applications they can easily be of the order of millions. This creates a new large-scale learning paradigm, which calls for a new level of computational tools, both in hardware and in software.
Given the kernelizability, we have at our disposal two learning models, respectively represented by two different kernel-induced vector spaces. Our focus of attention should now shift to the interplay between the two kernel-induced representations. Even though the two models are theoretically equivalent, they can incur very different implementation costs for learning and prediction. For cost-effective system implementation, one should choose the lower-cost representation, whether intrinsic or empirical. For example, if the dimension of the empirical space is small and manageable, an empirical-space learning model will be more appealing. The opposite holds if the number of training vectors is extremely large, as is the case in the "big data" learning scenario. In that case, one must give serious consideration to the intrinsic model, whose cost can be controlled by properly adjusting the order of the kernel function. This trade-off is illustrated by the sketch below.
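To make the cost comparison concrete, the following is a minimal numerical sketch (not taken from the text) of kernel ridge regression under a homogeneous second-order polynomial kernel K(x, z) = (x · z)^2, whose intrinsic feature map can be written as phi(x) = vec(x xᵀ) with dimension J = M^2. The empirical-space model works with the N × N kernel matrix, while the intrinsic-space model works with the explicit J-dimensional map. The random data, ridge parameter rho, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 5          # N training vectors, M raw features
X = rng.standard_normal((N, M))
y = rng.standard_normal(N)
rho = 1e-2             # ridge (regularization) parameter, chosen arbitrarily here

# Homogeneous second-order polynomial kernel K(x, z) = (x . z)^2,
# with intrinsic feature map phi(x) = vec(x x^T) of dimension J = M^2.

# --- Empirical-space model: solve in terms of the N x N kernel matrix ---
K = (X @ X.T) ** 2                                  # N x N kernel matrix
alpha = np.linalg.solve(K + rho * np.eye(N), y)     # cost grows roughly as O(N^3)

# --- Intrinsic-space model: solve in terms of the explicit J-dimensional map ---
Phi = np.einsum('ni,nj->nij', X, X).reshape(N, -1)  # N x J design matrix, J = M^2
w = np.linalg.solve(Phi.T @ Phi + rho * np.eye(M * M), Phi.T @ y)  # roughly O(J^3)

# The two representations are theoretically equivalent, so predictions agree.
x_test = rng.standard_normal(M)
f_empirical = ((X @ x_test) ** 2) @ alpha
f_intrinsic = np.outer(x_test, x_test).reshape(-1) @ w
print(f_empirical, f_intrinsic)                     # identical up to rounding error
```

The two branches return numerically identical predictions, so the choice between them is purely a matter of cost: the empirical-space solve scales with the number of training vectors N, while the intrinsic-space solve scales with the intrinsic dimension J, which is controlled by the raw feature dimension M and the order of the kernel function.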
- Type: Chapter
- Information: Kernel Methods and Machine Learning, pp. 419-420
- Publisher: Cambridge University Press
- Print publication year: 2014