Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- 1 Introduction
- Part 1 Foundations
- Part 2 From Theory to Algorithms
- 9 Linear Predictors
- 10 Boosting
- 11 Model Selection and Validation
- 12 Convex Learning Problems
- 13 Regularization and Stability
- 14 Stochastic Gradient Descent
- 15 Support Vector Machines
- 16 Kernel Methods
- 17 Multiclass, Ranking, and Complex Prediction Problems
- 18 Decision Trees
- 19 Nearest Neighbor
- 20 Neural Networks
- Part 3 Additional Learning Models
- Part 4 Advanced Theory
- Appendix A Technical Lemmas
- Appendix B Measure Concentration
- Appendix C Linear Algebra
- References
- Index
12 - Convex Learning Problems
from Part 2 - From Theory to Algorithms
Published online by Cambridge University Press: 05 July 2014
Summary
In this chapter we introduce convex learning problems. Convex learning comprises an important family of learning problems, mainly because most of what we can learn efficiently falls into it. We have already encountered linear regression with the squared loss and logistic regression, which are convex problems, and indeed they can be learned efficiently. We have also seen nonconvex problems, such as halfspaces with the 0-1 loss, which is known to be computationally hard to learn in the unrealizable case.
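To make the contrast concrete (a standard calculation, added here for illustration): for linear regression, the loss of an example $(\mathbf{x},y)$, viewed as a function of the weight vector $\mathbf{w}$, is
\[
\ell(\mathbf{w},(\mathbf{x},y)) = (\langle \mathbf{w},\mathbf{x}\rangle - y)^2, \qquad \nabla^2_{\mathbf{w}}\,\ell(\mathbf{w},(\mathbf{x},y)) = 2\,\mathbf{x}\mathbf{x}^\top \succeq 0,
\]
so the Hessian is positive semidefinite and $\ell(\cdot,(\mathbf{x},y))$ is convex for every example. The 0-1 loss of a halfspace, in contrast, is piecewise constant in $\mathbf{w}$ and therefore nonconvex.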
In general, a convex learning problem is one whose hypothesis class is a convex set and whose loss function is convex for each example. We begin the chapter with the required definitions of convexity. Besides convexity, we define Lipschitzness and smoothness, additional properties of the loss function that facilitate successful learning. We then turn to defining convex learning problems and demonstrate the need for further constraints such as boundedness and Lipschitzness or smoothness. We define these more restricted families of learning problems and claim that convex-smooth-bounded and convex-Lipschitz-bounded problems are learnable. These claims are proven in the next two chapters, which present two learning paradigms that successfully learn all problems that are either convex-Lipschitz-bounded or convex-smooth-bounded.
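For reference, the restricted families can be stated as follows (a paraphrase of the chapter's definitions): a learning problem $(\mathcal{H}, Z, \ell)$ is convex-Lipschitz-bounded with parameters $(\rho, B)$ if $\mathcal{H}$ is convex with $\|\mathbf{w}\| \le B$ for all $\mathbf{w} \in \mathcal{H}$, and for every $z \in Z$ the loss $\ell(\cdot,z)$ is convex and $\rho$-Lipschitz, that is, $|\ell(\mathbf{w},z) - \ell(\mathbf{w}',z)| \le \rho\,\|\mathbf{w}-\mathbf{w}'\|$. It is convex-smooth-bounded with parameters $(\beta, B)$ if instead $\ell(\cdot,z)$ is convex, nonnegative, and $\beta$-smooth, meaning its gradient is $\beta$-Lipschitz.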
Finally, in Section 12.3, we show how one can handle some nonconvex problems by minimizing “surrogate” loss functions that are convex (instead of the original nonconvex loss function). Surrogate convex loss functions give rise to efficient solutions but might increase the risk of the learned predictor.
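The standard example for halfspaces is the hinge loss, which dominates the 0-1 loss pointwise:
\[
\ell^{\mathrm{hinge}}(\mathbf{w},(\mathbf{x},y)) = \max\{0,\; 1 - y\langle \mathbf{w},\mathbf{x}\rangle\} \;\ge\; \mathbb{1}[\,y\langle \mathbf{w},\mathbf{x}\rangle \le 0\,] = \ell^{0\text{-}1}(\mathbf{w},(\mathbf{x},y)),
\]
so the hinge risk upper-bounds the 0-1 risk: minimizing the convex surrogate is tractable, and the gap between the two risks is the price paid for that tractability.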
- Type: Chapter
- Information: Understanding Machine Learning: From Theory to Algorithms, pp. 124-136
- Publisher: Cambridge University Press
- Print publication year: 2014