This chapter switches from classification to regression. Concentrating on the square loss, it explains how empirical risk minimization becomes a least-squares problem, with different characteristics in the underparametrized regime and in the overparametrized regime. Adding a regularizer to the empirical risk is common in the latter regime, and the examples of Tikhonov regularization and of LASSO are discussed. The chapter concludes by highlighting a way of interpreting classification problems as regularization problems.
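As a toy illustration (not taken from the chapter), the sketch below solves a Tikhonov-regularized least-squares problem in the overparametrized regime, using the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy and checking that the gradient of the regularized empirical risk vanishes there:

```python
import numpy as np

# Overparametrized regime: n = 5 samples, d = 10 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 10))
y = rng.standard_normal(5)

# Tikhonov (ridge) regularization: minimize ||X w - y||^2 + lam * ||w||^2.
# The closed-form solution is w = (X^T X + lam I)^{-1} X^T y.
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

# The gradient of the regularized empirical risk vanishes at w.
grad = 2 * X.T @ (X @ w - y) + 2 * lam * w
print(np.linalg.norm(grad))  # ~0
```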
This chapter introduces the standard compressive sensing problem, where one tries to recover sparse vectors from few linear observations. The problem is proved to be solvable using ℓ1-minimization as a recovery map if and only if the observation matrix satisfies the so-called null space property. This property is then shown to be a consequence of an atypical restricted isometry property from ℓ2 to ℓ1, which holds with high probability for Gaussian matrices.
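The ℓ1-minimization recovery map can be tried out numerically: the sketch below (a toy illustration assuming SciPy is available, not the book's code) recasts min ‖x‖₁ subject to Ax = b as a linear program via the standard split x = u − v with u, v ≥ 0, for a random Gaussian observation matrix:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, s = 20, 40, 3                       # observations, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian observation matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

# l1-minimization as a linear program: write x = u - v with u, v >= 0,
# so that ||x||_1 = sum(u + v) subject to A(u - v) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
# With enough Gaussian observations, recovery is typically exact.
print(np.linalg.norm(x_hat - x_true))
```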
This chapter examines an ℓp-minimization program with interpolatory constraints as a way to introduce techniques commonly used in semidefinite programming. The case p = 2 reveals a link between positive semidefiniteness and Schur complements. The case p = ∞ illustrates sum-of-squares techniques in connection with the Riesz–Fejér theorem. The case p = 1 illustrates the method of moments in connection with the discrete trigonometric moment problem.
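The Schur-complement link mentioned for the case p = 2 can be checked numerically on a small block matrix (a toy illustration, not from the chapter): with A positive definite, M = [[A, B], [Bᵀ, C]] is positive semidefinite if and only if the Schur complement C − BᵀA⁻¹B is:

```python
import numpy as np

# Schur-complement test: with A positive definite, the block matrix
# M = [[A, B], [B^T, C]] is PSD iff S = C - B^T A^{-1} B is PSD.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0]])
M = np.block([[A, B], [B.T, C]])

S = C - B.T @ np.linalg.solve(A, B)       # Schur complement of A in M
print(np.linalg.eigvalsh(M).min(), S[0, 0])  # both nonnegative here
```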
In this chapter, it is proved that the set of multivariate functions generated by shallow networks is dense in the space of continuous functions on a compact set if and only if the activation function is not a polynomial. For the specific choice of the ReLU activation function, a two-sided estimate of the approximation rate of Lipschitz functions by shallow networks is also provided. The argument for the lower estimate makes use of an upper estimate on the VC-dimension of shallow ReLU networks.
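A minimal concrete instance of approximation by shallow ReLU networks (a toy illustration, not from the chapter): two hidden neurons already represent the Lipschitz function |x| exactly, since |x| = ReLU(x) + ReLU(−x):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

# Two hidden ReLU neurons represent the Lipschitz function |x| exactly:
# |x| = relu(x) + relu(-x).
x = np.linspace(-2.0, 2.0, 101)
net = relu(x) + relu(-x)        # inner weights +1 and -1, outer weights 1 and 1
err = np.max(np.abs(net - np.abs(x)))
print(err)  # 0.0: exact representation
```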
In this chapter, the problem of optimal recovery is studied relatively to a model set defined through approximation properties. For the two situations emphasized in the previous chapter, it is shown that the intrinsic errors can be computed exactly and that the linear optimal recovery maps can be efficiently constructed.
This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that provide more background on some of the more abstract concepts.
This chapter describes methods based on gradient information that achieve faster rates than basic algorithms such as those described in Chapter 3. These accelerated gradient methods, most notably the heavy-ball method and Nesterov’s optimal method, use the concept of momentum: each step combines information not only from recent gradient values but also from earlier steps. These methods are described and analyzed using Lyapunov functions. The cases of convex and strongly convex functions are analyzed separately. We motivate these methods using continuous-time limits, which link gradient methods to dynamical systems described by differential equations. We also mention the conjugate gradient method, which was developed separately from the other methods but which also makes use of momentum. Finally, we discuss the concept of lower bounds on algorithmic complexity, introducing a function on which no method based on gradients can attain convergence faster than a certain given rate.
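The effect of momentum is easy to see on an ill-conditioned quadratic. The sketch below (a toy illustration, not the book's code) compares plain gradient descent with the heavy-ball method, using the classical parameter choices α = 4/(√L + √μ)² and β = ((√L − √μ)/(√L + √μ))² for quadratics:

```python
import numpy as np

# Minimize f(x) = 0.5 x^T Q x (minimizer x* = 0) with gradient descent
# versus Polyak's heavy-ball method on an ill-conditioned quadratic.
Q = np.diag([1.0, 100.0])
mu, L = 1.0, 100.0                     # strong convexity and smoothness constants

def run(alpha, beta, iters=200):
    x = np.array([1.0, 1.0])
    x_prev = x.copy()
    for _ in range(iters):
        # momentum step: gradient term plus a multiple of the previous step
        x, x_prev = x - alpha * (Q @ x) + beta * (x - x_prev), x
    return np.linalg.norm(x)           # distance to the minimizer

gd = run(alpha=1.0 / L, beta=0.0)      # plain gradient descent
hb = run(alpha=4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2,
         beta=((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2)
print(gd, hb)                          # heavy-ball error is far smaller
```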
Here, we describe methods for minimizing a smooth function over a closed convex set, using gradient information. We first state results that characterize optimality of points in a way that can be checked, and describe the vital operation of projection onto the feasible set. We next describe the projected gradient algorithm, which is in a sense the extension of the steepest-descent method to the constrained case, analyze its convergence, and describe several extensions. We next analyze the conditional-gradient method (also known as “Frank-Wolfe”) for the case in which the feasible set is compact and demonstrate sublinear convergence of this approach when the objective function is convex.
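A minimal sketch of the projected gradient algorithm (a toy illustration, not the book's code): each iteration takes a gradient step and then projects back onto the feasible set, here the box [0, 1]², where projection is just componentwise clipping:

```python
import numpy as np

# Projected gradient for min f(x) = ||x - c||^2 over the box [0, 1]^2,
# where c = (2, -1) lies outside the box; the solution is the projection (1, 0).
c = np.array([2.0, -1.0])

def project_box(x):
    return np.clip(x, 0.0, 1.0)        # projection onto [0, 1]^2

x = np.zeros(2)
for _ in range(100):
    grad = 2 * (x - c)
    x = project_box(x - 0.25 * grad)   # gradient step, then projection
print(x)  # -> close to [1., 0.]
```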
Here, we discuss concepts of duality for convex optimization problems, and algorithms that make use of these concepts. We define the Lagrangian function and its augmented Lagrangian counterpart. We use the Lagrangian to derive optimality conditions for constrained optimization problems in which the constraints are expressed as linear algebraic conditions. We introduce the dual problem, discuss the concepts of weak and strong duality, and show the existence of positive duality gaps in certain settings. Next, we discuss the dual subgradient method, the augmented Lagrangian method, and the alternating direction method of multipliers (ADMM), which are useful for several types of data science problems.
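As a concrete instance of ADMM on a data science problem (a toy sketch in the scaled-dual form, not the book's code), the lasso min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z alternates a ridge-type solve, a soft-thresholding step, and a dual update:

```python
import numpy as np

# ADMM for the lasso: min 0.5 ||A x - b||^2 + lam ||z||_1  s.t.  x = z.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, rho = 0.5, 1.0

def soft(v, t):                         # soft-thresholding: prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = z = u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))  # x-update
    z = soft(x + u, lam / rho)          # z-update (prox step)
    u = u + x - z                       # scaled dual update
print(np.linalg.norm(x - z))            # primal residual, ~0 at convergence
```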
In this introductory chapter, we outline the ways in which various problems in data analysis can be formulated as optimization problems. Specifically, we discuss least squares problems, problems in matrix optimization (particularly those involving low-rank matrices), linear and kernel support vector machines, binary and multiclass logistic regression, and deep learning. We also outline the scope of the remainder of the book.
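One of the formulations listed above, binary logistic regression, can be written down in a few lines (a toy sketch with synthetic data, not from the chapter): with labels yᵢ ∈ {−1, +1}, minimize the average loss log(1 + exp(−yᵢ wᵀxᵢ)):

```python
import numpy as np

# Binary logistic regression as an unconstrained optimization problem.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))    # linearly separable labels

def loss_and_grad(w):
    m = y * (X @ w)                            # margins y_i w^T x_i
    loss = np.mean(np.log1p(np.exp(-m)))
    grad = -X.T @ (y / (1.0 + np.exp(m))) / len(y)
    return loss, grad

w = np.zeros(3)
for _ in range(50):                            # plain gradient descent
    loss, grad = loss_and_grad(w)
    w = w - grad
print(loss)                                    # well below log(2) ~ 0.693 at w = 0
```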
We describe the stochastic gradient method, the fundamental algorithm for several important problems in data science, including deep learning. We give several example problems for which this method is suitable, then describe its operation for the simple problem of computing a mean of a collection of values. We relate it to a classical method, the Kaczmarz method for solving a system of linear equalities and inequalities. Next, we describe the key assumptions to be used in convergence analysis, then describe the convergence rates attainable by several variants of stochastic gradient under several scenarios. Finally, we discuss several aspects of practical implementation of stochastic gradient, including minibatching and acceleration.
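The mean-computation example can be made concrete (a toy sketch, not the book's code): for min_x (1/n) Σᵢ ½(x − aᵢ)², one pass of stochastic gradient with step length 1/(k+1) reproduces the running average of the data exactly:

```python
import numpy as np

# Stochastic gradient for min_x (1/n) sum_i 0.5 (x - a_i)^2,
# whose solution is the mean of the a_i.
a = np.array([4.0, 8.0, 15.0, 16.0, 23.0, 42.0])

x = 0.0
for k, a_k in enumerate(a):
    step = 1.0 / (k + 1)
    x -= step * (x - a_k)     # stochastic gradient of the k-th term only
print(x, a.mean())            # both ~18.0: the iterate is the running average
```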
We outline theoretical foundations for smooth optimization problems. First, we define the different types of minimizers (solutions) of unconstrained optimization problems. Next, we state Taylor’s theorem, the fundamental theorem of smooth optimization, which allows us to approximate general smooth functions by simpler (linear or quadratic) functions based on information at the current point. We show how minima can be characterized by optimality conditions involving the gradient or Hessian, which can be checked in practice. Finally, we define the convexity of sets and functions, an important property that arises often in practice and that can be exploited by the algorithms described in the remainder of the book.
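The optimality conditions can indeed be checked in practice; the sketch below (a toy illustration, not from the chapter) verifies them for a strongly convex quadratic f(x) = ½xᵀQx − cᵀx, whose stationary point solves Qx = c:

```python
import numpy as np

# Second-order sufficient conditions for f(x) = 0.5 x^T Q x - c^T x:
# at a minimizer, the gradient Q x - c vanishes and the Hessian Q is
# positive definite.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 1.0])

x_star = np.linalg.solve(Q, c)                # stationary point: Q x = c
grad = Q @ x_star - c
hess_eigs = np.linalg.eigvalsh(Q)             # Hessian spectrum
print(np.linalg.norm(grad), hess_eigs.min())  # gradient ~0, min eigenvalue > 0
```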
This chapter describes the coordinate descent approach, in which a single variable (or a block of variables) is updated at each iteration, usually based on partial derivative information for those variables, while the remainder are left unchanged. We describe two problems in machine learning for which this approach has potential advantages relative to the approaches described in previous chapters (which make use of the full gradient), and present convergence analyses for the randomized and cyclic versions of this approach. We show that convergence rates of block coordinate descent methods can be analyzed in a similar fashion to the basic single-component methods.
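A minimal sketch of cyclic coordinate descent (a toy illustration, not the book's code): for a strongly convex quadratic f(x) = ½xᵀQx − cᵀx, each coordinate is minimized exactly by zeroing the corresponding partial derivative while the others stay fixed:

```python
import numpy as np

# Cyclic coordinate descent with exact minimization for the strongly
# convex quadratic f(x) = 0.5 x^T Q x - c^T x (solution solves Q x = c).
Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
for _ in range(100):                    # 100 cycles over the coordinates
    for i in range(3):
        # zero the partial derivative (Q x)_i - c_i in coordinate i:
        x[i] = (c[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
print(np.linalg.norm(Q @ x - c))        # residual -> ~0
```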