In this chapter, we introduce some of the more popular ML algorithms. Our objective is to convey the basic concepts and main ideas, show how to utilize these algorithms in Matlab, and offer some examples. In particular, we discuss essential concepts in feature engineering and how to apply them in Matlab. Support vector machines (SVM), K-nearest neighbors (KNN), linear regression, the Naïve Bayes algorithm, and decision trees are introduced, and the fundamental underlying mathematics is explained while Matlab’s corresponding Apps are used to implement each of these algorithms. A special section on reinforcement learning is included, detailing the key concepts and basic mechanism of this third ML category. In particular, we showcase how to implement reinforcement learning in Matlab, how to make use of some of the Python libraries available online, and how to use reinforcement learning for controller design.
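To give a flavor of one of the algorithms named above, the following is a minimal KNN classifier sketched in Python with NumPy (our own illustration; the chapter itself works with Matlab's Apps, and the data and function name here are invented for the example):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest samples
    nearest = np.argsort(dists)[:k]
    # Majority vote over their class labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two toy clusters: class 0 near the origin, class 1 near (5, 5)
X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], float)
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.5, 0.5])))  # → 0
```

The same idea underlies Matlab's `fitcknn`; only the distance metric and the tie-breaking rule typically differ between implementations.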
Starting with the perceptron, in Chapter 6 we discuss the functioning, the training, and the use of neural networks. For the different neural network structures, the corresponding Matlab script is provided, and the limitations of the different neural network architectures are discussed. A detailed discussion of the backpropagation learning algorithm and its underlying mathematics is accompanied by simple examples as well as more sophisticated implementations in Matlab. Chapter 6 also includes considerations on quality measures of trained neural networks, such as the accuracy, recall, specificity, precision, and prevalence, and some of the derived quantities such as the F-score and the receiver operating characteristic plot. We also look at the overfitting problem and how to handle it during the neural network training process.
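The quality measures listed above all derive from the four entries of a binary confusion matrix. As a sketch (standard definitions; the function name and example counts are our own), they can be computed as follows:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard quality measures of a binary classifier, computed
    from the entries of its confusion matrix."""
    total = tp + fp + fn + tn
    accuracy    = (tp + tn) / total
    recall      = tp / (tp + fn)       # sensitivity, true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    precision   = tp / (tp + fp)       # positive predictive value
    prevalence  = (tp + fn) / total    # fraction of actual positives
    # F-score: harmonic mean of precision and recall
    f_score     = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "specificity": specificity, "precision": precision,
            "prevalence": prevalence, "f_score": f_score}

# Example confusion matrix: 40 TP, 10 FP, 10 FN, 40 TN
print(binary_metrics(40, 10, 10, 40)["accuracy"])  # → 0.8
```

Sweeping the decision threshold and plotting recall against (1 − specificity) yields the receiver operating characteristic curve mentioned in the chapter.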
Hybrid systems often try to combine the advantages of one algorithm with those of another while minimizing their respective disadvantages. Having discussed fuzzy logic and neural networks as well as a number of optimization algorithms, Chapter 7 presents several hybrid algorithms that can be used for optimization, controls, and modeling. In particular, we look at neural expert systems and expand these systems to neuro-fuzzy systems and adaptive neuro-fuzzy inference systems, which we use for control applications. While revisiting the Mamdani and Sugeno fuzzy inference systems, the Tsukamoto fuzzy system as well as different partitioning methods are discussed, such as grid, tree, and scatter partitioning. Examples using Matlab’s FIS app as well as Matlab’s ANFIS editor are used throughout the chapter.
Chapter 3 introduces the concept of the rule base and combines it with the material from Chapter 2 to construct different fuzzy inference systems, such as the Mamdani fuzzy inference system and the Sugeno fuzzy inference system. The Takagi-Sugeno fuzzy inference system is used to design fuzzy logic controllers, and Lyapunov theory is utilized to investigate the closed-loop stability of such controllers. Concepts such as local sector nonlinearity and global asymptotic stability using state-space models are introduced and discussed to fashion controllers for nonlinear systems. Throughout the chapter, Matlab’s FIS editor is used to design fuzzy inference systems and corresponding controllers.
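To make the Sugeno scheme concrete, here is a small Python sketch (our own illustration, not the book's Matlab code; the firing strengths and consequents are invented) of zero-order Sugeno inference, where each rule contributes a constant consequent weighted by its firing strength:

```python
def sugeno_output(firing_strengths, consequents):
    """Zero-order Sugeno inference: the crisp output is the
    firing-strength-weighted average of the rule consequents."""
    num = sum(w * z for w, z in zip(firing_strengths, consequents))
    den = sum(firing_strengths)
    return num / den

# Two rules fire with strengths 0.25 and 0.75; consequents are 10 and 20
print(sugeno_output([0.25, 0.75], [10.0, 20.0]))  # → 17.5
```

In the first-order Takagi-Sugeno form used for controller design, each constant consequent is replaced by a linear function of the inputs, which is what makes the sector-nonlinearity and Lyapunov analysis tractable.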
Starting with crisp set theory, fuzzy sets and concepts of fuzzy logic are introduced in Chapter 2. Some of the key operators are discussed and utilized in a number of examples. Membership functions, membership operators, their programming in Matlab, as well as logic operators using membership functions are explained. Along with conditional statements such as fuzzy rules and linguistic variables, concepts such as antecedents, consequents, and inference are discussed, and it is shown how to implement this type of reasoning in Matlab.
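The building blocks named above can be sketched in a few lines. The following Python example (our own illustration; the book implements these in Matlab, and the linguistic variables here are invented) shows a triangular membership function and the min operator as fuzzy AND:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Linguistic variable "temperature" with two fuzzy sets
cold = lambda t: trimf(t, -10, 0, 15)
warm = lambda t: trimf(t, 10, 25, 40)

# Rule: IF temperature is cold AND humidity is high THEN heater is on.
# Fuzzy AND is commonly taken as the minimum of the membership degrees.
firing = min(cold(5), 0.9)   # humidity membership assumed to be 0.9
```

The firing strength of a rule then scales (or clips) the consequent's membership function, which is the inference step developed in Chapter 3.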
Chapter 1 provides an introduction to the key concepts of the book, including supervised and unsupervised learning, reinforcement learning, and controls. The objective is to provide an overview of the many methods and algorithms and how they relate to each other as well as to controls applications.
Building on Chapter 6, this chapter expands the discussion of neural networks to include networks that have more than one hidden layer. Common structures such as the convolutional neural network (CNN) and the long short-term memory (LSTM) network are explained and used, along with Matlab’s Deep Network Designer App as well as Matlab script, to implement and train such networks. Issues such as vanishing or exploding gradients, normalization, and training strategies are discussed. Concepts that address overfitting and the vanishing or exploding gradient are introduced, including dropout and regularization. Transfer learning is discussed and showcased using Matlab’s Deep Network Designer App.
In this chapter, we establish the mathematical foundation for hard computing optimization algorithms. We look at the classical optimization approaches and extend our discussion to include iterative methods, which hold a special role in machine learning. In particular, we review the gradient descent method, Newton’s method, the conjugate gradient method, and the quasi-Newton method. Along with the discussion of these optimization methods, implementations using Matlab script as well as considerations for use in neural network training algorithms are provided. Finally, the Levenberg-Marquardt method is introduced, discussed, and implemented in Matlab script to compare its functioning with the other four iterative algorithms introduced in this chapter.
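The core iteration of gradient descent, the first of the methods listed above, fits in a few lines. This Python sketch (our own illustration; the chapter works in Matlab script, and the function names and step size here are invented) minimizes a simple quadratic:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Iterate x <- x - lr * grad(x) for a fixed number of steps."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); minimum at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # → 3.0
```

Newton's and quasi-Newton methods replace the fixed learning rate with curvature information (the Hessian or an approximation of it), which is what typically gives them faster convergence near a minimum.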
Edited by
Jong Chul Ye, Korea Advanced Institute of Science and Technology (KAIST), Yonina C. Eldar, Weizmann Institute of Science, Israel, Michael Unser, École Polytechnique Fédérale de Lausanne
We provide a short, self-contained introduction to deep neural networks that is aimed at mathematically inclined readers. We promote the use of a vector-matrix formalism that is well suited to the compositional structure of these networks and that facilitates the derivation and description of the backpropagation algorithm. We present a detailed analysis of supervised learning for the two most common scenarios, (i) multivariate regression and (ii) classification, which rely on the minimization of least-squares and cross-entropy criteria, respectively.
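The vector-matrix formalism described above can be sketched numerically. The following Python/NumPy example (our own illustration under that formalism, not code from the chapter; all names and sizes are invented) runs one forward and backward pass of a one-hidden-layer regression network trained with the least-squares criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_backward(W1, b1, W2, b2, X, Y):
    """Forward pass and backpropagation for a one-hidden-layer
    network under the least-squares criterion."""
    Z1 = X @ W1 + b1           # affine layer 1
    A1 = np.tanh(Z1)           # pointwise nonlinearity
    Yhat = A1 @ W2 + b2        # affine layer 2 (network output)
    loss = 0.5 * np.mean((Yhat - Y) ** 2)
    # Backward pass: chain rule applied layer by layer in matrix form
    G2 = (Yhat - Y) / Y.size          # d(loss)/d(Yhat)
    dW2 = A1.T @ G2
    db2 = G2.sum(axis=0)
    G1 = (G2 @ W2.T) * (1.0 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ G1
    db1 = G1.sum(axis=0)
    return loss, (dW1, db1, dW2, db2)

# Random data and parameters, just to exercise the pass
X = rng.normal(size=(20, 2))
Y = rng.normal(size=(20, 1))
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

loss, grads = forward_backward(W1, b1, W2, b2, X, Y)
print(f"loss = {loss:.4f}")
```

A standard sanity check on such a derivation is to compare each backpropagated gradient entry against a centered finite difference of the loss.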