This chapter discusses the generalized linear classifier that results from a convex optimization problem and, in general, takes a nonexplicit form. Random matrix theory is combined with leave-one-out arguments to handle the technical difficulty caused by this implicitness. Again, counterintuitive phenomena arise in popular machine learning methods such as logistic regression or SVM in the large-dimensional setting: a well-defined solution may not even exist, and when it does, it behaves dramatically differently from its small-dimensional counterpart.
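To give a rough feel for the implicitness at stake, the sketch below (a toy setup of our own with hypothetical sizes n, p, regularization lam, and synthetic Gaussian classes; not the chapter's code) fits an l2-regularized logistic regression by gradient descent: the classifier w is characterized only as the fixed point of its optimality condition and admits no closed form.

```python
# Hedged sketch (assumed toy sizes and data; not the chapter's code): the
# l2-regularized logistic regression classifier w solves the implicit equation
#   (1/n) * sum_i y_i x_i / (1 + exp(y_i x_i^T w)) = lam * w,
# so it has no closed form and must be found iteratively.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 400, 200, 1e-2                          # comparably large n and p
y = rng.choice([-1.0, 1.0], size=n)                 # binary labels
mu = np.ones(p) / np.sqrt(p)                        # class mean of unit norm
X = y[:, None] * mu + rng.standard_normal((n, p))   # two symmetric Gaussian classes

w = np.zeros(p)
for _ in range(500):                                # plain gradient descent on the ERM objective
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
    w -= 0.5 * grad
print("norm of gradient at w:", np.linalg.norm(grad))  # ~0: optimality condition holds
```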
This chapter covers the basics of random matrix theory within the unified framework of the resolvent and deterministic-equivalent approach. Historical and foundational random matrix results are presented in this framework, together with heuristic derivations as well as detailed proofs. Topics such as statistical inference and spiked models are covered. The concentration-of-measure framework, a recently developed yet very flexible and powerful technical approach, is discussed at the end of the chapter.
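For concreteness, the flavor of the resolvent and deterministic-equivalent formalism can be stated in the simplest (Marchenko–Pastur) setting; this is a standard result given here as an illustration, not a chapter excerpt. For $X \in \mathbb{R}^{p \times n}$ with i.i.d. zero-mean, unit-variance entries,

```latex
% Standard Marchenko-Pastur setting (illustration, not a chapter excerpt).
\[
  Q(z) = \Big( \tfrac{1}{n} X X^{\mathsf T} - z I_p \Big)^{-1},
  \qquad z \in \mathbb{C} \setminus \mathbb{R}_+ ,
\]
\[
  \bar Q(z) = m(z)\, I_p ,
  \qquad m(z) = \frac{1}{1 - c - z - c\, z\, m(z)} , \qquad c = \frac{p}{n},
\]
% so that, for instance, (1/p) tr Q(z) - m(z) -> 0 almost surely
% as n, p -> infinity with p/n -> c.
```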
This chapter discusses fundamental kernel methods, with applications to supervised (kernel ridge regression or LS-SVM), semi-supervised (graph Laplacian-based learning), and unsupervised (such as kernel spectral clustering) learning schemes. By focusing on the typical examples of distance- and inner-product-type kernels, we show how the large-dimensional kernel approach differs from our small-dimensional intuition and, perhaps more importantly, how random matrix theory plays a central role in understanding and tuning various kernel-based methods.
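The distance-concentration phenomenon underlying this analysis can be seen in a few lines. The following sketch (synthetic i.i.d. data; the normalization and bandwidth are our own choices) shows that all off-diagonal squared distances feeding a Gaussian kernel matrix concentrate around a single value, far from the small-dimensional picture of well-separated near and far neighbors.

```python
# Hedged sketch (synthetic i.i.d. data; bandwidth and normalization are our
# assumptions): in large dimensions, squared pairwise distances concentrate
# around a constant, so the Gaussian kernel matrix deviates strongly from
# small-dimensional intuition.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 500
X = rng.standard_normal((n, p)) / np.sqrt(p)    # ||x_i||^2 ~ 1

sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared pairwise distances
K = np.exp(-D2)                                 # Gaussian kernel, bandwidth 1

off = D2[~np.eye(n, dtype=bool)]
print("off-diagonal distances: mean %.3f, std %.3f" % (off.mean(), off.std()))
# mean ~ 2 and std ~ O(1/sqrt(p)): every point is (almost) equally far from
# every other, so K is a small fluctuation around exp(-2) * ones((n, n))
# (plus ones on the diagonal).
```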
This chapter discusses the fundamentally different mental images of large-dimensional machine learning (versus its small-dimensional counterpart), through the examples of sample covariance matrices and kernel matrices, on both synthetic and real data. Random matrix theory is presented as a flexible and powerful tool to assess, understand, and improve classical machine learning methods in this modern large-dimensional setting.
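A one-figure version of the sample covariance example can be reproduced in a few lines. The sketch below (identity population covariance; the sizes are our own choices) shows the sample eigenvalues spreading over the whole Marchenko–Pastur support rather than concentrating at the true value 1, as small-dimensional intuition would suggest.

```python
# Hedged sketch (i.i.d. Gaussian data, assumed sizes): with p comparable to n,
# the eigenvalues of the sample covariance (1/n) X X^T spread over the
# Marchenko-Pastur support [(1 - sqrt(c))^2, (1 + sqrt(c))^2] instead of
# concentrating at the true eigenvalue 1.
import numpy as np

rng = np.random.default_rng(2)
p, n = 512, 2048                       # c = p/n = 1/4
X = rng.standard_normal((p, n))        # population covariance = I_p
eig = np.linalg.eigvalsh(X @ X.T / n)  # sample covariance spectrum

c = p / n
print("empirical eigenvalue range: [%.3f, %.3f]" % (eig.min(), eig.max()))
print("Marchenko-Pastur support:   [%.3f, %.3f]" % ((1 - c**0.5)**2, (1 + c**0.5)**2))
```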
This chapter exploits the concentration-of-measure approach for real-data modeling, via recent advances in deep generative adversarial networks (GANs). This assessment theoretically supports the surprisingly good match between theory and practice observed on real-world data in previous chapters. A conclusion on the universality of large-dimensional machine learning is drawn at the end of the chapter.
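The modeling idea, that GAN outputs are Lipschitz images of Gaussian inputs and hence concentrated random vectors, can be illustrated with a toy generator (our own untrained two-layer ReLU map with assumed scalings, not an actual GAN):

```python
# Hedged sketch (untrained toy 'generator'; scalings are assumptions): a
# Lipschitz map G of a Gaussian vector z inherits concentration of measure,
# the property used to model GAN-generated data: Lipschitz statistics of G(z)
# fluctuate at rate O(1/sqrt(p)) regardless of the output dimension p.
import numpy as np

rng = np.random.default_rng(3)
d, p, trials = 128, 1024, 500

W1 = rng.standard_normal((p, d)) / np.sqrt(d)   # scalings keep G Lipschitz
W2 = rng.standard_normal((p, p)) / np.sqrt(p)

Z = rng.standard_normal((trials, d))            # Gaussian latent inputs
G = np.maximum(Z @ W1.T, 0.0) @ W2.T            # two-layer ReLU generator

stat = np.linalg.norm(G, axis=1) / np.sqrt(p)   # an O(1/sqrt(p))-Lipschitz statistic
print("mean %.3f, std %.4f" % (stat.mean(), stat.std()))  # tiny std: concentration
```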
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which supports a wide range of applications, from SVMs through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on synthetic as well as real-world data, with MATLAB and Python code provided on the accompanying website.
How can machine learning help the design of future communication networks – and how can future networks meet the demands of emerging machine learning applications? Discover the interactions between two of the most transformative and impactful technologies of our age in this comprehensive book. First, learn how modern machine learning techniques, such as deep neural networks, can transform how we design and optimize future communication networks. Accessible introductions to concepts and tools are accompanied by numerous real-world examples, showing you how these techniques can be used to tackle longstanding problems. Next, explore the design of wireless networks as platforms for machine learning applications – an overview of modern machine learning techniques and communication protocols will help you to understand the challenges, while new methods and design approaches will be presented to handle wireless channel impairments such as noise and interference, to meet the demands of emerging machine learning applications at the wireless edge.