Bridge the gap between theoretical concepts and their practical applications with this rigorous introduction to the mathematics underpinning data science. It covers essential topics in linear algebra, calculus and optimization, and probability and statistics, demonstrating their relevance in the context of data analysis. Key application topics include clustering, regression, classification, dimensionality reduction, network analysis, and neural networks. What sets this text apart is its focus on hands-on learning. Each chapter combines mathematical insights with practical examples, using Python to implement algorithms and solve problems. Self-assessment quizzes, warm-up exercises and theoretical problems foster both mathematical understanding and computational skills. Designed for advanced undergraduate students and beginning graduate students, this textbook serves as both an invitation to data science for mathematics majors and as a deeper excursion into mathematics for data science students.
Emphasizing how and why machine learning algorithms work, this introductory textbook bridges the gap between the theoretical foundations of machine learning and its practical algorithmic and code-level implementation. Over 85 thorough worked examples, in both Matlab and Python, demonstrate how algorithms are implemented and applied whilst illustrating the end result. Over 75 end-of-chapter problems empower students to develop their own code to implement these algorithms, equipping them with hands-on experience. Matlab coding examples demonstrate how a mathematical idea is converted from equations to code and provide a jumping-off point for students, supported by in-depth coverage of essential mathematics including multivariable calculus, linear algebra, probability and statistics, numerical methods, and optimization. Accompanied online by instructor lecture slides, downloadable Python code and additional appendices, this is an excellent introduction to machine learning for senior undergraduate and graduate students in Engineering and Computer Science.
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered have largely emerged within the last decade and include stochastic gradient MCMC, non-reversible MCMC, continuous-time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
Recommender systems are ubiquitous in modern life and are one of the main monetization channels for Internet technology giants. This book helps graduate students, researchers and practitioners to get to grips with this cutting-edge field and build the thorough understanding and practical skills needed to progress in the area. It not only introduces the applications of deep learning and generative AI for recommendation models, but also focuses on the industry architecture of recommender systems. The authors include a detailed discussion of the implementation solutions used by companies such as YouTube, Alibaba, Airbnb and Netflix, as well as the related machine learning infrastructure, including model serving, model training, feature storage and data stream processing.
Artificial intelligence is dramatically reshaping scientific research and is coming to play an essential role in scientific and technological development by enhancing and accelerating discovery across multiple fields. This book dives into the interplay between artificial intelligence and the quantum sciences, and is the outcome of a collaborative effort from world-leading experts. After presenting the key concepts and foundations of machine learning, a subfield of artificial intelligence, the book presents its applications in quantum chemistry and physics in an accessible way, enabling readers to engage with emerging literature on machine learning in science. By examining its state-of-the-art applications, readers will discover how machine learning is being applied within their own field and appreciate its broader impact on science and technology. This book is accessible to undergraduates and more advanced readers from physics, chemistry, engineering, and computer science. Online resources include Jupyter notebooks to expand and develop upon key topics introduced in the book.
Machine learning has become a dominant problem-solving technique in the modern world, with applications ranging from search engines and social media to self-driving cars and artificial intelligence. This lucid textbook presents the theoretical foundations of machine learning algorithms, and then illustrates each concept with its detailed implementation in Python to allow beginners to effectively implement the principles in real-world applications. All major techniques, such as regression, classification, clustering, deep learning, and association mining, are illustrated using step-by-step coding instructions to help inculcate a 'learning by doing' approach. The book has no prerequisites, and covers the subject from the ground up, including a detailed introductory chapter on the Python language. As such, it will be a valuable resource not only for students of computer science, but also for anyone looking for a foundation in the subject, as well as professionals looking for a ready reckoner.
As machine learning gains widespread adoption and integration in a variety of applications, including safety- and mission-critical systems, the need for robust evaluation methods grows more urgent. This book compiles scattered information on the topic from research papers and blogs to provide a centralized resource that is accessible to students, practitioners, and researchers across the sciences. The book examines meaningful metrics for diverse types of learning paradigms and applications, unbiased estimation methods, rigorous statistical analysis, fair training sets, and meaningful explainability, all of which are essential to building robust and reliable machine learning products. In addition to standard classification, the book discusses unsupervised learning, regression, image segmentation, and anomaly detection. The book also covers topics such as industry-strength evaluation, fairness, and responsible AI. Implementations using Python and scikit-learn are available on the book's website.
Maximise student engagement and understanding of matrix methods in data-driven applications with this modern teaching package. Students are introduced to matrices in two preliminary chapters, before progressing to advanced topics such as the nuclear norm, proximal operators and convex optimization. Highlighted applications include low-rank approximation, matrix completion, subspace learning, logistic regression for binary classification, robust PCA, dimensionality reduction and Procrustes problems. Extensively classroom-tested, the book includes over 200 multiple-choice questions suitable for in-class interactive learning or quizzes, as well as homework exercises (with solutions available for instructors). It encourages active learning with engaging 'explore' questions, with answers at the back of each chapter, and Julia code examples to demonstrate how the mathematics is actually used in practice. A suite of computational notebooks offers a hands-on learning experience for students. This is a perfect textbook for upper-level undergraduates and first-year graduate students who have taken a prior course in linear algebra basics.
Based on the authors' extensive teaching experience, this hands-on graduate-level textbook teaches how to carry out large-scale data analytics and design machine learning solutions for big data. With a focus on fundamentals, this extensively class-tested textbook walks students through key principles and paradigms for working with large-scale data, frameworks for large-scale data analytics (Hadoop, Spark), and explains how to implement machine learning to exploit big data. It is unique in covering the principles that aspiring data scientists need to know, without detail that can overwhelm. Real-world examples, hands-on coding exercises and labs combine with exceptionally clear explanations to maximize student engagement. Well-defined learning objectives, exercises with online solutions for instructors, lecture slides, and an accompanying suite of lab exercises of increasing difficulty in Jupyter Notebooks offer a coherent and convenient teaching package. An ideal teaching resource for courses on large-scale data analytics with machine learning in computer/data science departments.
Dive into the foundations of intelligent systems, machine learning, and control with this hands-on, project-based introductory textbook. Precise, clear introductions to core topics in fuzzy logic, neural networks, optimization, deep learning, and machine learning avoid the use of complex mathematical proofs and are supported by over 70 examples. Modular chapters built around a consistent learning framework enable tailored course offerings to suit different learning paths. Over 180 open-ended review questions support self-review and class discussion, over 120 end-of-chapter problems cement student understanding, and over 20 hands-on Arduino assignments connect theory to practice, supported by downloadable Matlab and Simulink code. Comprehensive appendices review the fundamentals of modern control, and contain practical information on implementing hands-on assignments using Matlab, Simulink, and Arduino. Accompanied by solutions for instructors, this is the ideal guide for senior undergraduate and graduate engineering students, and professional engineers, looking for an engaging and practical introduction to the field.
Discover the power of deep neural networks for image reconstruction with this state-of-the-art review of modern theories and applications. The background theory of deep learning is introduced step-by-step, and by incorporating modeling fundamentals this book explains how to implement deep learning in a variety of modalities, including X-ray, CT, MRI and others. Real-world examples demonstrate an interdisciplinary approach to medical image reconstruction processes, featuring numerous imaging applications. Recent clinical studies and innovative research activity in generative models and mathematical theory will inspire the reader towards new frontiers. This book is ideal for graduate students in Electrical or Biomedical Engineering or Medical Physics.
Providing a logical framework for student learning, this is the first textbook on adversarial learning. It introduces vulnerabilities of deep learning, then demonstrates methods for defending against attacks and making AI generally more robust. To help students connect theory with practice, it explains and evaluates attack-and-defense scenarios alongside real-world examples. Feasible, hands-on student projects, which increase in difficulty throughout the book, give students practical experience and help to improve their Python and PyTorch skills. Book chapters conclude with questions that can be used for classroom discussions. In addition to deep neural networks, students will also learn about logistic regression, naïve Bayes classifiers, and support vector machines. Written for senior undergraduate and first-year graduate courses, the book offers a window into research methods and current challenges. Online resources include lecture slides and image files for instructors, and software for early course projects for students.
Robust statistics is the study of designing estimators that perform well even when the dataset deviates significantly from the idealized modeling assumptions, such as in the presence of model misspecification or adversarial outliers. The classical statistical theory, dating back to pioneering works by Tukey and Huber, characterizes the information-theoretic limits of robust estimation for most common problems. A recent line of work in computer science gave the first computationally efficient robust estimators in high dimensions for a range of learning tasks. This reference text for graduate students, researchers, and professionals in machine learning theory provides an overview of recent developments in algorithmic high-dimensional robust statistics, presenting the underlying ideas in a clear and unified manner, while leveraging new perspectives on the developed techniques to provide streamlined proofs of these results. The most basic and illustrative results are analyzed in each chapter, while more tangential developments are explored in the exercises.
The mathematical theory of machine learning not only explains the current algorithms but can also motivate principled approaches for the future. This self-contained textbook introduces students and researchers of AI to the main mathematical techniques used to analyze machine learning algorithms, with motivations and applications. Topics covered include the analysis of supervised learning algorithms in the iid setting, the analysis of neural networks (e.g. neural tangent kernel and mean-field analysis), and the analysis of machine learning algorithms in the sequential decision setting (e.g. online learning, bandit problems, and reinforcement learning). Students will learn the basic mathematical tools used in the theoretical analysis of these machine learning problems and how to apply them to the analysis of various concrete algorithms. This textbook is perfect for readers who have some background knowledge of basic machine learning methods, but want to gain sufficient technical knowledge to understand research papers in theoretical machine learning.
Statistical and machine learning methods have many applications in the environmental sciences, including prediction and data analysis in meteorology, hydrology and oceanography; pattern recognition for satellite images from remote sensing; management of agriculture and forests; assessment of climate change; and much more. With rapid advances in machine learning in the last decade, this book provides an urgently needed, comprehensive guide to machine learning and statistics for students and researchers interested in environmental data science. It includes intuitive explanations covering the relevant background mathematics, with examples drawn from the environmental sciences. A broad range of topics is covered, including correlation, regression, classification, clustering, neural networks, random forests, boosting, kernel methods, evolutionary algorithms and deep learning, as well as the recent merging of machine learning and physics. End-of-chapter exercises allow readers to develop their problem-solving skills, and online datasets allow readers to practise analysis of real data.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This second volume, Inference, builds on the foundational topics established in volume I to introduce students to techniques for inferring unknown variables and quantities, including Bayesian inference, Markov chain Monte Carlo methods, maximum-likelihood estimation, hidden Markov models, Bayesian networks, and reinforcement learning. A consistent structure and pedagogy is employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including solutions for instructors), 180 solved examples, almost 200 figures, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This final volume, Learning, builds on the foundational topics established in volume I to provide a thorough introduction to learning methods, addressing techniques such as least-squares methods, regularization, online learning, kernel methods, feedforward and recurrent neural networks, meta-learning, and adversarial attacks. A consistent structure and pedagogy is employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including complete solutions for instructors), 280 figures, 100 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Inference, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, data science and inference.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy is employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.
This self-contained introduction to machine learning, designed from the start with engineers in mind, will equip students with everything they need to start applying machine learning principles and algorithms to real-world engineering problems. With a consistent emphasis on the connections between estimation, detection, information theory, and optimization, it includes: an accessible overview of the relationships between machine learning and signal processing, providing a solid foundation for further study; clear explanations of the differences between state-of-the-art techniques and more classical methods, equipping students with all the understanding they need to make informed technique choices; demonstration of the links between information-theoretical concepts and their practical engineering relevance; reproducible examples using Matlab, enabling hands-on student experimentation. Assuming only a basic understanding of probability and linear algebra, and accompanied by lecture slides and solutions for instructors, this is the ideal introduction to machine learning for engineering students of all disciplines.