- Publisher: Cambridge University Press
- Online publication date: November 2024
- Print publication year: 2024
- Online ISBN: 9781009003872
As machine learning gains widespread adoption across a variety of applications, including safety- and mission-critical systems, the need for robust evaluation methods grows more urgent. This book compiles information on the topic that is otherwise scattered across research papers and blogs into a centralized resource accessible to students, practitioners, and researchers across the sciences. It examines meaningful metrics for diverse learning paradigms and applications, unbiased estimation methods, rigorous statistical analysis, fair training sets, and meaningful explainability, all of which are essential to building robust and reliable machine learning products. In addition to standard classification, the book discusses unsupervised learning, regression, image segmentation, and anomaly detection, and covers topics such as industry-strength evaluation, fairness, and responsible AI. Implementations using Python and scikit-learn are available on the book's website.
‘By its nature, machine learning has always had evaluation at its heart. As the authors of this timely and important book note, the importance of doing evaluation properly is only increasing as we enter the age of machine learning deployment. The book showcases Japkowicz’ and Boukouvalas’ encyclopaedic knowledge of the subject as well as their accessible and lucid writing style. Quite simply required reading for machine learning researchers and professionals.’
Peter Flach - University of Bristol