
3 - A Framework for Secure Learning

from Part I - Overview of Adversarial Machine Learning

Published online by Cambridge University Press:  14 March 2019

Anthony D. Joseph (University of California, Berkeley)
Blaine Nelson (Google)
Benjamin I. P. Rubinstein (University of Melbourne)
J. D. Tygar (University of California, Berkeley)

Summary

In this chapter we introduce a framework for qualitatively assessing the security of machine learning systems that captures a broad set of security characteristics common to a number of related adversarial learning settings. There has been a rich set of work that examines the security of machine learning systems; here we survey prior studies of learning in adversarial environments, attacks against learning systems, and proposals for making systems secure against attacks. We identify different classes of attacks on machine learning systems (Section 3.3), categorizing a threat in terms of three crucial properties.
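As a hypothetical illustration, the taxonomy's three axes can be represented as a small data structure. The property names below follow the taxonomy of Barreno et al. (2006), cited later in this chapter; the code itself is our own sketch, not drawn from the text:

```python
from dataclasses import dataclass
from enum import Enum

# The three axes of the attack taxonomy (per Barreno et al. 2006):
class Influence(Enum):
    CAUSATIVE = "causative"      # attacker can alter the training data
    EXPLORATORY = "exploratory"  # attacker only probes a trained model

class SecurityViolation(Enum):
    INTEGRITY = "integrity"        # harmful instances slip past the detector
    AVAILABILITY = "availability"  # benign instances are blocked (denial of service)

class Specificity(Enum):
    TARGETED = "targeted"              # focused on particular instances
    INDISCRIMINATE = "indiscriminate"  # aimed at broad classes of instances

@dataclass(frozen=True)
class Threat:
    influence: Influence
    violation: SecurityViolation
    specificity: Specificity

# Example: a spam campaign crafted to evade an already-trained filter.
evasion = Threat(Influence.EXPLORATORY, SecurityViolation.INTEGRITY,
                 Specificity.TARGETED)
print(evasion.influence.value)  # exploratory
```

Classifying a concrete threat along all three axes at once is what gives the framework its descriptive power: two attacks that look similar operationally may occupy very different cells of this taxonomy.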

We also present secure learning as a game between an attacker and a defender; the taxonomy determines the structure of the game and its cost model. Further, the taxonomy provides a basis for evaluating the resilience of the systems we describe: by analyzing the threats against a system, we can construct defenses against them. The development of defensive learning techniques is more tentative, but we also discuss a variety of techniques that show promise for defending against different types of attacks.
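A minimal sketch of this game structure in Python. The detector, the candidate attacks, and the cost numbers are all invented for illustration; only the move order (defender trains, attacker best-responds, costs are tallied) reflects the framework:

```python
# A toy instance of the secure-learning game: the defender fits a
# threshold detector to benign data, then the attacker picks the
# malicious instance (if any) that evades it. All numbers are
# illustrative, not taken from the book.

def train(benign_scores):
    # Defender's move: set the detection threshold just above
    # the largest benign score observed in training.
    return max(benign_scores) + 0.5

def attacker_best_response(threshold, candidates):
    # Attacker's move: among the malicious candidates, choose the
    # most damaging one that still falls below the threshold.
    evading = [x for x in candidates if x < threshold]
    return max(evading) if evading else None

benign_scores = [1.0, 1.5, 2.0]
malicious_candidates = [2.25, 3.0, 9.0]  # lower score = more benign-looking

threshold = train(benign_scores)  # 2.5
attack = attacker_best_response(threshold, malicious_candidates)  # 2.25
defender_cost = 10.0 if attack is not None else 0.0  # cost of a missed attack
print(threshold, attack, defender_cost)  # 2.5 2.25 10.0
```

Iterating these two moves, with each player reacting to the other's last choice, is the basic shape of the games analyzed later in the book.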

The work we present not only provides a common language for thinking and writing about secure learning, but goes beyond that to show how the framework applies to both algorithm design and the evaluation of real-world systems. Not only does the framework elicit common themes in otherwise disparate domains, but it has also motivated our study of practical machine learning systems as presented in Chapters 5, 6, and 8. These foundational principles for characterizing attacks against learning systems are an essential first step if secure machine learning is to reach its potential as a tool for use in real systems in security-sensitive domains.

This chapter builds on earlier research (Barreno, Nelson, Sears, Joseph, & Tygar 2006; Barreno, Nelson, Joseph, & Tygar 2010; Barreno 2008).

Analyzing the Phases of Learning

Attacks can occur at each of the phases of the learning process that were outlined in Section 2.2. Figure 2.1(a) depicts how data flows through each phase of learning. We briefly outline how attacks against these phases differ.

The Measuring Phase

With knowledge of the measurement process, an adversary can design malicious instances to mimic the measurements of innocuous data. After a successful attack against the measurement mechanism, the system may require expensive reinstrumentation or redesign to accomplish its task.
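A toy sketch of such a mimicry attack, assuming the measurement step is a simple feature extractor known to the adversary. Everything here, from the token list to the threshold, is invented for illustration:

```python
# Toy mimicry against the measuring phase: the detector measures one
# feature of each message (the fraction of suspicious tokens) and flags
# anything above a threshold. Knowing the measurement mechanism, the
# adversary pads a malicious message with innocuous tokens until its
# measurement matches benign traffic. All details are illustrative.

SUSPICIOUS = {"winner", "prize", "claim"}

def measure(tokens):
    # Measurement mechanism: fraction of suspicious tokens.
    return sum(t in SUSPICIOUS for t in tokens) / len(tokens)

def is_flagged(tokens, threshold=0.2):
    return measure(tokens) > threshold

payload = ["claim", "prize"]
print(is_flagged(payload))  # True: measurement is 1.0

# Adversary's move: pad with innocuous filler until the measured
# feature value drops into the benign range.
padded = list(payload)
while is_flagged(padded):
    padded.append("meeting")  # innocuous filler token

print(is_flagged(padded), measure(padded))  # False 0.2
```

The payload still reaches its target unchanged; only its measured representation has been made to look innocuous, which is why a successful attack here can force the expensive reinstrumentation described above.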

Publisher: Cambridge University Press
Print publication year: 2019
