Results
Provable observation noise robustness for neural network control systems
Published online by Cambridge University Press: 08 January 2024, e1

Neural networks are vulnerable to adversarial perturbations: slight changes to inputs that can result in unexpected outputs. In neural network control systems, these inputs are often noisy sensor readings. In such settings, natural sensor noise, or an adversary who can manipulate the readings, may cause the system to fail. In this paper, we introduce the first technique to provably compute the minimum magnitude of sensor noise that can cause a neural network control system to violate a safety property from a given initial state. Our algorithm constructs a tree of possible successors with increasing noise until a specification is violated. We build on open-loop neural network verification methods to determine the least amount of noise that could change the control action at each step of a closed-loop execution. We prove that this method identifies the unsafe trajectory with the least noise that leads to a safety violation. We evaluate our method on four systems: the Cart Pole and LunarLander environments from OpenAI Gym, an aircraft collision avoidance system based on a neural network compression of ACAS Xu, and the SafeRL Aircraft Rejoin scenario. Our analysis produces unsafe trajectories in which deviations under 1% of the sensor noise range make the systems behave erroneously.
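
The abstract outlines a falsification search: grow a tree of closed-loop successors, using open-loop neural network verification to bound the least observation noise that can flip the controller's action at each step. The snippet below is a minimal sketch of that idea only, not the paper's implementation: the dynamics, the two-action policy `W`, the grid-based `min_action_changing_noise` helper (a brute-force stand-in for a real open-loop verifier), and the max-noise-over-trajectory cost are all hypothetical choices made for illustration.

```python
# Minimal sketch (not the authors' implementation) of the tree search the
# abstract describes. All models here are hypothetical stand-ins.
import heapq
import itertools

import numpy as np

# Stand-in two-action "neural network" policy (hypothetical).
W = np.array([[1.0, -1.0], [-1.0, 1.0]])


def controller(obs):
    """Pick the action with the largest logit under the stand-in policy."""
    return int(np.argmax(W @ obs))


def step(state, action):
    """Stand-in closed-loop dynamics: action 0 stabilizes, action 1 drifts."""
    return state * 0.9 if action == 0 else state + np.array([0.2, 0.2])


def is_unsafe(state):
    """Stand-in safety property: every coordinate stays below 1 in magnitude."""
    return bool(np.any(np.abs(state) >= 1.0))


def min_action_changing_noise(obs, eps_grid):
    """Hypothetical stand-in for an open-loop NN verifier: the smallest grid
    epsilon for which some corner of the L-infinity ball flips the action."""
    base = controller(obs)
    for eps in eps_grid:
        for corner in itertools.product((-eps, eps), repeat=len(obs)):
            if controller(obs + np.array(corner)) != base:
                return eps
    return None


def min_noise_to_violation(x0, horizon=50, eps_grid=(0.05, 0.1, 0.2)):
    """Best-first search over (max noise so far, state, depth): expand the
    nominal successor at unchanged cost and the action-flipped successor at
    the least action-changing noise, until safety is violated."""
    tiebreak = itertools.count()
    frontier = [(0.0, next(tiebreak), x0.tolist(), 0)]
    while frontier:
        noise, _, state, depth = heapq.heappop(frontier)
        state = np.array(state)
        if is_unsafe(state):
            return noise  # least max-noise over any trajectory to a violation
        if depth >= horizon:
            continue
        action = controller(state)  # identity observation model in this sketch
        heapq.heappush(frontier, (noise, next(tiebreak),
                                  step(state, action).tolist(), depth + 1))
        eps = min_action_changing_noise(state, eps_grid)
        if eps is not None:
            flipped = 1 - action  # the sketch has exactly two actions
            heapq.heappush(frontier, (max(noise, eps), next(tiebreak),
                                      step(state, flipped).tolist(), depth + 1))
    return None  # no violation reachable within the horizon on this grid


print(min_noise_to_violation(np.array([0.1, -0.1])))  # prints 0.05 here
```

Because a node's cost is the largest per-step noise along its trajectory, popping nodes in cost order means the first unsafe state reached carries the least such noise over the grid, which mirrors, under these toy assumptions, the minimality guarantee the abstract claims.
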
Question
Can we develop holistic approaches to delivering cyber-physical systems security?
Published online by Cambridge University Press: 03 May 2024, e2

Results
Multi-model workload specifications and their application to cyber-physical systems
Published online by Cambridge University Press: 03 May 2024, e3

Impact Paper
Robotic safe adaptation in unprecedented situations: the RoboSAPIENS project
Published online by Cambridge University Press: 29 October 2024, e4

Results
Template-based piecewise affine regression
Published online by Cambridge University Press: 06 November 2024, e5

Erratum
Multi-model workload specifications and their application to cyber-physical systems – ERRATUM
Published online by Cambridge University Press: 07 November 2024, e6