Robust in Practice: Adversarial Attacks on Quantum Machine Learning
Licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) license.
Abstract

State-of-the-art classical neural networks are known to be vulnerable to small, carefully crafted adversarial perturbations. An even more severe vulnerability has been noted for quantum machine learning (QML) models classifying Haar-random pure states. It stems from the concentration of measure phenomenon, a property of metric probability spaces, and is independent of the classification protocol. To provide insight into the adversarial robustness of quantum classifiers on real-world classification tasks, we focus on classifying a subset of encoded states that are smoothly generated from a Gaussian latent space. We show that this task is considerably less vulnerable than classifying Haar-random pure states: the robustness decreases only mildly polynomially in the number of qubits, in contrast to the exponential decrease for Haar-random pure states, suggesting that QML models can be useful for real-world classification tasks.
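As a rough numerical illustration (not taken from the paper), the concentration-of-measure effect behind the Haar-random result can be checked directly: for Haar-random pure states on n qubits, the expectation value of a fixed observable concentrates around its Haar average with a spread shrinking roughly like 2^(-n/2). The observable (Pauli Z on the first qubit), the seed, and the sample count below are arbitrary choices for this sketch.

    # Minimal sketch: expectation values over Haar-random states concentrate
    # exponentially in the number of qubits (Levy-type concentration).
    import numpy as np

    rng = np.random.default_rng(0)

    def haar_random_state(n_qubits, rng):
        """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
        d = 2 ** n_qubits
        psi = rng.normal(size=d) + 1j * rng.normal(size=d)
        return psi / np.linalg.norm(psi)

    def z1_expectation(psi):
        """<psi| Z x I^(n-1) |psi>: +1 weight where the first qubit is |0>, -1 where it is |1>."""
        half = len(psi) // 2
        probs = np.abs(psi) ** 2
        return probs[:half].sum() - probs[half:].sum()

    for n in range(2, 11, 2):
        samples = [z1_expectation(haar_random_state(n, rng)) for _ in range(2000)]
        # The standard deviation shrinks roughly like 2**(-n/2): measure concentration.
        print(f"n={n:2d}  std of <Z_1> = {np.std(samples):.4f}  (~2^(-n/2) = {2 ** (-n / 2):.4f})")

Under this concentration, almost every Haar-random state sits close to a classifier's decision boundary, which is why robustness degrades exponentially in that setting; the paper's point is that states drawn smoothly from a Gaussian latent space do not concentrate this way.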
