Universal Adversarial Perturbations for Speech Recognition Systems
eScholarship
Open Access Publications from the University of California


  • Author(s): Neekhara, Paarth; Hussain, Shehzeen; Pandey, Prakhar; Dubnov, Shlomo; McAuley, Julian; Koushanfar, Farinaz
Abstract

In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems. We propose an algorithm to find a single quasi-imperceptible perturbation that, when added to an arbitrary speech signal, is likely to fool the victim speech recognition model. Our experiments demonstrate the application of our proposed technique by crafting audio-agnostic universal perturbations for the state-of-the-art ASR system Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to a significant extent across models that are not available during training, by performing a transferability test on a WaveNet-based ASR system.
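The iterative scheme described above can be sketched in a few lines. The sketch below is purely illustrative and is not the paper's implementation: the actual attack optimizes a CTC loss against Mozilla DeepSpeech, whereas here a toy linear surrogate stands in so the universal-perturbation loop (sign-gradient ascent followed by projection onto an L-infinity ball, which keeps the perturbation quasi-imperceptible) is runnable end to end. All names and constants (`toy_loss_grad`, `EPS`, `LR`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 0.1          # L-inf bound keeping the perturbation quasi-imperceptible
LR = 0.02          # step size for the sign-gradient update
N, T = 8, 100      # toy dataset: 8 "audio" signals of length 100

signals = rng.uniform(-1.0, 1.0, size=(N, T))
w = rng.normal(size=T)  # parameters of a toy linear surrogate "model"

def toy_loss_grad(x):
    """Gradient of a toy loss we ascend to cause mis-transcription.

    Stand-in for the gradient of the victim ASR model's transcription
    loss with respect to the input audio.
    """
    return w  # d/dx (w @ x) = w

v = np.zeros(T)  # the universal perturbation, shared across all signals
for epoch in range(20):
    for x in signals:
        # gradient-sign ascent step on the (surrogate) transcription loss
        v = v + LR * np.sign(toy_loss_grad(x + v))
        # project back onto the L-inf ball of radius EPS
        v = np.clip(v, -EPS, EPS)
```

After the loop, the same single vector `v` would be added to any input signal; the projection guarantees no sample of the perturbation exceeds `EPS` in magnitude.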

