
Comparative Study on Noise-Augmented Training and its Effect on Adversarial Robustness in ASR Systems

Computer Speech and Language (CSL), 2024
Main: 11 pages · 4 figures · 4 tables · Bibliography: 3 pages
Abstract

In this study, we investigate whether noise-augmented training can concurrently improve adversarial robustness in automatic speech recognition (ASR) systems. We conduct a comparative analysis of the adversarial robustness of four ASR architectures, each trained under three augmentation conditions: (1) background noise, speed variations, and reverberation; (2) speed variations only; (3) no data augmentation. We then evaluate all resulting models against both white-box and black-box adversarial attacks. Our results demonstrate that noise augmentation not only enhances model performance on noisy speech but also improves the models' robustness to adversarial attacks.
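The paper does not specify its augmentation pipeline, but the two core operations named in the abstract, background-noise mixing and speed perturbation, can be sketched in plain NumPy. The function names, the SNR-based mixing scheme, and the linear-resampling speed change below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a background-noise waveform into speech at a target SNR (dB).

    Illustrative sketch: the paper does not describe its exact mixing scheme.
    """
    # Tile or trim the noise so it covers the full speech signal.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale noise so 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def speed_perturb(speech, factor):
    """Naive speed perturbation via linear resampling (factor > 1 speeds up)."""
    idx = np.arange(0, len(speech), factor)
    return np.interp(idx, np.arange(len(speech)), speech)
```

In practice, toolkits such as Kaldi or torchaudio provide higher-quality resampling and reverberation (e.g. convolution with room impulse responses), but the sketch captures the kind of transform applied to training utterances.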
