Boosting the Transferability of Audio Adversarial Examples with Acoustic Representation Optimization

With the widespread application of automatic speech recognition (ASR) systems, their vulnerability to adversarial attacks has been extensively studied. However, most existing adversarial examples are generated against specific individual models, resulting in a lack of transferability. In real-world scenarios, attackers often cannot access detailed information about the target model, making query-based attacks infeasible. To address this challenge, we propose a technique called Acoustic Representation Optimization that aligns adversarial perturbations with low-level acoustic characteristics derived from speech representation models. Rather than relying on model-specific, higher-layer abstractions, our approach leverages fundamental acoustic representations that remain consistent across diverse ASR architectures. By enforcing an acoustic representation loss that guides perturbations toward these robust, lower-level representations, we enhance the cross-model transferability of adversarial examples without degrading audio quality. Our method is plug-and-play and can be integrated with any existing attack method. We evaluate our approach on three modern ASR models, and the experimental results demonstrate that our method significantly improves the transferability of adversarial examples generated by previous methods while preserving audio quality.
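To make the idea concrete, below is a minimal PyTorch sketch of how an acoustic representation loss could be added to an existing audio attack. It is an illustration under assumptions, not the paper's implementation: the speech representation model (a wav2vec 2.0 checkpoint), the layer index used as the "low-level" representation, the MSE alignment term, the reference utterance used as the alignment target, and all hyperparameters (ALPHA, EPSILON, STEP_SIZE, NUM_STEPS) are hypothetical choices for exposition. The base attack objective attack_loss_fn stands in for whatever existing attack method is being boosted.

    import torch
    from transformers import Wav2Vec2Model

    # Hypothetical hyperparameters (not taken from the paper).
    ALPHA = 0.5          # weight of the acoustic representation term
    EPSILON = 0.002      # L_inf perturbation budget on the raw waveform
    STEP_SIZE = 2e-4     # per-step update size
    NUM_STEPS = 100      # attack iterations
    LOW_LAYER = 2        # early layer assumed to carry low-level acoustics

    # Self-supervised speech representation model; its early layers serve as
    # the (assumed) source of model-agnostic low-level acoustic representations.
    repr_model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
    repr_model.eval()
    for p in repr_model.parameters():
        p.requires_grad_(False)

    def low_level_features(waveform: torch.Tensor) -> torch.Tensor:
        """Return an early hidden state as the low-level acoustic representation."""
        out = repr_model(waveform, output_hidden_states=True)
        return out.hidden_states[LOW_LAYER]

    def acoustic_representation_loss(adv: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        """Distance between the adversarial audio's low-level features and a
        reference representation (here, that of a reference utterance)."""
        return torch.nn.functional.mse_loss(low_level_features(adv), reference)

    def attack(x: torch.Tensor, attack_loss_fn, reference_audio: torch.Tensor) -> torch.Tensor:
        """PGD-style attack on raw audio (shape: batch x samples) whose objective is
        augmented with the representation alignment term. attack_loss_fn(adv) is the
        base attack objective to minimize, e.g. a surrogate ASR model's CTC loss."""
        with torch.no_grad():
            ref = low_level_features(reference_audio)
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(NUM_STEPS):
            adv = x + delta
            loss = attack_loss_fn(adv) + ALPHA * acoustic_representation_loss(adv, ref)
            loss.backward()
            with torch.no_grad():
                delta -= STEP_SIZE * delta.grad.sign()   # descend the combined loss
                delta.clamp_(-EPSILON, EPSILON)          # keep the perturbation small
            delta.grad.zero_()
        return (x + delta).detach()

Because the alignment term only depends on the representation model and the raw waveform, it can be added to any gradient-based attack objective without modifying the underlying attack, which is what makes such a formulation plug-and-play.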
@article{jin2025_2503.19591,
  title   = {Boosting the Transferability of Audio Adversarial Examples with Acoustic Representation Optimization},
  author  = {Weifei Jin and Junjie Su and Hejia Wang and Yulin Ye and Jie Hao},
  journal = {arXiv preprint arXiv:2503.19591},
  year    = {2025}
}