Generalizable Detection of Audio Deepfakes

In this paper, we present a comprehensive study aimed at improving the generalization of audio deepfake detection models. We evaluate several pre-trained backbones, including Wav2Vec2, WavLM, and Whisper, on a diverse collection of datasets drawn from the ASVspoof challenges and additional sources. Our experiments examine how different data augmentation strategies and loss functions affect detection performance. The results show substantial gains in generalization, with our best system surpassing the top-ranked single system in the ASVspoof 5 Challenge. This study offers insights into optimizing audio models for more robust deepfake detection and provides a foundation for future research in this critical area.
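
To make the setup concrete, the sketch below shows one common way to build a detector on top of a pre-trained speech backbone: a Wav2Vec2 encoder followed by temporal pooling and a small classification head for bonafide vs. spoof. The checkpoint name, pooling strategy, head size, and loss are illustrative assumptions for this example, not the authors' exact architecture or training recipe.

```python
# Minimal sketch: pre-trained speech backbone + classification head for
# audio deepfake detection. Uses HuggingFace transformers; the specific
# checkpoint and head are placeholder choices, not the paper's configuration.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class DeepfakeDetector(nn.Module):
    def __init__(self, checkpoint: str = "facebook/wav2vec2-base", num_classes: int = 2):
        super().__init__()
        # Pre-trained self-supervised backbone producing frame-level features.
        self.backbone = Wav2Vec2Model.from_pretrained(checkpoint)
        hidden = self.backbone.config.hidden_size
        # Mean-pooling over time followed by a linear bonafide/spoof classifier.
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, waveforms: torch.Tensor) -> torch.Tensor:
        # waveforms: (batch, samples) raw 16 kHz audio.
        features = self.backbone(waveforms).last_hidden_state  # (batch, frames, hidden)
        pooled = features.mean(dim=1)                          # temporal average pooling
        return self.head(pooled)                               # (batch, num_classes) logits


if __name__ == "__main__":
    model = DeepfakeDetector()
    dummy = torch.randn(2, 16000)  # two 1-second clips of placeholder audio
    logits = model(dummy)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1]))
    print(logits.shape, loss.item())
```

In practice, data augmentation (e.g., added noise or codec simulation) would be applied to the waveforms before the forward pass, and the cross-entropy loss here could be swapped for the alternative loss functions studied in the paper.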
```bibtex
@article{lopez2025_2507.01750,
  title   = {Generalizable Detection of Audio Deepfakes},
  author  = {Jose A. Lopez and Georg Stemmer and H{\'e}ctor Cordourier Maruri},
  journal = {arXiv preprint arXiv:2507.01750},
  year    = {2025}
}
```