Inference Attacks for X-Vector Speaker Anonymization

Abstract
We revisit the privacy-utility tradeoff of x-vector speaker anonymization. Existing approaches quantify privacy by training complex speaker verification or identification models that are later used as attacks. Instead, we propose a novel inference attack for de-anonymization. Our attack is simple and ML-free, yet we show experimentally that it outperforms existing approaches.
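The abstract does not detail the attack itself. As a hypothetical illustration only (not the authors' method), the sketch below shows one generic way an ML-free de-anonymization attack on x-vectors can be framed: each anonymized x-vector is linked to the enrollment speaker whose x-vector is nearest under cosine similarity, with no trained verification model involved. The 512-dimensional vectors and the function names are assumptions made for the example.

```python
# Hypothetical sketch: a generic ML-free linkage attack on x-vectors.
# This is NOT the paper's attack (the abstract does not specify it); it only
# illustrates de-anonymization without training a speaker verification model:
# link each anonymized x-vector to the closest enrollment x-vector.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a (n, d) and rows of b (m, d)."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T


def link_speakers(anonymized: np.ndarray, enrollment: np.ndarray) -> np.ndarray:
    """For each anonymized x-vector, return the index of the most similar
    enrollment speaker (nearest neighbor under cosine similarity)."""
    scores = cosine_similarity(anonymized, enrollment)  # shape (n_trials, n_speakers)
    return scores.argmax(axis=1)


# Toy usage with random 512-dimensional "x-vectors" (dimension is an assumption).
rng = np.random.default_rng(0)
enrollment = rng.normal(size=(10, 512))                      # one x-vector per known speaker
anonymized = enrollment + 0.1 * rng.normal(size=(10, 512))   # lightly perturbed trial vectors
print(link_speakers(anonymized, enrollment))                 # ideally recovers [0, 1, ..., 9]
```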
@article{bauer2025_2505.08978,
  title   = {Inference Attacks for X-Vector Speaker Anonymization},
  author  = {Luke Bauer and Wenxuan Bao and Malvika Jadhav and Vincent Bindschaedler},
  journal = {arXiv preprint arXiv:2505.08978},
  year    = {2025}
}