Model Inversion (MI) attacks aim to exploit the output information of target models to reconstruct privacy-sensitive training data, raising critical concerns about the privacy vulnerabilities of Deep Neural Networks (DNNs). Unfortunately, in tandem with the rapid evolution of MI attacks, the absence of a comprehensive benchmark with standardized metrics and reproducible implementations has emerged as a formidable challenge. This deficiency hinders objective comparison of methodological advances and reliable assessment of defense efficacy. To address this critical gap, we build MIBench, the first practical benchmark for the systematic evaluation of model inversion attacks and defenses. The benchmark is built on an extensible and reproducible modular toolbox that currently integrates 19 state-of-the-art attack and defense methods and encompasses 9 standardized evaluation protocols. On this foundation, we conduct extensive evaluations from multiple perspectives to holistically compare and analyze the methods across different scenarios, including the impact of target resolution, model predictive power, defense performance, and adversarial robustness.
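To make the attack setting concrete, the sketch below illustrates the core loop of a typical white-box, GAN-based MI attack of the kind such a benchmark integrates: a latent code of a pretrained generator is optimized so that the target classifier assigns the generated image to a chosen identity. This is a minimal sketch under assumed interfaces; the function name, hyperparameters, and the `generator`/`target_model` signatures are illustrative and are not MIBench's actual API.

```python
import torch
import torch.nn.functional as F

def invert_identity(target_model, generator, target_class,
                    steps=1500, lr=0.02, latent_dim=100, device="cpu"):
    """Optimize a GAN latent code so that the generated image is classified
    as `target_class` by the target model, recovering class-representative
    (privacy-sensitive) features of the training data."""
    target_model.eval()
    generator.eval()

    # Start from a random latent code; the GAN prior keeps candidates realistic.
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    label = torch.tensor([target_class], device=device)

    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)               # candidate reconstruction
        logits = target_model(x)       # target model's output on the candidate
        # Identity loss: push the reconstruction toward the target class.
        loss = F.cross_entropy(logits, label)
        loss.backward()
        optimizer.step()

    return generator(z).detach()       # reconstructed image for the target identity
```

In practice, attack variants in this family differ mainly in the image prior, the identity loss, and additional regularizers, which is precisely what standardized evaluation protocols in a benchmark are intended to compare fairly.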
@article{qiu2025_2410.05159,
  title={MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense},
  author={Yixiang Qiu and Hongyao Yu and Hao Fang and Tianqu Zhuang and Wenbo Yu and Bin Chen and Xuan Wang and Shu-Tao Xia and Ke Xu},
  journal={arXiv preprint arXiv:2410.05159},
  year={2025}
}