MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense

7 October 2024
Yixiang Qiu
Hongyao Yu
Hao Fang
Tianqu Zhuang
Wenbo Yu
Bin Chen
Xuan Wang
Shu-Tao Xia
Ke Xu
Topics: AAML
Abstract

Model Inversion (MI) attacks aim to leverage the output information of target models to reconstruct privacy-sensitive training data, raising critical concerns about the privacy vulnerabilities of Deep Neural Networks (DNNs). As MI attacks have evolved rapidly, the absence of a comprehensive benchmark with standardized metrics and reproducible implementations has become a significant obstacle, hindering objective comparison of methodological advances and reliable assessment of defense efficacy. To address this gap, we build MIBench, the first practical benchmark for the systematic evaluation of model inversion attacks and defenses. The benchmark is built on an extensible, reproducible, modular toolbox that currently integrates 19 state-of-the-art attack and defense methods and provides 9 standardized evaluation protocols. Building on this foundation, we conduct extensive evaluations from multiple perspectives to holistically compare and analyze methods across different scenarios, including the impact of target resolution, model predictive power, defense performance, and adversarial robustness.
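To make the abstract's core idea concrete, the sketch below shows the basic optimization-based form of a model inversion attack: starting from noise, an input is optimized so that the frozen target model assigns high confidence to a chosen class, yielding a reconstruction representative of that class's training data. This is a generic, minimal illustration only; the function name invert_class, the image shape, and all hyperparameters are hypothetical and do not reflect MIBench's actual interface or any of the 19 specific methods it integrates.

```python
# Minimal sketch of a white-box, optimization-based model inversion attack.
# Hypothetical illustration; not the MIBench API.
import torch
import torch.nn.functional as F

def invert_class(target_model: torch.nn.Module,
                 target_class: int,
                 input_shape=(1, 3, 64, 64),  # assumed image dimensions
                 steps: int = 500,
                 lr: float = 0.1) -> torch.Tensor:
    """Reconstruct an input that the frozen target model maps to target_class."""
    target_model.eval()
    # Start from random noise; only the input is optimized, not the model.
    x = torch.randn(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    label = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = target_model(x)
        # Maximize target-class confidence by minimizing cross-entropy.
        loss = F.cross_entropy(logits, label)
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep the reconstruction in valid pixel range
    return x.detach()
```

Note that the state-of-the-art attacks a benchmark like this covers typically optimize in a generative model's latent space rather than raw pixel space, which produces far more realistic reconstructions; that machinery is omitted here for brevity.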

View on arXiv: https://arxiv.org/abs/2410.05159
@article{qiu2025_2410.05159,
  title={MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense},
  author={Yixiang Qiu and Hongyao Yu and Hao Fang and Tianqu Zhuang and Wenbo Yu and Bin Chen and Xuan Wang and Shu-Tao Xia and Ke Xu},
  journal={arXiv preprint arXiv:2410.05159},
  year={2025}
}