Visual Privacy Auditing with Diffusion Models

12 March 2024
Kristian Schwethelm
Johannes Kaiser
Moritz Knolle
Sarah Lockfisch
Daniel Rueckert
Alexander Ziller
Topics: DiffM, AAML
Abstract

Data reconstruction attacks on machine learning models pose a substantial threat to privacy, potentially leaking sensitive information. Although defending against such attacks using differential privacy (DP) provides theoretical guarantees, determining appropriate DP parameters remains challenging. Current formal guarantees on the success of data reconstruction suffer from overly stringent assumptions regarding adversary knowledge about the target data, particularly in the image domain, raising questions about their real-world applicability. In this work, we empirically investigate this discrepancy by introducing a reconstruction attack based on diffusion models (DMs) that only assumes adversary access to real-world image priors and specifically targets the DP defense. We find that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as heuristic auditing tools for visualizing privacy leakage.
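The threat model in the abstract can be illustrated with a toy gradient-inversion experiment: an adversary who observes a model gradient optimizes a candidate input to match it, regularized by a data prior. Everything below is a hypothetical sketch, not the paper's method — the linear model, the Gaussian stand-in for the diffusion-model prior, and the optimizer settings are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)
d = 8
w = torch.randn(d, requires_grad=True)   # model parameters (toy linear model)
x_true = torch.randn(d)                  # the private datum the adversary targets
y = torch.tensor(1.0)

def loss(w, x, y):
    return (w @ x - y) ** 2

# Gradient the adversary observes (e.g., a single federated-learning update).
g_true = torch.autograd.grad(loss(w, x_true, y), w)[0].detach()

# Reconstruction: optimize a candidate x to match the observed gradient.
# A real attack would use a diffusion model's learned score as the prior;
# here a Gaussian prior (-log p(x) proportional to ||x||^2) is a crude stand-in.
x_hat = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    g_hat = torch.autograd.grad(loss(w, x_hat, y), w, create_graph=True)[0]
    match = ((g_hat - g_true) ** 2).sum()
    prior = 0.01 * (x_hat ** 2).sum()
    (match + prior).backward()
    opt.step()

match_init = (g_true ** 2).sum().item()   # gradient-match loss at the all-zeros start
g_final = torch.autograd.grad(loss(w, x_hat, y), w)[0]
match_final = ((g_final - g_true) ** 2).sum().item()
print(f"gradient-match loss: {match_init:.4f} -> {match_final:.6f}")
```

A stronger prior (better model of real-world image statistics) pulls the optimization toward plausible reconstructions — which is exactly the gap between formal DP reconstruction bounds and empirical risk that the paper probes.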

@article{schwethelm2024_2403.07588,
  title={Visual Privacy Auditing with Diffusion Models},
  author={Kristian Schwethelm and Johannes Kaiser and Moritz Knolle and Sarah Lockfisch and Daniel Rueckert and Alexander Ziller},
  journal={arXiv preprint arXiv:2403.07588},
  year={2024}
}