Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models

25 April 2025
Chen Chen
Daochang Liu
Mubarak Shah
Chang Xu
Abstract

Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts, yet their proclivity for memorizing training-set images has raised concerns about the originality of generated images and about privacy, potentially leading to legal complications for both model owners and users, particularly when memorized images contain proprietary content. Although methods to mitigate these issues have been proposed, enhancing privacy often comes at a significant cost to output utility, as measured by text-alignment scores. To bridge this gap, we introduce PRSS, a novel method that refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and incorporating semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state of the art.
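
The abstract frames PRSS as a refinement of classifier-free guidance, in which the sampler's noise prediction is steered by the difference between conditional and unconditional predictions. The following is a minimal sketch of where such a refinement would plug into a sampler, not the paper's implementation: the denoiser, the embedding shapes, and the "anchor" embedding hook are illustrative assumptions only.

import torch

def cfg_noise_prediction(unet, x_t, t, cond_emb, uncond_emb, guidance_scale=7.5):
    # Standard classifier-free guidance: push the unconditional prediction
    # toward the conditional one by a factor of guidance_scale.
    eps_cond = unet(x_t, t, cond_emb)
    eps_uncond = unet(x_t, t, uncond_emb)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def cfg_with_reanchored_prompt(unet, x_t, t, cond_emb, anchor_emb, guidance_scale=7.5):
    # Illustrative variant only: guide relative to an alternative "anchor"
    # embedding instead of the usual empty-prompt embedding. How PRSS actually
    # re-anchors the prompt or searches for semantic alternatives is defined
    # in the paper, not reproduced here.
    eps_cond = unet(x_t, t, cond_emb)
    eps_anchor = unet(x_t, t, anchor_emb)
    return eps_anchor + guidance_scale * (eps_cond - eps_anchor)

if __name__ == "__main__":
    # Dummy denoiser standing in for a text-conditioned U-Net.
    def dummy_unet(x, t, emb):
        return 0.1 * x + emb.mean()

    x_t = torch.randn(1, 4, 8, 8)   # noisy latent
    cond = torch.randn(77, 768)     # prompt embedding (illustrative shape)
    uncond = torch.zeros(77, 768)   # empty-prompt embedding
    eps = cfg_noise_prediction(dummy_unet, x_t, 50, cond, uncond)
    print(eps.shape)  # torch.Size([1, 4, 8, 8])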

@article{chen2025_2504.18032,
  title={Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models},
  author={Chen Chen and Daochang Liu and Mubarak Shah and Chang Xu},
  journal={arXiv preprint arXiv:2504.18032},
  year={2025}
}