Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation

5 March 2025
Hiroshi Takahashi
Tomoharu Iwata
Atsutoshi Kumagai
Yuuki Yamanaka
Tomoya Yamashita
DiffM
Abstract

Diffusion models are powerful generative models, but they often generate sensitive data that users do not want, mainly because the unlabeled training data frequently contain such sensitive data. Since labeling all sensitive data in large-scale unlabeled training data is impractical, we address this problem by using a small amount of labeled sensitive data. In this paper, we propose positive-unlabeled diffusion models, which prevent the generation of sensitive data using unlabeled and sensitive data. Our approach can approximate the evidence lower bound (ELBO) for normal (negative) data using only unlabeled and sensitive (positive) data. Therefore, even without labeled normal data, we can maximize the ELBO for normal data and minimize it for labeled sensitive data, ensuring the generation of only normal data. Through experiments across various datasets and settings, we demonstrate that our approach can prevent the generation of sensitive images without compromising image quality.
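The key step in the abstract is estimating the normal-data ELBO without any labeled normal data. In standard positive-unlabeled learning, the unlabeled distribution decomposes as p_u(x) = pi * p_p(x) + (1 - pi) * p_n(x), which lets the negative-data expectation be recovered from unlabeled and positive batches. The sketch below is a minimal illustration of that estimator under this assumption; the class prior `pi`, the weight `lam`, and the exact combined objective are illustrative choices, not the paper's reported formulation.

import torch

def pu_normal_elbo(elbo_unlabeled: torch.Tensor,
                   elbo_sensitive: torch.Tensor,
                   pi: float) -> torch.Tensor:
    """Estimate the mean ELBO over normal (negative) data.

    Uses the PU decomposition p_u = pi * p_p + (1 - pi) * p_n, so
    E_n[ELBO] = (E_u[ELBO] - pi * E_p[ELBO]) / (1 - pi).
    """
    return (elbo_unlabeled.mean() - pi * elbo_sensitive.mean()) / (1.0 - pi)

# Toy usage: per-sample ELBO values standing in for a diffusion model's ELBO.
elbo_u = torch.randn(128)   # ELBOs of an unlabeled batch
elbo_p = torch.randn(16)    # ELBOs of a labeled sensitive batch
pi = 0.1                    # assumed prior of sensitive data in the unlabeled set
lam = 1.0                   # assumed weight on the sensitive-data penalty

# Sketch of the objective: maximize the estimated normal-data ELBO while
# minimizing the ELBO on labeled sensitive data.
loss = -pu_normal_elbo(elbo_u, elbo_p, pi) + lam * elbo_p.mean()

In practice, a plain plug-in estimator like this can go negative and destabilize training; PU methods often clip or correct it, so treat this as the conceptual form rather than a training-ready loss.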

@article{takahashi2025_2503.03789,
  title={Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation},
  author={Hiroshi Takahashi and Tomoharu Iwata and Atsutoshi Kumagai and Yuuki Yamanaka and Tomoya Yamashita},
  journal={arXiv preprint arXiv:2503.03789},
  year={2025}
}