Data Attribution for Text-to-Image Models by Unlearning Synthesized Images

21 February 2025
Sheng-Yu Wang
Aaron Hertzmann
Alexei A. Efros
Jun-Yan Zhu
Richard Zhang
Abstract

The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. Influence is defined such that, for a given output, if a model is retrained from scratch without the most influential images, the model would fail to reproduce the same output. Unfortunately, directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining models from scratch. In our work, we propose an efficient data attribution method by simulating unlearning the synthesized image. We achieve this by increasing the training loss on the output image, without catastrophic forgetting of other, unrelated concepts. We then identify training images with significant loss deviations after the unlearning process and label these as influential. We evaluate our method with a computationally intensive but "gold-standard" retraining from scratch and demonstrate our method's advantages over previous methods.
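The procedure the abstract describes — train a model, simulate unlearning of a synthesized output by increasing its training loss, then rank training images by how much their loss deviates afterward — can be illustrated with a toy stand-in. This is not the authors' implementation: a linear least-squares model replaces the text-to-image model, feature vectors replace images, and all names (`x_syn`, `deviation`, etc.) are hypothetical.

```python
import numpy as np

# Toy sketch of unlearning-based attribution (illustrative only).
rng = np.random.default_rng(0)
n, d = 50, 32
X = rng.normal(size=(n, d))                # training "images"
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)  # training targets

# Step 1: train the model (least-squares fit).
theta = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: a synthesized output that closely resembles training image 3.
x_syn = X[3] + 0.01 * rng.normal(size=d)
y_syn = x_syn @ theta                      # the model's own output

def per_example_loss(params):
    return (X @ params - y) ** 2

base_loss = per_example_loss(theta)

# Step 3: "unlearn" the synthesized image by gradient *ascent* on its loss.
# At the fitted optimum this gradient is exactly zero, so we nudge the
# parameters slightly (a real diffusion loss is stochastic, never exactly
# minimized, so no such nudge is needed there).
theta_u = theta + 1e-3 * x_syn
for _ in range(20):
    grad = 2.0 * (x_syn @ theta_u - y_syn) * x_syn
    theta_u += 0.01 * grad                 # ascend: increase loss on x_syn

# Step 4: training images whose loss deviates most after unlearning are
# labeled influential; here that recovers index 3.
deviation = per_example_loss(theta_u) - base_loss
most_influential = int(np.argmax(deviation))
```

The ascent loop plays the role of unlearning and the loss-deviation ranking plays the role of attribution; the paper's method applies this idea to diffusion-model weights while guarding against catastrophic forgetting of unrelated concepts, which this toy omits.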

@article{wang2025_2406.09408,
  title={Data Attribution for Text-to-Image Models by Unlearning Synthesized Images},
  author={Sheng-Yu Wang and Aaron Hertzmann and Alexei A. Efros and Jun-Yan Zhu and Richard Zhang},
  journal={arXiv preprint arXiv:2406.09408},
  year={2025}
}