Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models

13 March 2025
Reza Shirkavand
Peiran Yu
Shangqian Gao
Gowthami Somepalli
Tom Goldstein
Heng Huang
Abstract

Recent advances in diffusion generative models have yielded remarkable progress. While the quality of generated content continues to improve, these models have grown considerably in size and complexity. This increasing computational burden poses significant challenges, particularly in resource-constrained deployment scenarios such as mobile devices. The combination of model pruning and knowledge distillation has emerged as a promising solution to reduce computational demands while preserving generation quality. However, this technique inadvertently propagates undesirable behaviors, including the generation of copyrighted content and unsafe concepts, even when such instances are absent from the fine-tuning dataset. In this paper, we propose a novel bilevel optimization framework for pruned diffusion models that consolidates the fine-tuning and unlearning processes into a unified phase. Our approach maintains the principal advantages of distillation, namely efficient convergence and style transfer capabilities, while selectively suppressing the generation of unwanted content. This plug-in framework is compatible with various pruning and concept unlearning methods, facilitating efficient, safe deployment of diffusion models in controlled environments.
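For intuition only: a generic bilevel objective of this kind couples an upper-level unlearning loss with a lower-level distillation problem. The notation below is ours, not necessarily the paper's exact formulation, with x the upper-level variable and θ the pruned student's weights:

\min_{x}\ \mathcal{L}_{\mathrm{unlearn}}\big(x,\ \theta^{*}(x)\big)
\qquad \text{s.t.} \qquad
\theta^{*}(x) \in \arg\min_{\theta}\ \mathcal{L}_{\mathrm{distill}}\big(x,\ \theta\big)

The sketch below is a minimal, self-contained PyTorch illustration of this idea. For brevity it collapses the bilevel structure into a first-order weighted proxy that optimizes both losses in a single phase; all names (DenoiserStub, retain_cond, erase_cond, anchor_cond, lam) and the anchor-prompt unlearning loss are illustrative assumptions, not the paper's actual architecture, losses, or API.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiserStub(nn.Module):
    # Tiny stand-in for a diffusion denoiser (a U-Net in practice).
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.SiLU(), nn.Linear(32, dim))
    def forward(self, x_t, cond):
        return self.net(x_t + cond)

teacher = DenoiserStub().eval()          # full-size pretrained model, frozen
for p in teacher.parameters():
    p.requires_grad_(False)
student = DenoiserStub()                 # stands in for the pruned model

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
lam = 0.1                                # trade-off between the two objectives

dim, n = 16, 8
retain_cond = torch.randn(n, dim)        # embeddings of ordinary prompts
erase_cond  = torch.randn(n, dim)        # embeddings of the unwanted concept
anchor_cond = torch.randn(n, dim)        # neutral prompts the concept maps to

for step in range(100):
    x_t = torch.randn(n, dim)            # placeholder noisy latents
    # Lower level: distillation keeps the pruned student close to the
    # teacher on ordinary prompts (generation quality, fast convergence).
    l_distill = F.mse_loss(student(x_t, retain_cond), teacher(x_t, retain_cond))
    # Upper level: steer the student's prediction for the unwanted concept
    # toward the teacher's prediction under a neutral anchor prompt.
    l_unlearn = F.mse_loss(student(x_t, erase_cond), teacher(x_t, anchor_cond))
    # First-order, single-phase proxy of the bilevel objective.
    loss = l_distill + lam * l_unlearn
    opt.zero_grad()
    loss.backward()
    opt.step()

A faithful bilevel solver would instead differentiate through (or approximate) the inner distillation solution, e.g. via implicit gradients or unrolled inner steps; the weighted sum above is only the simplest single-loop surrogate.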

@article{shirkavand2025_2412.15341,
  title={Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models},
  author={Reza Shirkavand and Peiran Yu and Shangqian Gao and Gowthami Somepalli and Tom Goldstein and Heng Huang},
  journal={arXiv preprint arXiv:2412.15341},
  year={2025}
}