ResearchTrend.AI
ConsisLoRA: Enhancing Content and Style Consistency for LoRA-based Style Transfer

13 March 2025
Bolin Chen
Baoquan Zhao
Haoran Xie
Yi Cai
Qing Li
Xudong Mao
Abstract

Style transfer involves transferring the style of a reference image to the content of a target image. Recent methods based on LoRA (Low-Rank Adaptation) have shown promise in effectively capturing the style of a single image. However, these approaches still face significant challenges such as content inconsistency, style misalignment, and content leakage. In this paper, we comprehensively analyze the limitations of the standard diffusion parameterization, which learns to predict noise, in the context of style transfer. To address these issues, we introduce ConsisLoRA, a LoRA-based method that enhances both content and style consistency by optimizing the LoRA weights to predict the original image rather than the noise. We also propose a two-step training strategy that decouples the learning of content and style from the reference image. To effectively capture both the global structure and local details of the content image, we introduce a stepwise loss transition strategy. Additionally, we present an inference guidance method that enables continuous control over content and style strengths during inference. Through both qualitative and quantitative evaluations, our method demonstrates significant improvements in content and style consistency while effectively reducing content leakage.
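The abstract's central change of parameterization, training the LoRA weights to predict the original image x0 rather than the added noise, can be illustrated with the standard DDPM forward process. The sketch below is a toy NumPy illustration, not the paper's implementation: the function names and the simple MSE losses are assumptions for exposition, and the actual ConsisLoRA objective, schedules, and LoRA details are not given in this page.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, eps, alpha_bar):
    # DDPM forward process q(x_t | x_0):
    # x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def noise_prediction_loss(eps_pred, eps):
    # Standard parameterization: the network is trained to predict the noise.
    return np.mean((eps_pred - eps) ** 2)

def x0_prediction_loss(x0_pred, x0):
    # x0-parameterization (as the abstract describes for ConsisLoRA):
    # the LoRA-adapted network is optimized to predict the original image.
    return np.mean((x0_pred - x0) ** 2)

# Toy data standing in for a (flattened) image latent and Gaussian noise.
x0 = rng.standard_normal((4, 8))
eps = rng.standard_normal((4, 8))
alpha_bar = 0.7
x_t = forward_diffuse(x0, eps, alpha_bar)

# The two parameterizations are related: given a noise estimate eps_pred,
# an x0 estimate follows by inverting the forward process.
eps_pred = eps + 0.1 * rng.standard_normal((4, 8))  # hypothetical network output
x0_pred = (x_t - np.sqrt(1.0 - alpha_bar) * eps_pred) / np.sqrt(alpha_bar)

loss_eps = noise_prediction_loss(eps_pred, eps)
loss_x0 = x0_prediction_loss(x0_pred, x0)
```

Note that although the two losses are related by this inversion, they weight timesteps differently: the factor (1 - alpha_bar) / alpha_bar amplifies noise errors at high noise levels, which is one reason the choice of parameterization matters in practice.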

@article{chen2025_2503.10614,
  title={ConsisLoRA: Enhancing Content and Style Consistency for LoRA-based Style Transfer},
  author={Bolin Chen and Baoquan Zhao and Haoran Xie and Yi Cai and Qing Li and Xudong Mao},
  journal={arXiv preprint arXiv:2503.10614},
  year={2025}
}