SaMam: Style-aware State Space Model for Arbitrary Image Style Transfer

20 March 2025
Hongda Liu, Longguang Wang, Ye Zhang, Ziru Yu, Yulan Guo
Topic: Mamba
Abstract

A global effective receptive field plays a crucial role in image style transfer (ST) for obtaining high-quality stylized results. However, existing ST backbones (e.g., CNNs and Transformers) incur huge computational complexity to achieve global receptive fields. Recently, the State Space Model (SSM), especially its improved variant Mamba, has shown great potential for long-range dependency modeling with linear complexity, which offers an approach to resolving this dilemma. In this paper, we develop a Mamba-based style transfer framework, termed SaMam. Specifically, a Mamba encoder is designed to efficiently extract content and style information. In addition, a style-aware Mamba decoder is developed to flexibly adapt to various styles. Moreover, to address the problems of local pixel forgetting, channel redundancy, and spatial discontinuity in existing SSMs, we introduce both local enhancement and a zigzag scan. Qualitative and quantitative results demonstrate that our SaMam outperforms state-of-the-art methods in terms of both accuracy and efficiency.
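
Since this page carries no code, here is a minimal sketch of the zigzag (boustrophedon) scan idea the abstract refers to, assuming image tokens laid out on a 2-D grid before being flattened for a 1-D SSM pass. The function name zigzag_indices and the NumPy framing are illustrative choices, not taken from the paper.

import numpy as np

def zigzag_indices(h: int, w: int) -> np.ndarray:
    # Row-major indices of an h x w grid, with every odd row reversed,
    # so consecutive positions in the 1-D scan are also adjacent pixels.
    # A plain raster scan instead jumps from the end of one row to the
    # start of the next, which is the spatial discontinuity in question.
    idx = np.arange(h * w).reshape(h, w)
    idx[1::2] = idx[1::2, ::-1]  # flip odd rows
    return idx.reshape(-1)

# Example: order a 4x4 feature map for a 1-D SSM pass
# (the scan itself is linear in sequence length).
order = zigzag_indices(4, 4)
tokens = np.random.randn(4 * 4, 8)   # 16 tokens, 8 channels
scanned = tokens[order]              # zigzag-ordered sequence fed to the SSM
restored = np.empty_like(scanned)
restored[order] = scanned            # invert the permutation after scanning

How SaMam combines this scan with its local enhancement module is detailed in the paper itself; the sketch only illustrates the scan-order idea.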

View on arXiv: https://arxiv.org/abs/2503.15934
@article{liu2025_2503.15934,
  title={SaMam: Style-aware State Space Model for Arbitrary Image Style Transfer},
  author={Hongda Liu and Longguang Wang and Ye Zhang and Ziru Yu and Yulan Guo},
  journal={arXiv preprint arXiv:2503.15934},
  year={2025}
}