
MegaSR: Mining Customized Semantics and Expressive Guidance for Image Super-Resolution

Abstract

Pioneering text-to-image (T2I) diffusion models have ushered in a new era of real-world image super-resolution (Real-ISR), significantly enhancing the visual perception of reconstructed images. However, existing methods typically inject uniform, abstract textual semantics into all blocks, overlooking both the distinct semantic requirements at different depths and the fine-grained, concrete semantics inherently present in the images themselves. Moreover, relying on a single type of guidance further undermines reconstruction consistency. To address these issues, we propose MegaSR, a novel framework that mines customized block-wise semantics and expressive guidance for diffusion-based ISR. In contrast to uniform textual semantics, MegaSR adapts flexibly to multi-granularity semantic awareness by dynamically incorporating image attributes at each block. Furthermore, we experimentally identify HED edge maps, depth maps, and segmentation maps as the most expressive forms of guidance, and propose a multi-stage aggregation strategy to modulate them into the T2I model. Extensive experiments demonstrate the superiority of MegaSR in terms of semantic richness and structural consistency.
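To make the multi-guidance idea concrete, the sketch below fuses the three guidance maps named in the abstract (HED edges, depth, segmentation) into a single conditioning tensor by weighting, stacking, and per-channel normalization. This is a hypothetical illustration only: the abstract does not specify the paper's multi-stage aggregation strategy, and the function name, weights, and normalization are assumptions.

```python
import numpy as np

def aggregate_guidance(edge, depth, seg, weights=(1.0, 1.0, 1.0)):
    """Fuse per-pixel guidance maps (HED edge, depth, segmentation)
    into one (3, H, W) conditioning tensor.

    Hypothetical sketch; the actual aggregation in MegaSR is
    multi-stage and not described at this level in the abstract.
    """
    # Weight each guidance map before fusing.
    maps = [w * m.astype(np.float32)
            for w, m in zip(weights, (edge, depth, seg))]
    stacked = np.stack(maps, axis=0)  # shape: (3, H, W)
    # Normalize each channel to [0, 1] so the three modalities
    # contribute on comparable scales.
    mins = stacked.min(axis=(1, 2), keepdims=True)
    maxs = stacked.max(axis=(1, 2), keepdims=True)
    return (stacked - mins) / np.maximum(maxs - mins, 1e-8)
```

In practice the fused tensor would be fed to a conditioning branch (e.g. a ControlNet-style adapter) of the T2I diffusion model rather than used directly.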

@article{li2025_2503.08096,
  title={MegaSR: Mining Customized Semantics and Expressive Guidance for Image Super-Resolution},
  author={Xinrui Li and Jianlong Wu and Xinchuan Huang and Chong Chen and Weili Guan and Xian-Sheng Hua and Liqiang Nie},
  journal={arXiv preprint arXiv:2503.08096},
  year={2025}
}