Dream-IF: Dynamic Relative EnhAnceMent for Image Fusion

13 March 2025
Xingxin Xu
Bing Cao
Yinan Xia
Pengfei Zhu
Qinghua Hu
ArXiv · PDF · HTML
Abstract

Image fusion aims to integrate comprehensive information from images acquired through multiple sources. However, images captured by diverse sensors often encounter various degradations that can negatively affect fusion quality. Traditional fusion methods generally treat image enhancement and fusion as separate processes, overlooking the inherent correlation between them; notably, the dominant regions in one modality of a fused image often indicate areas where the other modality might benefit from enhancement. Inspired by this observation, we introduce the concept of dominant regions for image enhancement and present a Dynamic Relative EnhAnceMent framework for Image Fusion (Dream-IF). This framework quantifies the relative dominance of each modality across different layers and leverages this information to facilitate reciprocal cross-modal enhancement. By integrating the relative dominance derived from image fusion, our approach supports not only image restoration but also a broader range of image enhancement applications. Furthermore, we employ prompt-based encoding to capture degradation-specific details, which dynamically steer the restoration process and promote coordinated enhancement in both multi-modal image fusion and image enhancement scenarios. Extensive experimental results demonstrate that Dream-IF consistently outperforms its counterparts.
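The core idea, relative dominance guiding reciprocal cross-modal enhancement, can be illustrated with a small sketch. The paper's exact formulation is not given here, so the class name, the channel-pooled energy used as a dominance proxy, the softmax normalization, and the residual enhancement step below are all illustrative assumptions rather than the authors' implementation.

# Hedged sketch of dominance-guided cross-modal enhancement (not the authors' code).
import torch
import torch.nn as nn


class DominanceGuidedEnhancer(nn.Module):
    """Estimates per-pixel relative dominance between two modality feature maps
    and uses it to enhance the weaker modality (one plausible reading of Dream-IF)."""

    def __init__(self, channels: int):
        super().__init__()
        # Lightweight refinement convolutions applied to the guiding modality.
        self.refine_a = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.refine_b = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # Channel-pooled activation magnitude as a crude dominance proxy (assumption).
        energy_a = feat_a.abs().mean(dim=1, keepdim=True)   # (B, 1, H, W)
        energy_b = feat_b.abs().mean(dim=1, keepdim=True)

        # Relative dominance: softmax over the two modalities at each location.
        dominance = torch.softmax(torch.cat([energy_a, energy_b], dim=1), dim=1)
        dom_a, dom_b = dominance[:, :1], dominance[:, 1:]

        # Reciprocal enhancement: regions where A dominates guide extra refinement
        # of B, and vice versa, in residual form so original features are kept.
        enhanced_a = feat_a + dom_b * self.refine_a(feat_b)
        enhanced_b = feat_b + dom_a * self.refine_b(feat_a)
        return enhanced_a, enhanced_b

In this reading, the dominance map plays the role of a spatial gate: the modality that is locally stronger contributes guidance where the other is degraded, which matches the abstract's observation that dominant regions in one modality indicate where the other may benefit from enhancement. The prompt-based encoding of degradation-specific details described in the abstract is not modeled in this sketch.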

View on arXiv
@article{xu2025_2503.10109,
  title={Dream-IF: Dynamic Relative EnhAnceMent for Image Fusion},
  author={Xingxin Xu and Bing Cao and Yinan Xia and Pengfei Zhu and Qinghua Hu},
  journal={arXiv preprint arXiv:2503.10109},
  year={2025}
}