FusionNet: Multi-model Linear Fusion Framework for Low-light Image Enhancement

27 April 2025
Kangbiao Shi
Yixu Feng
Tao Hu
Yu Cao
Peng Wu
Yijin Liang
Yanning Zhang
Qingsen Yan
Abstract

The advent of Deep Neural Networks (DNNs) has driven remarkable progress in low-light image enhancement (LLIE), with diverse architectures (e.g., CNNs and Transformers) and color spaces (e.g., sRGB, HSV, HVI) yielding impressive results. Recent efforts have sought to leverage the complementary strengths of these paradigms, offering promising solutions to enhance performance across varying degradation scenarios. However, existing fusion strategies are hindered by challenges such as parameter explosion, optimization instability, and feature misalignment, limiting further improvements. To overcome these issues, we introduce FusionNet, a novel multi-model linear fusion framework that operates in parallel to effectively capture global and local features across diverse color spaces. By incorporating a linear fusion strategy underpinned by Hilbert space theoretical guarantees, FusionNet mitigates network collapse and reduces excessive training costs. Our method achieved 1st place in the CVPR2025 NTIRE Low Light Enhancement Challenge. Extensive experiments conducted on synthetic and real-world benchmark datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods in terms of both quantitative and qualitative results, delivering robust enhancement under diverse low-light conditions.
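To make the linear-fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of combining the outputs of parallel enhancement branches (e.g., models operating in different color spaces) with learnable convex weights. The module name, shapes, and weighting scheme are illustrative assumptions only; they are not taken from the paper and do not represent the actual FusionNet architecture.

```python
import torch
import torch.nn as nn

class LinearFusion(nn.Module):
    """Hypothetical sketch: linearly fuse outputs of parallel enhancement
    branches with learnable per-branch weights. Not the paper's module,
    only an illustration of the linear-fusion idea from the abstract."""

    def __init__(self, num_branches: int):
        super().__init__()
        # One scalar weight per branch; softmax keeps the combination convex.
        self.weights = nn.Parameter(torch.zeros(num_branches))

    def forward(self, branch_outputs: list) -> torch.Tensor:
        # branch_outputs: tensors of identical shape (B, C, H, W), assumed to
        # already be mapped back to a common color space (e.g., sRGB).
        w = torch.softmax(self.weights, dim=0)
        stacked = torch.stack(branch_outputs, dim=0)        # (N, B, C, H, W)
        return (w.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

# Example: fuse three hypothetical branch outputs for a small image batch.
outputs = [torch.rand(2, 3, 256, 256) for _ in range(3)]
fused = LinearFusion(num_branches=3)(outputs)
print(fused.shape)  # torch.Size([2, 3, 256, 256])
```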

@article{shi2025_2504.19295,
  title={FusionNet: Multi-model Linear Fusion Framework for Low-light Image Enhancement},
  author={Kangbiao Shi and Yixu Feng and Tao Hu and Yu Cao and Peng Wu and Yijin Liang and Yanning Zhang and Qingsen Yan},
  journal={arXiv preprint arXiv:2504.19295},
  year={2025}
}