One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion

Advanced image fusion methods mostly prioritise high-level missions, where task interaction struggles with semantic gaps and requires complex bridging mechanisms. In contrast, we propose to leverage low-level vision tasks from digital photography fusion, allowing for effective feature interaction through pixel-level supervision. This new paradigm provides strong guidance for unsupervised multimodal fusion without relying on abstract semantics, enhancing task-shared feature learning for broader applicability. Owing to the hybrid image features and enhanced universal representations, the proposed GIFNet supports diverse fusion tasks, achieving high performance across both seen and unseen scenarios with a single model. Uniquely, experimental results reveal that our framework also supports single-modality enhancement, offering superior flexibility for practical applications. Our code will be available at this https URL.
@article{cheng2025_2502.19854,
  title={One Model for ALL: Low-Level Task Interaction Is a Key to Task-Agnostic Image Fusion},
  author={Chunyang Cheng and Tianyang Xu and Zhenhua Feng and Xiaojun Wu and Zhangyong Tang and Hui Li and Zeyang Zhang and Sara Atito and Muhammad Awais and Josef Kittler},
  journal={arXiv preprint arXiv:2502.19854},
  year={2025}
}