Task-driven Image Fusion with Learnable Fusion Loss

Abstract

Multi-modal image fusion aggregates information from multiple sensor sources, achieving superior visual quality and perceptual features compared to any single source, and often improves downstream tasks. However, current fusion methods for downstream tasks still use predefined fusion objectives that may be misaligned with those tasks, limiting adaptive guidance and reducing model flexibility. To address this, we propose Task-driven Image Fusion (TDFusion), a fusion framework with a learnable fusion loss guided by the task loss. Specifically, the fusion loss includes learnable parameters modeled by a neural network called the loss generation module, which is supervised by the downstream task loss in a meta-learning manner. The learning objective is to minimize the task loss of the fused images after the fusion module has been optimized with the fusion loss. Iterative updates between the fusion module and the loss module ensure that the fusion network evolves toward minimizing the task loss, guiding the fusion process toward the task objectives. Because training relies entirely on the downstream task loss, TDFusion can be adapted to any specific task and applied to any architecture of fusion and task networks. We demonstrate TDFusion's effectiveness through fusion experiments on four datasets, together with evaluations on semantic segmentation and object detection.
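
The training scheme described in the abstract is a bilevel (meta-learning) loop: an inner step virtually updates the fusion network with the generated fusion loss, an outer step supervises the loss generation module with the downstream task loss computed on the virtually updated fusion output, and the fusion network is then actually updated. Below is a minimal PyTorch sketch of that loop under toy assumptions; FusionNet, LossGenModule, the stand-in task head, the weighted pixel loss, and the inner learning rate are all illustrative placeholders, not the authors' implementation.

    # Minimal meta-learning sketch of the loop described in the abstract.
    # All networks and losses here are toy placeholders, not the paper's code.
    import torch
    import torch.nn as nn
    from torch.func import functional_call

    class FusionNet(nn.Module):          # placeholder: fuses two 1-channel images
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, 3, padding=1)
        def forward(self, a, b):
            return self.conv(torch.cat([a, b], dim=1))

    class LossGenModule(nn.Module):      # placeholder: predicts per-image loss weights
        def __init__(self):
            super().__init__()
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(2, 2), nn.Softmax(dim=1))
        def forward(self, a, b):
            return self.head(torch.cat([a, b], dim=1))

    def fusion_loss_fn(fused, a, b, w):  # weighted pixel loss with learnable weights
        la = (fused - a).abs().mean(dim=(1, 2, 3))
        lb = (fused - b).abs().mean(dim=(1, 2, 3))
        return (w[:, 0] * la + w[:, 1] * lb).mean()

    fusion_net, loss_gen = FusionNet(), LossGenModule()
    task_net = nn.Conv2d(1, 5, 1)        # stand-in for a downstream segmentation head
    task_loss_fn = nn.CrossEntropyLoss()
    opt_fusion = torch.optim.Adam(fusion_net.parameters(), lr=1e-4)
    opt_loss = torch.optim.Adam(loss_gen.parameters(), lr=1e-4)
    inner_lr = 1e-3

    a, b = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)  # dummy modalities
    target = torch.randint(0, 5, (4, 32, 32))                  # dummy seg labels

    # Inner step: virtual update of the fusion net with the generated fusion loss.
    params = dict(fusion_net.named_parameters())
    fused = functional_call(fusion_net, params, (a, b))
    w = loss_gen(a, b)
    inner = fusion_loss_fn(fused, a, b, w)
    grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
    virtual = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer step: task loss on the virtually updated fusion net supervises loss_gen.
    fused_v = functional_call(fusion_net, virtual, (a, b))
    outer = task_loss_fn(task_net(fused_v), target)
    opt_loss.zero_grad(); outer.backward(); opt_loss.step()

    # Commit a real fusion-net update with the (now updated) fusion loss.
    fused = fusion_net(a, b)
    loss = fusion_loss_fn(fused, a, b, loss_gen(a, b).detach())
    opt_fusion.zero_grad(); loss.backward(); opt_fusion.step()

The create_graph=True inner gradient is what lets the outer task loss backpropagate through the virtual update into the loss generation module, realizing the "task loss after optimizing with the fusion loss" objective.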

@article{bai2025_2412.03240,
  title={Task-driven Image Fusion with Learnable Fusion Loss},
  author={Haowen Bai and Jiangshe Zhang and Zixiang Zhao and Yichen Wu and Lilun Deng and Yukun Cui and Tao Feng and Shuang Xu},
  journal={arXiv preprint arXiv:2412.03240},
  year={2025}
}