
Training and Inference within 1 Second -- Tackle Cross-Sensor Degradation of Real-World Pansharpening with Efficient Residual Feature Tailoring

Main: 7 pages · Appendix: 8 pages · Bibliography: 2 pages · 12 figures · 11 tables
Abstract

Deep learning methods for pansharpening have advanced rapidly, yet models pretrained on data from a specific sensor often generalize poorly to data from other sensors. Existing approaches to this cross-sensor degradation either retrain the model or rely on zero-shot methods, but these are highly time-consuming and may require extra training data. To address these challenges, our method first performs a modular decomposition of deep learning-based pansharpening models, revealing a general yet critical interface where high-dimensional fused features begin mapping to the channel space of the final image. A Feature Tailor is then integrated at this interface to address cross-sensor degradation at the feature level, and is trained efficiently with physics-aware unsupervised losses. Moreover, our method operates in a patch-wise manner, training on a subset of patches and performing parallel inference on all patches to boost efficiency. Our method offers two key advantages: (1) Improved Generalization Ability: it significantly enhances performance in cross-sensor cases. (2) Low Generalization Cost: it achieves sub-second training and inference, requiring only partial test inputs and no external data, whereas prior methods often take minutes or even hours. Experiments on real-world data from multiple datasets demonstrate that our method achieves state-of-the-art quality and efficiency in tackling cross-sensor degradation. For example, it completes training and inference for a 512×512×8 image within 0.2 seconds and for a 4000×4000×8 image within 3 seconds at the fastest setting on a commonly used RTX 3090 GPU, over 100 times faster than zero-shot methods.
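
As a rough illustration of the idea described in the abstract, the sketch below shows a lightweight residual adapter (a "Feature Tailor") inserted between a frozen backbone's fused-feature stage and its channel-mapping head, trained on a few test patches with unsupervised spectral/spatial consistency terms. All module names, signatures, and loss choices here are hypothetical stand-ins under our own assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a residual "Feature Tailor" adapter
# placed where high-dimensional fused features are mapped to the output bands,
# trained patch-wise with unsupervised consistency losses while the backbone stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTailor(nn.Module):
    """Lightweight residual adapter applied to fused features before the output head."""

    def __init__(self, channels: int, hidden: int = 32):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Residual correction keeps the pretrained mapping as the default behavior.
        return feats + self.adapt(feats)


def unsupervised_losses(fused, ms_lr, pan, scale=4):
    """Physics-style consistency terms (spectral + spatial), as a stand-in example."""
    # Spectral: the downsampled fused image should match the low-resolution MS input.
    spectral = F.l1_loss(F.avg_pool2d(fused, scale), ms_lr)
    # Spatial: the band-averaged fused image should resemble the PAN intensity.
    spatial = F.l1_loss(fused.mean(dim=1, keepdim=True), pan)
    return spectral + spatial


def adapt_to_new_sensor(backbone_body, backbone_head, ms_lr, pan, steps=50, lr=1e-3):
    """Train only the tailor on test patches; backbone_body and backbone_head are frozen."""
    backbone_head.requires_grad_(False)
    with torch.no_grad():
        feats = backbone_body(ms_lr, pan)        # high-dimensional fused features (hypothetical API)
    tailor = FeatureTailor(feats.shape[1]).to(feats.device)
    opt = torch.optim.Adam(tailor.parameters(), lr=lr)
    for _ in range(steps):
        fused = backbone_head(tailor(feats))     # map tailored features to the image bands
        loss = unsupervised_losses(fused, ms_lr, pan)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tailor
```

In this sketch, only the small adapter is optimized, which is consistent with the sub-second adaptation budget the abstract claims; in practice one would train on a subset of patches and then apply the tailored head to all patches in parallel.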
