
Is Pre-training Applicable to the Decoder for Dense Prediction?

Abstract

Pre-trained encoders are widely employed in dense prediction tasks for their ability to effectively extract visual features from images. The decoder subsequently processes these features to generate pixel-level predictions. However, due to structural differences and variations in input data, only encoders benefit from pre-learned representations from vision benchmarks such as image classification and self-supervised learning, while decoders are typically trained from scratch. In this paper, we introduce ×Net, which facilitates a "pre-trained encoder × pre-trained decoder" collaboration through three innovative designs. ×Net enables the direct use of pre-trained models within the decoder, integrating pre-learned representations into the decoding process to enhance performance on dense prediction tasks. By simply coupling a pre-trained encoder with a pre-trained decoder, ×Net distinguishes itself as a highly promising approach. Remarkably, it achieves this without relying on decoding-specific structures or task-specific algorithms. Despite its streamlined design, ×Net outperforms advanced methods on tasks such as monocular depth estimation and semantic segmentation, achieving state-of-the-art results, particularly in monocular depth estimation.
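The abstract only sketches the idea of coupling a pre-trained encoder with a pre-trained decoder; the sketch below is a minimal, hypothetical illustration of that coupling (it is not the authors' ×Net code, and the choice of backbones, the channel adapter, and the prediction head are assumptions made purely for demonstration).

```python
# Hypothetical sketch of "pre-trained encoder x pre-trained decoder" coupling for
# dense prediction. Backbones, adapter, and head are illustrative choices only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as tvm


class EncoderTimesDecoder(nn.Module):
    def __init__(self, num_out_channels: int = 1):
        super().__init__()
        # Pre-trained encoder: standard practice, extracts image features.
        enc = tvm.resnet50(weights=tvm.ResNet50_Weights.IMAGENET1K_V2)
        self.encoder = nn.Sequential(*list(enc.children())[:-2])  # drop pool/fc
        # Pre-trained decoder: a second ImageNet-pre-trained backbone whose
        # residual stages are reused on the decoding path instead of training
        # the decoder from scratch (the key idea the abstract describes).
        dec = tvm.resnet18(weights=tvm.ResNet18_Weights.IMAGENET1K_V1)
        self.decoder = nn.Sequential(dec.layer1, dec.layer2)
        # Adapter matching encoder output channels to decoder input channels.
        self.adapter = nn.Conv2d(2048, 64, kernel_size=1)
        # Task head, e.g. a single channel for monocular depth estimation.
        self.head = nn.Conv2d(128, num_out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                      # B x 2048 x H/32 x W/32
        feats = self.adapter(feats)                  # B x 64 x H/32 x W/32
        feats = F.interpolate(feats, scale_factor=4, mode="bilinear",
                              align_corners=False)   # recover resolution
        out = self.decoder(feats)                    # pre-learned decoding features
        out = self.head(out)
        return F.interpolate(out, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)    # pixel-level prediction


if __name__ == "__main__":
    model = EncoderTimesDecoder(num_out_channels=1)
    pred = model(torch.randn(2, 3, 224, 224))
    print(pred.shape)  # torch.Size([2, 1, 224, 224])
```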

@article{ning2025_2503.07637,
  title={Is Pre-training Applicable to the Decoder for Dense Prediction?},
  author={Chao Ning and Wanshui Gan and Weihao Xuan and Naoto Yokoya},
  journal={arXiv preprint arXiv:2503.07637},
  year={2025}
}