Consistency-aware Self-Training for Iterative-based Stereo Matching

31 March 2025
Jingyi Zhou
Peng Ye
Haoyu Zhang
Jiakang Yuan
Rao Qiang
Liu YangChenXu
Wu Cailin
Feng Xu
Tao Chen
Abstract

Iterative-based methods have become mainstream in stereo matching due to their high performance. However, these methods rely heavily on labeled data and struggle with unlabeled real-world data. To this end, we propose the first consistency-aware self-training framework for iterative-based stereo matching, which leverages real-world unlabeled data in a teacher-student manner. We first observe that regions with larger errors tend to exhibit more pronounced oscillation during model training. Based on this, we introduce a novel consistency-aware soft filtering module to evaluate the reliability of teacher-predicted pseudo-labels; it consists of a multi-resolution prediction consistency filter and an iterative prediction consistency filter, which assess the prediction fluctuations across multiple resolutions and across iterative optimization steps, respectively. Further, we introduce a consistency-aware soft-weighted loss that adjusts the weight of each pseudo-label accordingly, relieving the error accumulation and performance degradation caused by incorrect pseudo-labels. Extensive experiments demonstrate that our method improves the performance of various iterative-based stereo matching approaches across diverse scenarios. In particular, it achieves further gains over current state-of-the-art (SOTA) methods on several benchmark datasets.
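
The abstract describes two consistency signals (fluctuation across refinement iterations and disagreement across resolutions) that are turned into soft per-pixel weights on pseudo-labels. Below is a minimal PyTorch-style sketch of how such weights might be computed and applied; the function names, the standard-deviation consistency measure, the exponential weighting, and the hyperparameters alpha and beta are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def iterative_consistency_weight(disp_iters, alpha=1.0):
        # disp_iters: (T, B, H, W) teacher disparity after each of T refinement steps.
        # Per-pixel standard deviation measures oscillation across iterations;
        # stable pixels get a weight near 1, oscillating pixels near 0.
        # (Variance-based measure and exponential mapping are assumptions.)
        fluct = disp_iters.std(dim=0)              # (B, H, W)
        return torch.exp(-alpha * fluct)

    def multires_consistency_weight(disp_low, disp_full, beta=1.0):
        # disp_low: (B, h, w) disparity predicted at reduced resolution.
        # disp_full: (B, H, W) disparity predicted at full resolution.
        up = F.interpolate(disp_low.unsqueeze(1), size=disp_full.shape[-2:],
                           mode="bilinear", align_corners=True).squeeze(1)
        scale = disp_full.shape[-1] / disp_low.shape[-1]  # disparity scales with width
        return torch.exp(-beta * (up * scale - disp_full).abs())

    def soft_weighted_loss(student_disp, pseudo_disp, w_iter, w_res):
        # Consistency-aware soft-weighted L1 loss: unreliable pixels contribute less,
        # rather than being hard-filtered out.
        w = w_iter * w_res
        return (w * (student_disp - pseudo_disp).abs()).mean()

    # Hypothetical usage, assuming a teacher that exposes per-iteration predictions:
    # disp_iters = teacher(left, right)             # (T, B, H, W)
    # w_iter = iterative_consistency_weight(disp_iters)
    # w_res  = multires_consistency_weight(disp_low, disp_iters[-1])
    # loss   = soft_weighted_loss(student(left, right), disp_iters[-1], w_iter, w_res)

In this sketch, pixels whose teacher predictions oscillate across iterations or disagree across resolutions receive weights near zero, mirroring the abstract's goal of down-weighting unreliable pseudo-labels to limit error accumulation.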

@article{zhou2025_2503.23747,
  title={Consistency-aware Self-Training for Iterative-based Stereo Matching},
  author={Jingyi Zhou and Peng Ye and Haoyu Zhang and Jiakang Yuan and Rao Qiang and Liu YangChenXu and Wu Cailin and Feng Xu and Tao Chen},
  journal={arXiv preprint arXiv:2503.23747},
  year={2025}
}