Improving Optical Flow and Stereo Depth Estimation by Leveraging Uncertainty-Based Learning Difficulties

31 May 2025
Jisoo Jeong, Hong Cai, Jamie Menjay Lin, Fatih Porikli
Main: 8 pages · 7 figures · 8 tables · Bibliography: 2 pages
Abstract

Conventional training for optical flow and stereo depth models typically employs a uniform loss function across all pixels. However, this one-size-fits-all approach often overlooks the significant variations in learning difficulty among individual pixels and contextual regions. This paper investigates uncertainty-based confidence maps that capture these spatially varying learning difficulties and introduces tailored solutions to address them. We first present the Difficulty Balancing (DB) loss, which utilizes an error-based confidence measure to encourage the network to focus more on challenging pixels and regions. Moreover, we identify that some difficult pixels and regions are affected by occlusions, where matching is inherently ill-posed because no true correspondence exists. To address this, we propose the Occlusion Avoiding (OA) loss, designed to guide the network toward regions deemed confident by cycle consistency, where feature matching is more reliable. By combining the DB and OA losses, we effectively manage various types of challenging pixels and regions during training. Experiments on both optical flow and stereo depth tasks consistently demonstrate significant performance improvements when applying our proposed combination of the DB and OA losses.
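The abstract gives no formulas, so the following is a minimal PyTorch-style sketch of how an error-weighted difficulty term (DB) and a cycle-consistency occlusion mask (OA) might be combined for the optical flow case. Every name here is an assumption: the exponential difficulty weighting, the threshold occ_thresh, and the weights alpha and beta are illustrative choices, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def db_oa_loss(flow_fw, flow_bw, flow_gt, alpha=1.0, beta=1.0, occ_thresh=1.0):
        """Illustrative sketch (not the paper's implementation) of combining
        a Difficulty Balancing (DB) term with an Occlusion Avoiding (OA) term.

        flow_fw, flow_bw: forward/backward flow predictions, shape (B, 2, H, W)
        flow_gt:          ground-truth forward flow,         shape (B, 2, H, W)
        """
        # Per-pixel endpoint error of the forward prediction.
        epe = torch.norm(flow_fw - flow_gt, dim=1)              # (B, H, W)

        # DB (assumed form): up-weight pixels by an error-based difficulty
        # measure so that harder pixels contribute more to the loss.
        difficulty = 1.0 - torch.exp(-epe.detach())             # in [0, 1)
        db_loss = ((1.0 + alpha * difficulty) * epe).mean()

        # OA (assumed form): forward-backward cycle consistency. Warp the
        # backward flow with the forward flow; a large residual suggests
        # occlusion, i.e. no reliable correspondence.
        b, _, h, w = flow_fw.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().to(flow_fw.device)  # (2, H, W)
        coords = grid.unsqueeze(0) + flow_fw                    # target coordinates
        # Normalize coordinates to [-1, 1] for grid_sample.
        coords_norm = torch.stack(
            (2 * coords[:, 0] / (w - 1) - 1, 2 * coords[:, 1] / (h - 1) - 1),
            dim=-1,
        )                                                       # (B, H, W, 2)
        bw_warped = F.grid_sample(flow_bw, coords_norm, align_corners=True)
        cycle_residual = torch.norm(flow_fw + bw_warped, dim=1) # ~0 if consistent
        confident = (cycle_residual < occ_thresh).float()       # 1 = non-occluded

        # Penalize error only where cycle consistency says matching is reliable.
        oa_loss = (confident * epe).sum() / confident.sum().clamp(min=1.0)

        return db_loss + beta * oa_loss

Detaching the difficulty weight keeps gradients flowing only through the endpoint error itself; the analogous stereo formulation would replace the forward/backward flow check with a left/right disparity consistency check.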

@article{jeong2025_2506.00324,
  title={Improving Optical Flow and Stereo Depth Estimation by Leveraging Uncertainty-Based Learning Difficulties},
  author={Jisoo Jeong and Hong Cai and Jamie Menjay Lin and Fatih Porikli},
  journal={arXiv preprint arXiv:2506.00324},
  year={2025}
}