BoxDreamer: Dreaming Box Corners for Generalizable Object Pose Estimation

10 April 2025
Yuanhong Yu
Xingyi He
Chen Zhao
Junhao Yu
Jiaqi Yang
Ruizhen Hu
Yujun Shen
Xing Zhu
Xiaowei Zhou
Sida Peng
Abstract

This paper presents a generalizable RGB-based approach for object pose estimation, specifically designed to address challenges in sparse-view settings. While existing methods can estimate the poses of unseen objects, their generalization ability remains limited in scenarios involving occlusions and sparse reference views, restricting their real-world applicability. To overcome these limitations, we introduce the corner points of the object bounding box as an intermediate representation of the object pose. The 3D object corners can be reliably recovered from sparse input views, while the 2D corner points in the target view are estimated through a novel reference-based point synthesizer, which works well even under occlusion. As semantic points of the object, the corners naturally establish 2D-3D correspondences for object pose estimation with a PnP algorithm. Extensive experiments on the YCB-Video and Occluded-LINEMOD datasets show that our approach outperforms state-of-the-art methods, highlighting the effectiveness of the proposed representation and significantly enhancing the generalization capabilities of object pose estimation, which is crucial for real-world applications.
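The abstract's final step, turning corner correspondences into a pose with PnP, can be sketched with a standard solver. The snippet below is a minimal illustration only: the 2D corner coordinates, camera intrinsics, and box dimensions are placeholder assumptions, and the paper's reference-based point synthesizer (which would predict the 2D corners) is not reproduced.

```python
# Minimal sketch of the PnP step described in the abstract: given the 8 3D
# bounding-box corners recovered from sparse reference views and the 8 2D
# corner points predicted in the target view, recover the object pose.
# All numeric values below are illustrative assumptions, not from the paper.
import cv2
import numpy as np

# 3D corners of an axis-aligned box with half-extents (dx, dy, dz) in the object frame.
dx, dy, dz = 0.05, 0.04, 0.03  # metres, illustrative
corners_3d = np.array(
    [[sx * dx, sy * dy, sz * dz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
    dtype=np.float64,
)

# 2D corner locations in the target image (pixels); in BoxDreamer these would
# come from the learned reference-based point synthesizer.
corners_2d = np.array(
    [[412.3, 285.1], [455.8, 288.6], [409.7, 341.2], [452.9, 344.8],
     [381.5, 270.4], [424.1, 273.0], [379.2, 325.7], [421.6, 328.3]],
    dtype=np.float64,
)

# Assumed pinhole intrinsics (fx, fy, cx, cy).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Solve for the object-to-camera pose from the 8 corner correspondences.
ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, None, flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated pose
print("estimated rotation:\n", R, "\ntranslation:", tvec.ravel())
```

In practice a RANSAC variant (e.g. `cv2.solvePnPRansac`) would add robustness to outlier corner predictions; the plain solver is used here only to keep the sketch short.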

@article{yu2025_2504.07955,
  title={BoxDreamer: Dreaming Box Corners for Generalizable Object Pose Estimation},
  author={Yuanhong Yu and Xingyi He and Chen Zhao and Junhao Yu and Jiaqi Yang and Ruizhen Hu and Yujun Shen and Xing Zhu and Xiaowei Zhou and Sida Peng},
  journal={arXiv preprint arXiv:2504.07955},
  year={2025}
}