FOCUS - Multi-View Foot Reconstruction From Synthetically Trained Dense Correspondences

10 February 2025
Oliver Boyne, Roberto Cipolla
Abstract

Surface reconstruction from multiple, calibrated images is a challenging task, often requiring a large number of collected images with significant overlap. We look at the specific case of human foot reconstruction. As with previous successful foot reconstruction work, we seek to extract rich per-pixel geometry cues from multi-view RGB images, and fuse these into a final 3D object. Our method, FOCUS, tackles this problem with three main contributions: (i) SynFoot2, an extension of an existing synthetic foot dataset to include a new data type: dense correspondence with the parameterized foot model FIND; (ii) an uncertainty-aware dense correspondence predictor trained on our synthetic dataset; (iii) two methods for reconstructing a 3D surface from dense correspondence predictions: one inspired by Structure-from-Motion, and one optimization-based using the FIND model. We show that our reconstruction achieves state-of-the-art quality in a few-view setting, performs comparably to the state of the art when many views are available, and runs substantially faster. We release our synthetic dataset to the research community. Code is available at: this https URL
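
The Structure-from-Motion-inspired variant is not spelled out in the abstract, but the general recipe it gestures at can be sketched: each view predicts, per pixel, a coordinate on the canonical foot surface plus an uncertainty; pixels from different views that land on the same canonical point are treated as correspondences; and those pixels are triangulated using the known camera calibration. The NumPy sketch below is a minimal illustration of uncertainty-weighted multi-view triangulation under those assumptions. All names (corr_maps, sigma_maps, proj_mats, canonical_pts) are hypothetical, not the authors' released implementation.

import numpy as np

def triangulate_point(proj_mats, pixels, weights=None):
    # Weighted linear (DLT) triangulation of one 3D point from its pixel
    # observations in several calibrated views.
    #   proj_mats : list of 3x4 camera projection matrices
    #   pixels    : list of (u, v) pixel coordinates, one per view
    #   weights   : optional per-view confidences (e.g. 1 / predicted sigma)
    rows = []
    for i, (P, (u, v)) in enumerate(zip(proj_mats, pixels)):
        w = 1.0 if weights is None else weights[i]
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reconstruct_surface(corr_maps, sigma_maps, proj_mats, canonical_pts, tol=0.01):
    # For every canonical surface point, gather the pixel in each view whose
    # predicted correspondence lands closest to it, then triangulate those pixels.
    #   corr_maps  : list of (H, W, 3) per-pixel canonical-coordinate predictions
    #   sigma_maps : list of (H, W) per-pixel uncertainty predictions
    surface = []
    for c in canonical_pts:
        obs_px, obs_P, obs_w = [], [], []
        for corr, sigma, P in zip(corr_maps, sigma_maps, proj_mats):
            d = np.linalg.norm(corr - c, axis=-1)           # distance to c per pixel
            v, u = np.unravel_index(np.argmin(d), d.shape)  # best-matching pixel
            if d[v, u] < tol:                               # accept only close matches
                obs_px.append((u, v))
                obs_P.append(P)
                obs_w.append(1.0 / (sigma[v, u] + 1e-6))    # down-weight uncertain pixels
        if len(obs_px) >= 2:                                # need at least two views
            surface.append(triangulate_point(obs_P, obs_px, obs_w))
    return np.array(surface)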

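The optimization-based variant can likewise only be sketched from the abstract: the FIND model's latent parameters are adjusted so that its deformed surface reprojects onto the pixels whose predicted correspondences point at each vertex. In the PyTorch sketch below, find_model is a hypothetical callable, and the latent sizes, observation format, and loss are assumptions; it illustrates the general idea of fitting a parametric foot model to dense correspondences, not the paper's actual objective.

import torch

def fit_find_to_correspondences(find_model, verts_canonical, obs, cameras,
                                n_iters=200, lr=1e-2):
    # Fit latent FIND parameters so the deformed foot reprojects onto the pixels
    # whose predicted correspondence matches each canonical vertex.
    #   find_model      : hypothetical callable mapping latents to deformed (V, 3) vertices
    #   verts_canonical : (V, 3) canonical template vertices
    #   obs             : per view, a tuple (vert_idx, pixel_uv, weight)
    #   cameras         : per view, a 3x4 projection matrix (torch tensor)
    shape = torch.zeros(32, requires_grad=True)   # assumed latent dimensionality
    pose = torch.zeros(32, requires_grad=True)
    opt = torch.optim.Adam([shape, pose], lr=lr)

    for _ in range(n_iters):
        opt.zero_grad()
        verts = find_model(verts_canonical, shape, pose)   # deformed (V, 3) vertices
        loss = verts.new_zeros(())
        for (idx, uv, w), P in zip(obs, cameras):
            v_h = torch.cat([verts[idx], torch.ones(len(idx), 1)], dim=1)  # homogeneous
            proj = v_h @ P.T
            uv_pred = proj[:, :2] / proj[:, 2:3]           # perspective divide
            loss = loss + (w[:, None] * (uv_pred - uv) ** 2).sum()
        loss.backward()
        opt.step()
    return shape.detach(), pose.detach()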
@article{boyne2025_2502.06367,
  title={FOCUS - Multi-View Foot Reconstruction From Synthetically Trained Dense Correspondences},
  author={Oliver Boyne and Roberto Cipolla},
  journal={arXiv preprint arXiv:2502.06367},
  year={2025}
}