
ACT-R: Adaptive Camera Trajectories for 3D Reconstruction from Single Image

Abstract

We introduce adaptive view planning into multi-view synthesis, aiming to improve both occlusion revelation and 3D consistency for single-view 3D reconstruction. Instead of generating an unordered set of views independently or simultaneously, we generate a sequence of views, leveraging temporal consistency to enhance 3D coherence. Crucially, our view sequence does not follow a predetermined camera setup. Instead, we compute an adaptive camera trajectory (ACT), specifically an orbit of camera views, which maximizes the visibility of occluded regions of the 3D object to be reconstructed. Once the best orbit is found, we feed it to a video diffusion model to generate novel views along the orbit, which, in turn, are passed to a multi-view 3D reconstruction model to obtain the final reconstruction. Our multi-view synthesis pipeline is efficient since it involves no run-time training or optimization, only forward inference with pre-trained models for occlusion analysis and multi-view synthesis. Our method predicts camera trajectories that effectively reveal occlusions and produce consistent novel views, significantly improving 3D reconstruction over the state of the art on the unseen GSO dataset, both quantitatively and qualitatively.
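To make the orbit-selection idea concrete, here is a minimal toy sketch, not the paper's actual algorithm: the object is approximated by a point cloud on a sphere, a point is treated as visible from a view when its outward normal opposes the view ray, and each candidate orbit (parameterized by elevation) is scored by the fraction of input-occluded points it reveals. All function names, the visibility test, and the scoring rule are illustrative assumptions.

```python
# Toy sketch (NOT the paper's method): pick a camera orbit that maximizes
# visibility of regions occluded in the single input view.
import math

def view_dir(azimuth_deg, elevation_deg):
    """Unit vector pointing from the camera toward the origin."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (-math.cos(el) * math.cos(az),
            -math.cos(el) * math.sin(az),
            -math.sin(el))

def visible(normal, vdir):
    # A surface point faces the camera when its normal opposes the view ray.
    return sum(n * -v for n, v in zip(normal, vdir)) > 0.0

def sphere_points(n=200):
    # Deterministic golden-angle spiral on the unit sphere (normal == position).
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = i * 2.399963229728653
        pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts

def best_orbit(points, input_view=(0.0, 0.0),
               elevations=(-60, -30, 0, 30, 60), n_views=12):
    """Score each candidate orbit by how many input-occluded points it reveals."""
    occluded = [p for p in points if not visible(p, view_dir(*input_view))]
    best = None
    for el in elevations:
        views = [view_dir(360.0 * k / n_views, el) for k in range(n_views)]
        revealed = sum(any(visible(p, v) for v in views) for p in occluded)
        score = revealed / max(1, len(occluded))
        if best is None or score > best[1]:
            best = (el, score)
    return best  # (chosen orbit elevation, fraction of occlusions revealed)
```

For this symmetric toy object the equatorial orbit wins, since it sweeps every azimuth; the paper's contribution lies in making this choice adaptively for real objects, where occlusion analysis on the input view determines which orbit reveals the most hidden geometry.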

@article{wang2025_2505.08239,
  title={ACT-R: Adaptive Camera Trajectories for 3D Reconstruction from Single Image},
  author={Yizhi Wang and Mingrui Zhao and Ali Mahdavi-Amiri and Hao Zhang},
  journal={arXiv preprint arXiv:2505.08239},
  year={2025}
}