


View-Invariant Policy Learning via Zero-Shot Novel View Synthesis

21 February 2025
Stephen Tian
Blake Wulfe
Kyle Sargent
Katherine Liu
Sergey Zakharov
Vitor Campagnolo Guizilini
Jiajun Wu
Abstract

Large-scale visuomotor policy learning is a promising approach toward developing generalizable manipulation systems. Yet, policies that can be deployed on diverse embodiments, environments, and observational modalities remain elusive. In this work, we investigate how knowledge from large-scale visual data of the world may be used to address one axis of variation for generalizable manipulation: observational viewpoint. Specifically, we study single-image novel view synthesis models, which learn 3D-aware scene-level priors by rendering images of the same scene from alternate camera viewpoints given a single input image. For practical application to diverse robotic data, these models must operate zero-shot, performing view synthesis on unseen tasks and environments. We empirically analyze view synthesis models within a simple data-augmentation scheme that we call View Synthesis Augmentation (VISTA) to understand their capabilities for learning viewpoint-invariant policies from single-viewpoint demonstration data. Upon evaluating the robustness of policies trained with our method to out-of-distribution camera viewpoints, we find that they outperform baselines in both simulated and real-world manipulation tasks. Videos and additional visualizations are available at this https URL.
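The augmentation scheme described in the abstract can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the novel view synthesis model is stubbed out with an identity function, and the function names, perturbation scheme, and augmentation probability are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_novel_view(image, delta_pose):
    """Placeholder for a zero-shot, single-image novel view synthesis
    model. A real model would render the scene from the perturbed
    viewpoint; here we return the input unchanged so the sketch runs."""
    return image

def sample_camera_perturbation(max_angle_deg=15.0):
    """Sample a random viewpoint offset (yaw, pitch in degrees).
    The range is an arbitrary illustrative choice."""
    return rng.uniform(-max_angle_deg, max_angle_deg, size=2)

def vista_augment(images, actions, aug_prob=0.5):
    """View Synthesis Augmentation, loosely following the abstract:
    each single-viewpoint demonstration frame is replaced, with some
    probability, by a rendering from an alternate camera viewpoint.
    Actions are left unchanged, so the policy must map many viewpoints
    of the same scene to the same behavior."""
    out = []
    for img in images:
        if rng.random() < aug_prob:
            img = synthesize_novel_view(img, sample_camera_perturbation())
        out.append(img)
    return np.stack(out), actions

# Toy single-viewpoint demonstration batch: 4 frames, 7-DoF actions.
images = np.zeros((4, 64, 64, 3), dtype=np.uint8)
actions = np.zeros((4, 7))
aug_images, aug_actions = vista_augment(images, actions)
print(aug_images.shape)  # (4, 64, 64, 3)
```

The augmented batch would then feed an ordinary behavior-cloning loss; only the observations change, which is what makes the resulting policy viewpoint-invariant.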

@article{tian2025_2409.03685,
  title={View-Invariant Policy Learning via Zero-Shot Novel View Synthesis},
  author={Stephen Tian and Blake Wulfe and Kyle Sargent and Katherine Liu and Sergey Zakharov and Vitor Guizilini and Jiajun Wu},
  journal={arXiv preprint arXiv:2409.03685},
  year={2025}
}