ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow

2 May 2025
Changhe Chen
Quantao Yang
Xiaohao Xu
Nima Fazeli
Olov Andersson
Abstract

One of the central challenges preventing robots from acquiring complex manipulation skills is the prohibitive cost of collecting large-scale robot demonstrations. In contrast, humans are able to learn efficiently by watching others interact with their environment. To bridge this gap, we introduce semantic action flow as a core intermediate representation capturing the essential spatio-temporal manipulator-object interactions, invariant to superficial visual differences. We present ViSA-Flow, a framework that learns this representation self-supervised from unlabeled large-scale video data. First, a generative model is pre-trained on semantic action flows automatically extracted from large-scale human-object interaction video data, learning a robust prior over manipulation structure. Second, this prior is efficiently adapted to a target robot by fine-tuning on a small set of robot demonstrations processed through the same semantic abstraction pipeline. We demonstrate through extensive experiments on the CALVIN benchmark and real-world tasks that ViSA-Flow achieves state-of-the-art performance, particularly in low-data regimes, outperforming prior methods by effectively transferring knowledge from human video observation to robotic execution. Videos are available at this https URL.
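The abstract describes a two-stage recipe: a generative prior over semantic action-flow sequences is pre-trained self-supervised on human-object interaction videos, then adapted to a robot with a small set of demonstrations passed through the same abstraction pipeline. The sketch below illustrates that training structure only; the module names, the GRU/MLP architectures, the next-step prediction and behavior-cloning losses, the tensor dimensions, and the dummy tensors standing in for the flow-extraction step are all assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's components; names, shapes, and
# losses are illustrative assumptions, not the ViSA-Flow implementation.

FLOW_DIM = 64    # assumed dimensionality of a semantic action-flow token
ACTION_DIM = 7   # assumed robot action dimensionality (e.g. 6-DoF pose + gripper)

class FlowPrior(nn.Module):
    """Generative prior over semantic action-flow sequences (assumed GRU)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FLOW_DIM, 128, batch_first=True)
        self.head = nn.Linear(128, FLOW_DIM)

    def forward(self, flow_seq):                  # (B, T, FLOW_DIM)
        h, _ = self.rnn(flow_seq)
        return self.head(h)                       # predicted next flow step

class PolicyHead(nn.Module):
    """Maps prior features to robot actions during fine-tuning."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(FLOW_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, ACTION_DIM))

    def forward(self, flow_feat):
        return self.mlp(flow_feat)

def pretrain_on_human_videos(prior, flow_batches, epochs=1):
    """Stage 1: self-supervised next-step prediction on human-video flows."""
    opt = torch.optim.Adam(prior.parameters(), lr=1e-4)
    for _ in range(epochs):
        for flows in flow_batches:                # flows: (B, T, FLOW_DIM)
            pred = prior(flows[:, :-1])
            loss = nn.functional.mse_loss(pred, flows[:, 1:])
            opt.zero_grad()
            loss.backward()
            opt.step()

def finetune_on_robot_demos(prior, policy, demo_batches, epochs=1):
    """Stage 2: adapt the prior plus a policy head on a few robot demos."""
    opt = torch.optim.Adam(list(prior.parameters()) + list(policy.parameters()),
                           lr=1e-5)
    for _ in range(epochs):
        for flows, actions in demo_batches:       # actions: (B, T, ACTION_DIM)
            feats = prior(flows[:, :-1])
            loss = nn.functional.mse_loss(policy(feats), actions[:, 1:])
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    prior, policy = FlowPrior(), PolicyHead()
    # Dummy tensors in place of flows extracted by the semantic abstraction pipeline.
    human_flows = [torch.randn(8, 16, FLOW_DIM)]
    robot_demos = [(torch.randn(4, 16, FLOW_DIM), torch.randn(4, 16, ACTION_DIM))]
    pretrain_on_human_videos(prior, human_flows)
    finetune_on_robot_demos(prior, policy, robot_demos)

In the actual method, the flow sequences in both stages would come from the same semantic abstraction pipeline applied to videos, which is what lets the prior learned from human data transfer to the robot in low-data regimes.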

@article{chen2025_2505.01288,
  title={ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow},
  author={Changhe Chen and Quantao Yang and Xiaohao Xu and Nima Fazeli and Olov Andersson},
  journal={arXiv preprint arXiv:2505.01288},
  year={2025}
}