Dense Policy: Bidirectional Autoregressive Learning of Actions

17 March 2025
Yue Su
Xinyu Zhan
Hongjie Fang
Han Xue
Hao-Shu Fang
Yong-Lu Li
Cewu Lu
Lixin Yang
Abstract

Mainstream visuomotor policies predominantly rely on generative models for holistic action prediction, while current autoregressive policies, predicting the next token or chunk, have shown suboptimal results. This motivates a search for more effective learning methods to unleash the potential of autoregressive policies for robotic manipulation. This paper introduces a bidirectionally expanded learning approach, termed Dense Policy, to establish a new paradigm for autoregressive policies in action prediction. It employs a lightweight encoder-only architecture to iteratively unfold the action sequence from an initial single frame into the target sequence in a coarse-to-fine manner with logarithmic-time inference. Extensive experiments validate that our dense policy has superior autoregressive learning capabilities and can surpass existing holistic generative policies. Our policy, example data, and training code will be publicly available upon publication. Project page: this https URL.
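To make the coarse-to-fine expansion concrete, the following is a minimal Python sketch of the inference loop the abstract describes: the action sequence starts from a single frame and is repeatedly doubled and re-predicted until the target horizon is reached, so the number of model calls grows logarithmically in the horizon. The function names (upsample, refine, dense_policy_inference) and the linear-interpolation upsampling are illustrative assumptions, not the authors' implementation; the real refinement step would be the paper's encoder-only model conditioned on observations.

# Minimal sketch of coarse-to-fine bidirectional action expansion.
# Assumptions (not from the paper's code): each round doubles the sequence by
# linear interpolation and then re-predicts all steps with a learned model;
# the refine() placeholder stands in for the encoder-only architecture.
import math
import numpy as np

def upsample(actions: np.ndarray) -> np.ndarray:
    """Linearly interpolate a (T, D) action sequence to length 2*T."""
    T, D = actions.shape
    old_t = np.linspace(0.0, 1.0, T)
    new_t = np.linspace(0.0, 1.0, 2 * T)
    return np.stack([np.interp(new_t, old_t, actions[:, d]) for d in range(D)], axis=1)

def refine(actions: np.ndarray, obs_feat: np.ndarray) -> np.ndarray:
    """Placeholder for the encoder-only refinement model (identity here)."""
    return actions

def dense_policy_inference(initial_action: np.ndarray, obs_feat: np.ndarray, horizon: int) -> np.ndarray:
    """Expand a single action frame to `horizon` steps in O(log horizon) rounds."""
    actions = initial_action[None, :]            # shape (1, D): the initial single frame
    for _ in range(int(math.ceil(math.log2(horizon)))):
        actions = upsample(actions)              # double the temporal resolution
        actions = refine(actions, obs_feat)      # jointly re-predict all steps
    return actions[:horizon]

# Example: expand one 7-DoF action into a 16-step trajectory (4 rounds).
traj = dense_policy_inference(np.zeros(7), np.zeros(128), horizon=16)
print(traj.shape)  # (16, 7)

Run as written, this prints (16, 7): four doubling rounds take the sequence from 1 to 16 steps, which is the logarithmic-time behavior claimed in the abstract.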

@article{su2025_2503.13217,
  title={Dense Policy: Bidirectional Autoregressive Learning of Actions},
  author={Yue Su and Xinyu Zhan and Hongjie Fang and Han Xue and Hao-Shu Fang and Yong-Lu Li and Cewu Lu and Lixin Yang},
  journal={arXiv preprint arXiv:2503.13217},
  year={2025}
}