arXiv: 2411.19650
CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation
29 November 2024
Qixiu Li, Yaobo Liang, Zeyu Wang, Lin Luo, Xi Chen, Mozheng Liao, Fangyun Wei, Yu Deng, Sicheng Xu, Y. Zhang, Xiaofan Wang, Bei Liu, Jianlong Fu, Jianmin Bao, Dong Chen, Yuanchun Shi, Jiaolong Yang, B. Guo
LM&Ro
Papers citing "CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation"
4 / 4 papers shown
PRISM: Projection-based Reward Integration for Scene-Aware Real-to-Sim-to-Real Transfer with Few Demonstrations
Haowen Sun
H. Wang
Chengzhong Ma
Shaolong Zhang
Jiawei Ye
Xingyu Chen
Xuguang Lan
OffRL
53
1
0
29 Apr 2025
A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
Rongtao Xu, J. Zhang, Minghao Guo, Youpeng Wen, H. Yang, ..., Liqiong Wang, Yuxuan Kuang, Meng Cao, Feng Zheng, Xiaodan Liang
17 Apr 2025
HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model
Jiaming Liu, Hao Chen, Pengju An, Zhuoyang Liu, Renrui Zhang, ..., Chengkai Hou, Mengdi Zhao, KC alex Zhou, Pheng-Ann Heng, S. Zhang
13 Mar 2025
RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation
Kun Wu, Chengkai Hou, Jiaming Liu, Zhengping Che, Xiaozhu Ju, ..., Zhenyu Wang, Pengju An, Siyuan Qian, S. Zhang, Jian Tang
LM&Ro
17 Feb 2025