Human Demonstrations are Generalizable Knowledge for Robots
arXiv: 2312.02419
5 December 2023
Te Cui
Guangyan Chen
Tianxing Zhou
Zicai Peng
Mengxiao Hu
Haoyang Lu
Haizhou Li
Meiling Wang
Yi Yang
Yufeng Yue
LM&Ro
Papers citing "Human Demonstrations are Generalizable Knowledge for Robots" (8 papers shown)
1. VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions
   Guanyan Chen, M. Wang, Te Cui, Yao Mu, Haoyang Lu, ..., Mengxiao Hu, Haizhou Li, Y. Li, Yi Yang, Yufeng Yue
   VLM · 28 Oct 2024

2. QuasiSim: Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer
   Xueyi Liu, Kangbo Lyu, Jieqiong Zhang, Tao Du, Li Yi
   11 Apr 2024

3. GAgent: An Adaptive Rigid-Soft Gripping Agent with Vision Language Models for Complex Lighting Environments
   Zhuowei Li, Miao Zhang, Xiaotian Lin, Meng Yin, Shuai Lu, Xueqian Wang
   16 Mar 2024

4. Learning by Watching: A Review of Video-based Learning Approaches for Robot Manipulation
   Chrisantus Eze, Christopher Crick
   SSL · 11 Feb 2024

5. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
   Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, D. Fox, Jesse Thomason, Animesh Garg
   LM&Ro · LLMAG · 22 Sep 2022

6. Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation
   Mohit Shridhar, Lucas Manuelli, D. Fox
   LM&Ro · 12 Sep 2022

7. DexMV: Imitation Learning for Dexterous Manipulation from Human Videos
   Yuzhe Qin, Yueh-hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, Xiaolong Wang
   12 Aug 2021

8. Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
   Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth Sinha, Animesh Garg
   SSL · 18 Jan 2021