ResearchTrend.AI
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy (arXiv:2406.20095)

28 June 2024
Xiang Li
Cristina Mata
J. Park
Kumara Kahatapitiya
Yoo Sung Jang
Jinghuan Shang
Kanchana Ranasinghe
R. Burgert
Mu Cai
Yong Jae Lee
Michael S. Ryoo
    LM&Ro

Papers citing "LLaRA: Supercharging Robot Learning Data for Vision-Language Policy"

26 / 26 papers shown
$π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization
Physical Intelligence
Kevin Black
Noah Brown
James Darpinian
Karan Dhabalia
...
Homer Walke
Anna Walling
Haohuan Wang
Lili Yu
Ury Zhilinsky
LM&Ro
VLM
22 Apr 2025
Towards Fast, Memory-based and Data-Efficient Vision-Language Policy
Haoxuan Li
Sixu Yan
Y. Li
Xinggang Wang
LM&Ro
13 Mar 2025
RoboDesign1M: A Large-scale Dataset for Robot Design Understanding
T. H. Le
T. H. Nguyen
Quang-Dieu Tran
Quang Minh Nguyen
Baoru Huang
Hoan Nguyen
M. Vu
Tung D. Ta
A. Nguyen
3DV
09 Mar 2025
Teaching Metric Distance to Autoregressive Multimodal Foundational Models
Jiwan Chung
Saejin Kim
Yongrae Jo
J. Park
Dongjun Min
Youngjae Yu
04 Mar 2025
MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs
Jiarui Zhang
Mahyar Khayatkhoei
P. Chhikara
Filip Ilievski
LRM
24 Feb 2025
Pre-training Auto-regressive Robotic Models with 4D Representations
Dantong Niu
Yuvan Sharma
Haoru Xue
Giscard Biamby
Junyi Zhang
Ziteng Ji
Trevor Darrell
Roei Herzig
18 Feb 2025
Magma: A Foundation Model for Multimodal AI Agents
Jianwei Yang
Reuben Tan
Qianhui Wu
Ruijie Zheng
Baolin Peng
...
Seonghyeon Ye
Joel Jang
Yuquan Deng
Lars Liden
Jianfeng Gao
VLM
AI4TS
18 Feb 2025
Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics
Taowen Wang
Dongfang Liu
James Liang
Wenhao Yang
Qifan Wang
Cheng Han
Jiebo Luo
Ruixiang Tang
AAML
18 Nov 2024
A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM
ByungOk Han
Jaehong Kim
Jinhyeok Jang
21 Oct 2024
In-Context Learning Enables Robot Action Prediction in LLMs
Yida Yin
Zekai Wang
Yuvan Sharma
Dantong Niu
Trevor Darrell
Roei Herzig
LM&Ro
16 Oct 2024
Latent Action Pretraining from Videos
Seonghyeon Ye
Joel Jang
Byeongguk Jeon
Sejune Joo
Jianwei Yang
...
Kimin Lee
Jianfeng Gao
Luke Zettlemoyer
Dieter Fox
Minjoon Seo
15 Oct 2024
LADEV: A Language-Driven Testing and Evaluation Platform for Vision-Language-Action Models in Robotic Manipulation
Zhijie Wang
Zhehua Zhou
Jiayang Song
Yuheng Huang
Zhan Shu
Lei Ma
07 Oct 2024
AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation
Jiafei Duan
Wilbert Pumacay
Nishanth Kumar
Yi Ru Wang
Shulin Tian
Wentao Yuan
Ranjay Krishna
Dieter Fox
Ajay Mandlekar
Yijie Guo
VLM
LRM
01 Oct 2024
Discrete Policy: Learning Disentangled Action Space for Multi-Task Robotic Manipulation
Kun Wu
Yichen Zhu
Jinming Li
Junjie Wen
Ning Liu
Zhiyuan Xu
Qinru Qiu
27 Sep 2024
CLSP: High-Fidelity Contrastive Language-State Pre-training for Agent State Representation
Fuxian Huang
Qi Zhang
Shaopeng Zhai
Jie Wang
Tianyi Zhang
Haoran Zhang
Ming Zhou
Yu Liu
Yu Qiao
CLIP
AI4TS
24 Sep 2024
Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models
Hao Cheng
Erjia Xiao
Chengyuan Yu
Zhao Yao
Jiahang Cao
...
Jiaxu Wang
Mengshu Sun
Kaidi Xu
Jindong Gu
Renjing Xu
AAML
20 Sep 2024
TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation
Junjie Wen
Y. X. Zhu
Jinming Li
Minjie Zhu
Kun Wu
...
Ran Cheng
Chaomin Shen
Yaxin Peng
Feifei Feng
Jian Tang
LM&Ro
19 Sep 2024
VLATest: Testing and Evaluating Vision-Language-Action Models for Robotic Manipulation
Zhijie Wang
Zhehua Zhou
Jiayang Song
Yuheng Huang
Zhan Shu
Lei Ma
LM&Ro
19 Sep 2024
Theia: Distilling Diverse Vision Foundation Models for Robot Learning
Jinghuan Shang
Karl Schmeckpeper
Brandon B. May
M. Minniti
Tarik Kelestemur
David Watkins
Laura Herlant
VLM
29 Jul 2024
RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics
Wentao Yuan
Jiafei Duan
Valts Blukis
Wilbert Pumacay
Ranjay Krishna
Adithyavairavan Murali
Arsalan Mousavian
Dieter Fox
LM&Ro
15 Jun 2024
OpenVLA: An Open-Source Vision-Language-Action Model
Moo Jin Kim
Karl Pertsch
Siddharth Karamcheti
Ted Xiao
Ashwin Balakrishna
...
Russ Tedrake
Dorsa Sadigh
Sergey Levine
Percy Liang
Chelsea Finn
LM&Ro
VLM
13 Jun 2024
Understanding Long Videos with Multimodal Language Models
Kanchana Ranasinghe
Xiang Li
Kumara Kahatapitiya
Michael S. Ryoo
25 Mar 2024
3D-VLA: A 3D Vision-Language-Action Generative World Model
Haoyu Zhen
Xiaowen Qiu
Peihao Chen
Jincheng Yang
Xin Yan
Yilun Du
Yining Hong
Chuang Gan
LM&Ro
VGen
PINN
14 Mar 2024
MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting
Fangchen Liu
Kuan Fang
Pieter Abbeel
Sergey Levine
LM&Ro
05 Mar 2024
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations
Jianren Wang
Sudeep Dasari
M. K. Srirama
Shubham Tulsiani
Abhi Gupta
SSL
14 Mar 2023
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
Ishika Singh
Valts Blukis
Arsalan Mousavian
Ankit Goyal
Danfei Xu
Jonathan Tremblay
D. Fox
Jesse Thomason
Animesh Garg
LM&Ro
LLMAG
22 Sep 2022