ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

OpenVLA: An Open-Source Vision-Language-Action Model

13 June 2024
Moo Jin Kim
Karl Pertsch
Siddharth Karamcheti
Ted Xiao
Ashwin Balakrishna
Suraj Nair
Rafael Rafailov
Ethan P. Foster
Grace Lam
Pannag R. Sanketi
Quan Vuong
Thomas Kollar
Benjamin Burchfiel
Russ Tedrake
Dorsa Sadigh
Sergey Levine
Percy Liang
Chelsea Finn
    LM&Ro
    VLM

Papers citing "OpenVLA: An Open-Source Vision-Language-Action Model"

43 / 93 papers shown
Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection
Le Yang
Ziwei Zheng
Boxu Chen
Zhengyu Zhao
Chenhao Lin
Chao Shen
VLM
129
3
0
18 Dec 2024
Diffusion-VLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression
Junjie Wen
Minjie Zhu
Y. X. Zhu
Zhibin Tang
Jinming Li
...
Chengmeng Li
Xiaoyu Liu
Yaxin Peng
Chaomin Shen
Feifei Feng
82
10
0
04 Dec 2024
RoboMatrix: A Skill-centric Hierarchical Framework for Scalable Robot Task Planning and Execution in Open-World
Weixin Mao
Weiheng Zhong
Zhou Jiang
Dong Fang
Zhongyue Zhang
...
Fan Jia
Tiancai Wang
Haoqiang Fan
Osamu Yoshie
114
4
0
29 Nov 2024
RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics
Chan Hee Song
Valts Blukis
Jonathan Tremblay
Stephen Tyree
Yu-Chuan Su
Stan Birchfield
75
4
0
25 Nov 2024
Inference-Time Policy Steering through Human Interactions
Yanwei Wang
Lirui Wang
Yilun Du
Balakumar Sundaralingam
Xuning Yang
Yu-Wei Chao
Claudia Pérez-D'Arpino
Dieter Fox
Julie Shah
VGen
93
4
0
25 Nov 2024
Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning
Jiange Yang
Haoyi Zhu
Y. Wang
Gangshan Wu
Tong He
Limin Wang
75
2
0
21 Nov 2024
Few-Shot Task Learning through Inverse Generative Modeling
Aviv Netanyahu
Yilun Du
Antonia Bronars
Jyothish Pari
J. Tenenbaum
Tianmin Shu
Pulkit Agrawal
42
1
0
07 Nov 2024
CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision
Gi-Cheon Kang
Junghyun Kim
Kyuhwan Shim
Jun Ki Lee
Byoung-Tak Zhang
LM&Ro
78
1
1
01 Nov 2024
HOVER: Versatile Neural Whole-Body Controller for Humanoid Robots
Tairan He
Wenli Xiao
Toru Lin
Zhengyi Luo
Zhenjia Xu
...
Changliu Liu
Guanya Shi
Xiaolong Wang
Linxi Fan
Yuke Zhu
23
20
0
28 Oct 2024
MotionGlot: A Multi-Embodied Motion Generation Model
Sudarshan Harithas
Srinath Sridhar
63
1
0
22 Oct 2024
Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance
Mitsuhiko Nakamoto
Oier Mees
Aviral Kumar
Sergey Levine
OffRL
52
9
0
17 Oct 2024
The State of Robot Motion Generation
Kostas E. Bekris
Joe Doerr
Patrick Meng
Sumanth Tangirala
3DV
16
2
0
16 Oct 2024
In-Context Learning Enables Robot Action Prediction in LLMs
Yida Yin
Zekai Wang
Yuvan Sharma
Dantong Niu
Trevor Darrell
Roei Herzig
LM&Ro
34
1
0
16 Oct 2024
Conformalized Interactive Imitation Learning: Handling Expert Shift and Intermittent Feedback
Michelle Zhao
Reid G. Simmons
H. Admoni
Aaditya Ramdas
Andrea Bajcsy
35
4
0
11 Oct 2024
Zero-Shot Offline Imitation Learning via Optimal Transport
Thomas Rupf
Marco Bagatella
Nico Gürtler
Jonas Frey
Georg Martius
OffRL
34
0
0
11 Oct 2024
Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation
Qingwen Bu
Hongyang Li
Li Chen
Jisong Cai
Jia Zeng
Heming Cui
Maoqing Yao
Yu Qiao
26
2
0
10 Oct 2024
Zero-Shot Generalization of Vision-Based RL Without Data Augmentation
S. Batra
Gaurav Sukhatme
OffRL
DRL
16
1
0
09 Oct 2024
Control-oriented Clustering of Visual Latent Representation
Han Qi
Haocheng Yin
Heng Yang
SSL
43
2
0
07 Oct 2024
Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress
Christopher Agia
Rohan Sinha
Jingyun Yang
Zi-ang Cao
Rika Antonova
Marco Pavone
Jeannette Bohg
19
6
0
06 Oct 2024
Autoregressive Action Sequence Learning for Robotic Manipulation
Xinyu Zhang
Yuhan Liu
Haonan Chang
Liam Schramm
Abdeslam Boularias
23
6
0
04 Oct 2024
Effective Tuning Strategies for Generalist Robot Manipulation Policies
Wenbo Zhang
Yang Li
Yanyuan Qiao
Siyuan Huang
Jiajun Liu
Feras Dayoub
Xiao Ma
Lingqiao Liu
11
0
0
02 Oct 2024
Discrete Policy: Learning Disentangled Action Space for Multi-Task Robotic Manipulation
Kun Wu
Yichen Zhu
Jinming Li
Junjie Wen
Ning Liu
Zhiyuan Xu
Qinru Qiu
29
4
0
27 Sep 2024
Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation
Quanting Xie
So Yeon Min
Tianyi Zhang
Kedi Xu
Aarav Bajaj
Ruslan Salakhutdinov
Matthew Johnson-Roberson
Yonatan Bisk
LM&Ro
36
6
0
26 Sep 2024
Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation
Ian Chuang
Andrew Lee
Dechen Gao
M-Mahdi Naddaf-Sh
Iman Soltani
23
6
0
26 Sep 2024
ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models
Sombit Dey
Jan-Nico Zaech
Nikolay Nikolov
Luc Van Gool
Danda Pani Paudel
MoMe
VLM
43
4
0
23 Sep 2024
TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation
Junjie Wen
Y. X. Zhu
Jinming Li
Minjie Zhu
Kun Wu
...
Ran Cheng
Chaomin Shen
Yaxin Peng
Feifei Feng
Jian Tang
LM&Ro
53
41
0
19 Sep 2024
VLATest: Testing and Evaluating Vision-Language-Action Models for Robotic Manipulation
Zhijie Wang
Zhehua Zhou
Jiayang Song
Yuheng Huang
Zhan Shu
Lei Ma
LM&Ro
55
5
0
19 Sep 2024
Embodiment-Agnostic Action Planning via Object-Part Scene Flow
Weiliang Tang
Jia-Hui Pan
Wei Zhan
Jianshu Zhou
Huaxiu Yao
Yun-Hui Liu
M. Tomizuka
Mingyu Ding
Chi-Wing Fu
28
0
0
16 Sep 2024
CoVLA: Comprehensive Vision-Language-Action Dataset for Autonomous Driving
Hidehisa Arai
Keita Miwa
Kento Sasaki
Yu Yamaguchi
Kohei Watanabe
Shunsuke Aoki
Issei Yamamoto
27
9
0
19 Aug 2024
Robotic Control via Embodied Chain-of-Thought Reasoning
Michał Zawalski
William Chen
Karl Pertsch
Oier Mees
Chelsea Finn
Sergey Levine
LRM
LM&Ro
25
49
0
11 Jul 2024
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
Xiang Li
Cristina Mata
J. Park
Kumara Kahatapitiya
Yoo Sung Jang
...
Kanchana Ranasinghe
R. Burgert
Mu Cai
Yong Jae Lee
Michael S. Ryoo
LM&Ro
47
23
0
28 Jun 2024
A Survey on Vision-Language-Action Models for Embodied AI
Yueen Ma
Zixing Song
Yuzheng Zhuang
Jianye Hao
Irwin King
LM&Ro
49
38
0
23 May 2024
Closed Loop Interactive Embodied Reasoning for Robot Manipulation
Michal Nazarczuk
Jan Kristof Behrens
Karla Stepanova
Matej Hoffmann
K. Mikolajczyk
LM&Ro
LRM
36
1
0
23 Apr 2024
Closed-Loop Open-Vocabulary Mobile Manipulation with GPT-4V
Peiyuan Zhi
Zhiyuan Zhang
Muzhi Han
Zeyu Zhang
Zhitian Li
Ziyuan Jiao
Siyuan Huang
LRM
LM&Ro
33
28
0
16 Apr 2024
Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation
Jonathan Yang
Catherine Glossop
Arjun Bhorkar
Dhruv Shah
Quan Vuong
Chelsea Finn
Dorsa Sadigh
Sergey Levine
33
41
0
29 Feb 2024
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
Siddharth Karamcheti
Suraj Nair
Ashwin Balakrishna
Percy Liang
Thomas Kollar
Dorsa Sadigh
MLLM
VLM
54
95
0
12 Feb 2024
Vision-Language Models as Success Detectors
Yuqing Du
Ksenia Konyushkova
Misha Denil
A. Raju
Jessica Landon
Felix Hill
Nando de Freitas
Serkan Cabi
MLLM
LRM
82
76
0
13 Mar 2023
Open-World Object Manipulation using Pre-trained Vision-Language Models
Austin Stone
Ted Xiao
Yao Lu
K. Gopalakrishnan
Kuang-Huei Lee
...
Sean Kirmani
Brianna Zitkovich
F. Xia
Chelsea Finn
Karol Hausman
LM&Ro
136
97
0
02 Mar 2023
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li
Dongxu Li
Silvio Savarese
Steven C. H. Hoi
VLM
MLLM
244
4,186
0
30 Jan 2023
Grounding Language with Visual Affordances over Unstructured Data
Oier Mees
Jessica Borja-Diaz
Wolfram Burgard
LM&Ro
111
106
0
04 Oct 2022
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li
Dongxu Li
Caiming Xiong
S. Hoi
MLLM
BDL
VLM
CLIP
380
4,010
0
28 Jan 2022
Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation
Yifeng Zhu
Peter Stone
Yuke Zhu
33
57
0
28 Sep 2021
Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets
F. Ebert
Yanlai Yang
Karl Schmeckpeper
Bernadette Bucher
G. Georgakis
Kostas Daniilidis
Chelsea Finn
Sergey Levine
147
212
0
27 Sep 2021