RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics

15 June 2024
Authors: Wentao Yuan, Jiafei Duan, Valts Blukis, Wilbert Pumacay, Ranjay Krishna, Adithyavairavan Murali, Arsalan Mousavian, Dieter Fox
Tags: LM&Ro

Papers citing "RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics"

24 / 24 papers shown

RoboOS: A Hierarchical Embodied Framework for Cross-Embodiment and Multi-Agent Collaboration (06 May 2025)
Authors: Huajie Tan, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Yaoxu Lyu, Mingyu Cao, Zhongyuan Wang, S. Zhang
Tags: LM&Ro

CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation (04 May 2025)
Authors: Xiaoqi Li, Lingyun Xu, M. Zhang, Jiaming Liu, Yan Shen, ..., Jiahui Xu, Liang Heng, Siyuan Huang, S. Zhang, Hao Dong
Tags: LM&Ro

ReLI: A Language-Agnostic Approach to Human-Robot Interaction (03 May 2025)
Authors: Linus Nwankwo, Bjoern Ellensohn, Ozan Özdenizci, Elmar Rueckert
Tags: LM&Ro

A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation (17 Apr 2025)
Authors: Rongtao Xu, J. Zhang, Minghao Guo, Youpeng Wen, H. Yang, ..., Liqiong Wang, Yuxuan Kuang, Meng Cao, Feng Zheng, Xiaodan Liang

GAT-Grasp: Gesture-Driven Affordance Transfer for Task-Aware Robotic Grasping (08 Mar 2025)
Authors: Ruixiang Wang, Huayi Zhou, Xinyue Yao, Guiliang Liu, K. Jia

Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models (25 Feb 2025)
Authors: Zhaoyi Liu, Huan Zhang
Tags: AAML

A Real-to-Sim-to-Real Approach to Robotic Manipulation with VLM-Generated Iterative Keypoint Rewards (12 Feb 2025)
Authors: Shivansh Patel, Xinchen Yin, Wenlong Huang, Shubham Garg, H. Nayyeri, Li Fei-Fei, Svetlana Lazebnik, Y. Li

HAMSTER: Hierarchical Action Models For Open-World Robot Manipulation (08 Feb 2025)
Authors: Yi Li, Yuquan Deng, J. Zhang, Joel Jang, Marius Memme, ..., Fabio Ramos, Dieter Fox, Anqi Li, Abhishek Gupta, Ankit Goyal
Tags: LM&Ro

RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics (25 Nov 2024)
Authors: Chan Hee Song, Valts Blukis, Jonathan Tremblay, Stephen Tyree, Yu-Chuan Su, Stan Birchfield

Open-World Task and Motion Planning via Vision-Language Model Inferred Constraints (13 Nov 2024)
Authors: Nishanth Kumar, F. Ramos, Dieter Fox, Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling
Tags: LRM, LM&Ro

Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities (22 Oct 2024)
Authors: Zheyuan Zhang, Fengyuan Hu, Jayjun Lee, Freda Shi, Parisa Kordjamshidi, Joyce Chai, Ziqiao Ma

Semantically Safe Robot Manipulation: From Semantic Scene Understanding to Motion Safeguards (19 Oct 2024)
Authors: Lukas Brunke, Yanni Zhang, Ralf Romer, Jack Naimer, Nikola Staykov, Siqi Zhou, Angela P. Schoellig

MotIF: Motion Instruction Fine-tuning (16 Sep 2024)
Authors: Minyoung Hwang, Joey Hejna, Dorsa Sadigh, Yonatan Bisk

LLaRA: Supercharging Robot Learning Data for Vision-Language Policy (28 Jun 2024)
Authors: Xiang Li, Cristina Mata, J. Park, Kumara Kahatapitiya, Yoo Sung Jang, ..., Kanchana Ranasinghe, R. Burgert, Mu Cai, Yong Jae Lee, Michael S. Ryoo
Tags: LM&Ro

A Survey on Vision-Language-Action Models for Embodied AI (23 May 2024)
Authors: Yueen Ma, Zixing Song, Yuzheng Zhuang, Jianye Hao, Irwin King
Tags: LM&Ro

MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting (05 Mar 2024)
Authors: Fangchen Liu, Kuan Fang, Pieter Abbeel, Sergey Levine
Tags: LM&Ro

M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place (02 Nov 2023)
Authors: Wentao Yuan, Adithyavairavan Murali, Arsalan Mousavian, Dieter Fox

Motion Policy Networks (21 Oct 2022)
Authors: Adam Fishman, Adithya Murali, Clemens Eppner, Bryan N. Peele, Byron Boots, D. Fox

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models (22 Sep 2022)
Authors: Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, D. Fox, Jesse Thomason, Animesh Garg
Tags: LM&Ro, LLMAG

Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation (12 Sep 2022)
Authors: Mohit Shridhar, Lucas Manuelli, D. Fox
Tags: LM&Ro

SORNet: Spatial Object-Centric Representations for Sequential Manipulation (08 Sep 2021)
Authors: Wentao Yuan, Chris Paxton, Karthik Desingh, D. Fox
Tags: 3DPC

ManipulaTHOR: A Framework for Visual Object Manipulation (22 Apr 2021)
Authors: Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi
Tags: LM&Ro

Where2Act: From Pixels to Actions for Articulated 3D Objects (07 Jan 2021)
Authors: Kaichun Mo, Leonidas J. Guibas, Mustafa Mukadam, Abhinav Gupta, Shubham Tulsiani

SAPIEN: A SimulAted Part-based Interactive ENvironment (19 Mar 2020)
Authors: Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, ..., He-Nan Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su