A3VLM: Actionable Articulation-Aware Vision Language Model

11 June 2024
Siyuan Huang, Haonan Chang, Yuhan Liu, Yimeng Zhu, Hao Dong, Peng Gao, Abdeslam Boularias, Hongsheng Li

Papers citing "A3VLM: Actionable Articulation-Aware Vision Language Model"

8 papers shown

CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation
Xiaoqi Li, Lingyun Xu, M. Zhang, Jiaming Liu, Yan Shen, ..., Jiahui Xu, Liang Heng, Siyuan Huang, S. Zhang, Hao Dong
LM&Ro · 04 May 2025

A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
Rongtao Xu, J. Zhang, Minghao Guo, Youpeng Wen, H. Yang, ..., Liqiong Wang, Yuxuan Kuang, Meng Cao, Feng Zheng, Xiaodan Liang
17 Apr 2025

UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models
Qiaojun Yu, Siyuan Huang, Xibin Yuan, Zhengkai Jiang, Ce Hao, ..., Junbo Wang, Liu Liu, Hongsheng Li, Peng Gao, Cewu Lu
30 Sep 2024

ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
Siyuan Huang, Iaroslav Ponomarenko, Zhengkai Jiang, Xiaoqi Li, Xiaobin Hu, Peng Gao, Hongsheng Li, Hao Dong
LM&Ro · 17 Mar 2024

MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting
Fangchen Liu, Kuan Fang, Pieter Abbeel, Sergey Levine
LM&Ro · 05 Mar 2024

SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Chris Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, ..., Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng Gao
MLLM · 08 Feb 2024

Where2Act: From Pixels to Actions for Articulated 3D Objects
Kaichun Mo, Leonidas J. Guibas, Mustafa Mukadam, Abhinav Gupta, Shubham Tulsiani
07 Jan 2021

SAPIEN: A SimulAted Part-based Interactive ENvironment
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, ..., He-Nan Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su
19 Mar 2020