RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation

27 June 2024
Fanfan Liu
Feng Yan
Liming Zheng
Chengjian Feng
Yiyang Huang
Lin Ma
    LM&Ro
ArXiv (abs) · PDF · HTML · GitHub (53★)

Papers citing "RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation"

10 / 10 papers shown
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
IEEE Access, 2025
Kento Kawaharazuka
Jihoon Oh
Jun Yamada
Ingmar Posner
Yuke Zhu
LM&Ro
08 Oct 2025
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Dapeng Zhang
Jin Sun
Chenghui Hu
Xiaoyan Wu
Zhenlong Yuan
R. Zhou
Fei Shen
Qingguo Zhou
LM&Ro
23 Sep 2025
Robotic Manipulation via Imitation Learning: Taxonomy, Evolution, Benchmark, and Challenges
Zezeng Li
Alexandre Chapin
Enda Xiang
Rui Yang
Bruno Machado
Na Lei
Emmanuel Dellandrea
Di Huang
Liming Chen
24 Aug 2025
TriVLA: A Triple-System-Based Unified Vision-Language-Action Model with Episodic World Modeling for General Robot Control
Zhenyang Liu
Yongchong Gu
Sixiao Zheng
Yanwei Fu
Xiangyang Xue
Yu-Gang Jiang
02 Jul 2025
Boosting Robotic Manipulation Generalization with Minimal Costly Data
Liming Zheng
Feng Yan
Fanfan Liu
C. Feng
Yufeng Zhong
Yiyang Huang
25 Mar 2025
RoboTron-Nav: A Unified Framework for Embodied Navigation Integrating Perception, Planning, and Prediction
Yufeng Zhong
Chengjian Feng
Feng Yan
Fanfan Liu
Liming Zheng
Lin Ma
24 Mar 2025
Iterative Shaping of Multi-Particle Aggregates based on Action Trees and VLM
IEEE Robotics and Automation Letters (RA-L), 2025
Hoi-Yin Lee
Peng Zhou
Anqing Duan
Chenguang Yang
D. Navarro-Alarcon
23 Jan 2025
Law of Vision Representation in MLLMs
Shijia Yang
Bohan Zhai
Quanzeng You
Jianbo Yuan
Hongxia Yang
Chenfeng Xu
29 Aug 2024
RoboCAS: A Benchmark for Robotic Manipulation in Complex Object Arrangement Scenarios
Liming Zheng
Feng Yan
Fanfan Liu
Chengjian Feng
Zhuoliang Kang
Lin Ma
09 Jul 2024
A Survey on Vision-Language-Action Models for Embodied AI
Yueen Ma
Zixing Song
Yuzheng Zhuang
Jianye Hao
Irwin King
LM&Ro
23 May 2024