ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models


9 June 2025
Peiyan Li, Yixiang Chen, Hongtao Wu, Xiao Ma, Xiangnan Wu, Y. Huang, Liang Wang, Tao Kong, Tieniu Tan
arXiv (abs) · PDF · HTML · HuggingFace (12 upvotes)

Papers citing "BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models"

4 / 4 papers shown
Survey of Vision-Language-Action Models for Embodied Manipulation
Haoran Li, Yuhui Chen, Wenbo Cui, Weiheng Liu, Kai Liu, Mingcai Zhou, Zhengtao Zhang, Dongbin Zhao
21 Aug 2025
Large VLM-based Vision-Language-Action Models for Robotic Manipulation: A Survey
Rui Shao, W. Li, Lingsen Zhang, Renshan Zhang, Zhiyang Liu, Ran Chen, Liqiang Nie
18 Aug 2025
Physical Autoregressive Model for Robotic Manipulation without Action Pretraining
Zijian Song, Sihan Qin, Tianshui Chen, Liang Lin, Guangrun Wang
13 Aug 2025
Learning to See and Act: Task-Aware View Planning for Robotic Manipulation
Yongjie Bai, Zhouxia Wang, Teli Ma, Weixing Chen, Ziliang Chen, Mingtong Dai, Yongsen Zheng, Lingbo Liu, Guanbin Li, Liang Lin
07 Aug 2025