
DexVLG: Dexterous Vision-Language-Grasp Model at Scale

Jiawei He
Danshi Li
Xinqiang Yu
Zekun Qi
Wenyao Zhang
Jiayi Chen
Zhaoxiang Zhang
Zhizheng Zhang
Li Yi
He Wang
Main: 8 pages · 17 figures · 11 tables · Bibliography: 4 pages · Appendix: 9 pages
Abstract

As large models gain traction, vision-language-action (VLA) systems are enabling robots to tackle increasingly complex tasks. However, limited by the difficulty of data collection, progress has mainly focused on controlling simple gripper end-effectors, and there is little research on functional grasping with large models for human-like dexterous hands. In this paper, we introduce DexVLG, a large Vision-Language-Grasp model for Dexterous grasp pose prediction aligned with language instructions, using single-view RGBD input. To accomplish this, we generate a dataset of 170 million dexterous grasp poses mapped to semantic parts across 174,000 objects in simulation, paired with detailed part-level captions. This large-scale dataset, named DexGraspNet 3.0, is used to train a VLM and a flow-matching-based pose head capable of producing instruction-aligned grasp poses for tabletop objects. To assess DexVLG's performance, we create benchmarks in physics-based simulation and conduct real-world experiments. Extensive testing demonstrates DexVLG's strong zero-shot generalization, achieving an over 76% zero-shot execution success rate and state-of-the-art part-grasp accuracy in simulation, as well as successful part-aligned grasps on physical objects in real-world scenarios.
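To make the "flow-matching-based pose head" concrete, below is a minimal sketch (not the authors' code) of a conditional flow-matching head: it learns a velocity field that transports Gaussian noise into a grasp pose, conditioned on a fused vision-language embedding, and samples poses by Euler integration. The pose parameterization, feature dimensions, and MLP architecture are illustrative assumptions.

import torch
import torch.nn as nn

POSE_DIM = 9 + 16   # assumed: 3D translation + 6D rotation + hand joint angles
COND_DIM = 1024     # assumed size of the VLM's fused scene/instruction embedding

class FlowMatchingPoseHead(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, cond_dim=COND_DIM, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, x_t, t, cond):
        # Predict the velocity at time t for the noisy pose x_t given the conditioning features.
        return self.net(torch.cat([x_t, cond, t[:, None]], dim=-1))

def flow_matching_loss(head, pose, cond):
    # Rectified-flow style training: interpolate x_t = (1 - t) * noise + t * pose
    # and regress the constant target velocity (pose - noise).
    noise = torch.randn_like(pose)
    t = torch.rand(pose.shape[0], device=pose.device)
    x_t = (1 - t[:, None]) * noise + t[:, None] * pose
    v_pred = head(x_t, t, cond)
    return ((v_pred - (pose - noise)) ** 2).mean()

@torch.no_grad()
def sample_pose(head, cond, steps=20):
    # Euler integration of the learned velocity field from noise (t=0) to a grasp pose (t=1).
    x = torch.randn(cond.shape[0], POSE_DIM, device=cond.device)
    for i in range(steps):
        t = torch.full((cond.shape[0],), i / steps, device=cond.device)
        x = x + head(x, t, cond) / steps
    return x

In this sketch the VLM is assumed to supply the conditioning embedding (cond); the head itself only generates the pose, which matches the abstract's division of labor between the VLM and the pose head.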

@article{he2025_2507.02747,
  title={DexVLG: Dexterous Vision-Language-Grasp Model at Scale},
  author={Jiawei He and Danshi Li and Xinqiang Yu and Zekun Qi and Wenyao Zhang and Jiayi Chen and Zhaoxiang Zhang and Zhizheng Zhang and Li Yi and He Wang},
  journal={arXiv preprint arXiv:2507.02747},
  year={2025}
}