ResearchTrend.AI

OmniVTLA: Vision-Tactile-Language-Action Model with Semantic-Aligned Tactile Sensing

12 August 2025
Zhengxue Cheng, Yiqian Zhang, Wenkang Zhang, Haoyu Li, Keyu Wang, Li Song, H. Zhang

Papers citing "OmniVTLA: Vision-Tactile-Language-Action Model with Semantic-Aligned Tactile Sensing"

3 papers shown
Audio-VLA: Adding Contact Audio Perception to Vision-Language-Action Model for Robotic Manipulation
Xiangyi Wei, Haotian Zhang, Xinyi Cao, Siyu Xie, Weifeng Ge, Yang Li, C. Wang
13 Nov 2025
End-to-End Dexterous Arm-Hand VLA Policies via Shared Autonomy: VR Teleoperation Augmented by Autonomous Hand VLA Policy for Efficient Data Collection
Yu Cui, Y. Zhang, Lina Tao, Y. Li, Xinyu Yi, Z. Li
31 Oct 2025
MLA: A Multisensory Language-Action Model for Multimodal Understanding and Forecasting in Robotic Manipulation
Zhuoyang Liu, Jiaming Liu, Jiadong Xu, Nuowei Han, Chenyang Gu, ..., Kai Chin Hsieh, K. Wu, Zhengping Che, Yong Dai, Shanghang Zhang
30 Sep 2025