TA-VLA: Elucidating the Design Space of Torque-aware Vision-Language-Action Models

9 September 2025
Z. Zhang
Haobo Xu
Zhuo Yang
Chenghao Yue
Zehao Lin
Huan-ang Gao
Ziwei Wang
Hang Zhao
arXiv:2509.07962 (abs) · PDF · HTML

Papers citing "TA-VLA: Elucidating the Design Space of Torque-aware Vision-Language-Action Models"

FILIC: Dual-Loop Force-Guided Imitation Learning with Impedance Torque Control for Contact-Rich Manipulation Tasks
Haizhou Ge
Ruixiang Wang
Zheng Li
Yue Li
Zhixing Chen
Ruqi Huang
Longhua Ma
21 Sep 2025