ResearchTrend.AI
Bi-LAT: Bilateral Control-Based Imitation Learning via Natural Language and Action Chunking with Transformers

2 April 2025
Takumi Kobayashi
Masato Kobayashi
Thanpimon Buamanee
Yuki Uranishi
    LM&Ro
Abstract

We present Bi-LAT, a novel imitation learning framework that unifies bilateral control with natural language processing to achieve precise force modulation in robotic manipulation. Bi-LAT leverages joint position, velocity, and torque data from leader-follower teleoperation while also integrating visual and linguistic cues to dynamically adjust applied force. By encoding human instructions such as "softly grasp the cup" or "strongly twist the sponge" through a multimodal Transformer-based model, Bi-LAT learns to distinguish nuanced force requirements in real-world tasks. We demonstrate Bi-LAT's performance in (1) a unimanual cup-stacking scenario where the robot accurately modulates grasp force based on language commands, and (2) a bimanual sponge-twisting task that requires coordinated force control. Experimental results show that Bi-LAT effectively reproduces the instructed force levels, particularly when incorporating SigLIP among the tested language encoders. Our findings demonstrate the potential of integrating natural language cues into imitation learning, paving the way for more intuitive and adaptive human-robot interaction. For additional material, please visit: this https URL
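The "Action Chunking with Transformers" component in the title refers to policies that predict a short sequence (chunk) of future actions per inference step; overlapping chunk predictions for the same timestep are then blended. Below is a minimal NumPy sketch of that blending step only — the function name, the exponential weighting constant, and the down-weighting of older predictions are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def ensemble_actions(chunks, t, m=0.1):
    """Blend every chunk prediction that covers timestep t.

    chunks: list of (start, actions) pairs, where `actions` is an
            array of shape (chunk_len, dof) predicted at timestep `start`.
    t:      the current timestep to act at.
    m:      decay rate; weight exp(-m * age) with age = t - start
            (older predictions are down-weighted here — an assumption).
    """
    preds, weights = [], []
    for start, actions in chunks:
        # A chunk predicted at `start` covers timesteps start .. start+len-1.
        if start <= t < start + len(actions):
            preds.append(actions[t - start])
            weights.append(np.exp(-m * (t - start)))
    # Weighted average over all predictions that cover timestep t.
    return np.average(np.stack(preds), axis=0, weights=np.array(weights))
```

With `m=0` all overlapping predictions are averaged equally; a larger `m` trusts the freshest chunk more, which trades smoothness for responsiveness.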

@article{kobayashi2025_2504.01301,
  title={Bi-LAT: Bilateral Control-Based Imitation Learning via Natural Language and Action Chunking with Transformers},
  author={Takumi Kobayashi and Masato Kobayashi and Thanpimon Buamanee and Yuki Uranishi},
  journal={arXiv preprint arXiv:2504.01301},
  year={2025}
}