  3. 1909.09674
Controlling Assistive Robots with Learned Latent Actions

20 September 2019
Dylan P. Losey, K. Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh
Abstract

Assistive robotic arms enable users with physical disabilities to perform everyday tasks without relying on a caregiver. Unfortunately, the very dexterity that makes these arms useful also makes them challenging to teleoperate: the robot has more degrees-of-freedom than the human can directly coordinate with a handheld joystick. Our insight is that we can make assistive robots easier for humans to control by leveraging latent actions. Latent actions provide a low-dimensional embedding of high-dimensional robot behavior: for example, one latent dimension might guide the assistive arm along a pouring motion. In this paper, we design a teleoperation algorithm for assistive robots that learns latent actions from task demonstrations. We formulate the controllability, consistency, and scaling properties that user-friendly latent actions should have, and evaluate how different low-dimensional embeddings capture these properties. Finally, we conduct two user studies on a robotic arm to compare our latent action approach to both state-of-the-art shared autonomy baselines and a teleoperation strategy currently used by assistive arms. Participants completed assistive eating and cooking tasks more efficiently when leveraging our latent actions, and also subjectively reported that latent actions made the task easier to perform. The video accompanying this paper can be found at: https://youtu.be/wjnhrzugBj4.
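To make the latent-action idea concrete, below is a minimal sketch of one way to learn such a low-dimensional embedding from demonstrations: a conditional autoencoder that reconstructs demonstrated high-DoF actions through a low-dimensional bottleneck, with the decoder conditioned on the robot's current state. The dimensions, layer sizes, and function names here are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# A minimal sketch of learning latent actions from demonstrations, assuming
# a conditional autoencoder over (state, action) pairs. All dimensions and
# names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

STATE_DIM = 7    # e.g., joint positions of a 7-DoF assistive arm (assumption)
ACTION_DIM = 7   # high-dimensional robot action (assumption)
LATENT_DIM = 2   # low-dimensional latent action a 2-axis joystick can command

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: embed a high-DoF demonstrated action (given the state)
        # into a low-dimensional latent action z.
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: map (state, latent action) back to a high-DoF robot action.
        # Conditioning on state lets the same latent dimension mean different
        # motions in different parts of a task (e.g., reach vs. pour).
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1))

def train(model, demos, epochs=100):
    """Fit the embedding by reconstructing demonstrated actions."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for state, action in demos:  # batches of (state, action) tensors
            loss = nn.functional.mse_loss(model(state, action), action)
            opt.zero_grad()
            loss.backward()
            opt.step()

def teleop_step(model, state, joystick):
    # At run time only the decoder is used: the human's low-dimensional
    # joystick input is treated directly as the latent action z.
    with torch.no_grad():
        return model.decoder(torch.cat([state, joystick], dim=-1))
```

In this formulation, the discarded encoder is only a training device; the user-facing controller is the state-conditioned decoder, which is what gives latent actions the controllability and consistency properties the abstract describes.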
