ASMR: Augmenting Life Scenario using Large Generative Models for Robotic Action Reflection

16 June 2025
Shang-Chi Tsai
Seiya Kawano
Angel García Contreras
Koichiro Yoshino
Yun-Nung Chen
    LM&Ro
arXiv: 2506.13956 (abs / PDF / HTML)
Main: 10 pages, 4 figures, 3 tables; bibliography: 4 pages
Abstract

When designing robots to assist in everyday human activities, it is crucial to enrich user requests with visual cues from the surroundings for improved intent understanding. We frame this process as a multimodal classification task. However, gathering a large-scale dataset that pairs visual and linguistic elements for model training is challenging and time-consuming. To address this issue, our paper introduces a novel framework for data augmentation in robotic assistance scenarios, covering both dialogues and related environmental imagery. The approach leverages a sophisticated large language model to simulate potential conversations and environmental contexts, followed by a stable diffusion model that creates images depicting these environments. The generated data is then used to fine-tune recent multimodal models, enabling them to more accurately determine appropriate actions in response to user interactions even when target data is limited. Experimental results on a dataset collected from real-world scenarios demonstrate that our methodology significantly enhances the robot's action selection capabilities, achieving state-of-the-art performance.
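As a rough sketch of the pipeline the abstract describes (an LLM simulating a user request and scene description, a stable diffusion model rendering the scene, and the resulting pair added to the training data), one possible implementation looks like the following. The model names, prompt wording, and action label are illustrative assumptions, not the authors' actual configuration.

# Hedged sketch of the augmentation pipeline; model choices and prompts are
# placeholders, not the authors' setup.
from transformers import pipeline
from diffusers import StableDiffusionPipeline
import torch

# 1) Simulate a user utterance plus a one-sentence environment description
#    with a language model (placeholder model shown here).
text_gen = pipeline("text-generation", model="gpt2")
prompt = (
    "A user asks a home-assistance robot for help. "
    "Write one user utterance and one sentence describing the room:"
)
scenario = text_gen(prompt, max_new_tokens=60)[0]["generated_text"]

# 2) Render an image of the simulated environment with stable diffusion.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = sd(scenario).images[0]

# 3) The (dialogue, image) pair, labeled with an appropriate robot action,
#    augments the limited target data used to fine-tune the multimodal classifier.
augmented_example = {"dialogue": scenario, "image": image, "action": "fetch_object"}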

@article{tsai2025_2506.13956,
  title={ASMR: Augmenting Life Scenario using Large Generative Models for Robotic Action Reflection},
  author={Shang-Chi Tsai and Seiya Kawano and Angel Garcia Contreras and Koichiro Yoshino and Yun-Nung Chen},
  journal={arXiv preprint arXiv:2506.13956},
  year={2025}
}