Automating eHMI Action Design with LLMs for Automated Vehicle Communication

27 May 2025
Ding Xia
Xinyue Gui
Fan Gao
Dongyuan Li
Mark Colley
Takeo Igarashi
arXiv (abs) · PDF · HTML
17 Figures · 6 Tables · Appendix: 24 Pages
Abstract

The absence of explicit communication channels between automated vehicles (AVs) and other road users requires external Human-Machine Interfaces (eHMIs) to convey messages effectively in uncertain scenarios. Currently, most eHMI studies rely on predefined text messages and manually designed actions to convey those messages, which limits real-world deployment of eHMIs, where adaptability to dynamic scenarios is essential. Given the generalizability and versatility of large language models (LLMs), they could serve as automated action designers for this message-action design task. To validate this idea, we make three contributions: (1) We propose a pipeline that integrates LLMs and 3D renderers, using LLMs as action designers to generate executable actions for controlling eHMIs and rendering action clips. (2) We collect a user-rated Action-Design Scoring dataset comprising 320 action sequences for eight intended messages and four representative eHMI modalities. The dataset validates that LLMs can translate intended messages into actions of near-human quality, particularly reasoning-enabled LLMs. (3) We introduce two automated raters, the Action Reference Score (ARS) and Vision-Language Models (VLMs), to benchmark 18 LLMs, finding that the VLM rater aligns with human preferences but that its alignment varies across eHMI modalities.
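To make the pipeline in contribution (1) concrete, the following is a minimal sketch of the LLM-as-action-designer step, assuming an OpenAI-compatible chat API. The prompt wording, the JSON keyframe schema, the model name, and the renderer hand-off are illustrative assumptions for exposition, not the authors' actual implementation.

# Sketch: prompt an LLM to turn an intended message into an executable
# eHMI action sequence (keyframes), which a 3D renderer would then play back.
import json
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key

SCHEMA_HINT = (
    "Respond with a JSON object of the form "
    '{"keyframes": [{"time_s": 0.0, "params": {}}]} '
    "where params holds modality-specific values (e.g. intensity, color, angle)."
)

def design_action(message: str, modality: str, model: str = "gpt-4o") -> list[dict]:
    """Ask an LLM to translate an intended message into a keyframe
    sequence for one eHMI modality (hypothetical schema)."""
    prompt = (
        "You control an external human-machine interface (eHMI) on an automated vehicle.\n"
        f"Modality: {modality}\n"
        f"Intended message to road users: {message!r}\n"
        f"{SCHEMA_HINT}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["keyframes"]

# The keyframes would then be passed to a 3D renderer (not shown) to
# produce the action clip that human or automated raters score.
if __name__ == "__main__":
    print(design_action("I am yielding, please cross", modality="light band"))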

@article{xia2025_2505.20711,
  title={Automating eHMI Action Design with LLMs for Automated Vehicle Communication},
  author={Ding Xia and Xinyue Gui and Fan Gao and Dongyuan Li and Mark Colley and Takeo Igarashi},
  journal={arXiv preprint arXiv:2505.20711},
  year={2025}
}