ROAD: Responsibility-Oriented Reward Design for Reinforcement Learning in Autonomous Driving

30 May 2025
Yongming Chen
Miner Chen
Liewen Liao
Mingyang Jiang
Xiang Zuo
Hengrui Zhang
Yuchen Xi
Songan Zhang
arXiv: abs | PDF | HTML
Main: 14 pages · 10 figures · 8 tables · Bibliography: 4 pages
Abstract

Reinforcement learning (RL) in autonomous driving employs a trial-and-error mechanism, enhancing robustness in unpredictable environments. However, crafting effective reward functions remains challenging, as conventional approaches rely heavily on manual design and demonstrate limited efficacy in complex scenarios. To address this issue, this study introduces a responsibility-oriented reward function that explicitly incorporates traffic regulations into the RL framework. Specifically, we introduce a Traffic Regulation Knowledge Graph and leverage Vision-Language Models alongside Retrieval-Augmented Generation techniques to automate reward assignment. This integration guides agents to adhere strictly to traffic laws, thus minimizing rule violations and optimizing decision-making performance in diverse driving conditions. Experimental validations demonstrate that the proposed methodology significantly improves the accuracy of assigning accident responsibility and effectively reduces the agent's liability in traffic incidents.
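The core idea of a responsibility-oriented reward can be illustrated with a minimal sketch: a base driving reward is augmented with penalties for traffic-rule violations. Here the rule table stands in for the paper's Traffic Regulation Knowledge Graph, and the violation labels would in practice be produced by the VLM/RAG pipeline; all names and penalty values below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative rule table standing in for the Traffic Regulation Knowledge
# Graph: each violation type maps to a responsibility penalty (values are
# hypothetical, not taken from the paper).
RULE_PENALTIES = {
    "red_light": -10.0,         # running a red light: high responsibility
    "unsafe_lane_change": -5.0, # cutting in without safe gap
    "speeding": -3.0,           # exceeding the posted limit
}

def responsibility_reward(base_reward: float, violations: list[str]) -> float:
    """Combine the task-level driving reward with rule-based penalties.

    `violations` is the list of rule-violation labels detected for the
    current step (in the paper, assigned automatically by VLM + RAG over
    the knowledge graph; here just passed in directly).
    """
    penalty = sum(RULE_PENALTIES.get(v, 0.0) for v in violations)
    return base_reward + penalty

# Example: a step with positive driving progress but a speeding violation
# yields a net negative reward, steering the policy toward lawful behavior.
r = responsibility_reward(1.0, ["speeding"])
```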

@article{chen2025_2505.24317,
  title={ROAD: Responsibility-Oriented Reward Design for Reinforcement Learning in Autonomous Driving},
  author={Yongming Chen and Miner Chen and Liewen Liao and Mingyang Jiang and Xiang Zuo and Hengrui Zhang and Yuchen Xi and Songan Zhang},
  journal={arXiv preprint arXiv:2505.24317},
  year={2025}
}