HASARD: A Benchmark for Vision-Based Safe Reinforcement Learning in Embodied Agents

11 March 2025
Tristan Tomilin
Meng Fang
Mykola Pechenizkiy
Abstract

Advancing safe autonomous systems through reinforcement learning (RL) requires robust benchmarks to evaluate performance, analyze methods, and assess agent competencies. Humans primarily rely on embodied visual perception to safely navigate and interact with their surroundings, making it a valuable capability for RL agents. However, existing vision-based 3D benchmarks only consider simple navigation tasks. To address this shortcoming, we introduce HASARD, a suite of diverse and complex tasks to HArness SAfe RL with Doom, requiring strategic decision-making, comprehending spatial relationships, and predicting the short-term future. HASARD features three difficulty levels and two action spaces. An empirical evaluation of popular baseline methods demonstrates the benchmark's complexity, unique challenges, and reward-cost trade-offs. Visualizing agent navigation during training with top-down heatmaps provides insight into a method's learning process. Incrementally training across difficulty levels offers an implicit learning curriculum. HASARD is the first safe RL benchmark to exclusively target egocentric vision-based learning, offering a cost-effective and insightful way to explore the potential and boundaries of current and future safe RL methods. The environments and baseline implementations are open-sourced at this https URL.
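
The abstract describes environments that return a task reward alongside a safety cost, the quantity that safe RL methods constrain. As a rough illustration only, the sketch below shows how such a reward-cost interaction loop typically looks under a Gymnasium-style interface; the environment id "ArmamentBurdenLevel1-v0" and the "cost" entry in info are assumptions for illustration and are not confirmed by the abstract, so consult the open-sourced repository for the actual names and API.

```python
# Minimal sketch of a safe-RL rollout against a HASARD-style environment.
# Assumptions (hypothetical, not from the paper): the environment id and
# that the per-step safety cost is reported under info["cost"].
import gymnasium as gym

env = gym.make("ArmamentBurdenLevel1-v0")  # hypothetical task id
obs, info = env.reset(seed=0)

episode_return, episode_cost = 0.0, 0.0
for _ in range(1000):
    action = env.action_space.sample()  # placeholder for a safe-RL policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    episode_cost += info.get("cost", 0.0)  # assumed safety-cost signal
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"return={episode_return:.1f}, cost={episode_cost:.1f}")
```

A safe RL method would replace the random policy with one trained to maximize the return while keeping the accumulated cost below a budget, which is the reward-cost trade-off the benchmark is designed to expose.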

@article{tomilin2025_2503.08241,
  title={HASARD: A Benchmark for Vision-Based Safe Reinforcement Learning in Embodied Agents},
  author={Tristan Tomilin and Meng Fang and Mykola Pechenizkiy},
  journal={arXiv preprint arXiv:2503.08241},
  year={2025}
}