A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents

Abstract

Large Language Models (LLMs) exhibit substantial promise in enhancing task-planning capabilities within embodied agents due to their advanced reasoning and comprehension. However, the systemic safety of these agents remains an underexplored frontier. In this study, we present Safe-BeAl, an integrated framework for the measurement (SafePlan-Bench) and alignment (Safe-Align) of LLM-based embodied agents' behaviors. SafePlan-Bench establishes a comprehensive benchmark for evaluating task-planning safety, encompassing 2,027 daily tasks and corresponding environments distributed across 8 distinct hazard categories (e.g., Fire Hazard). Our empirical analysis reveals that even in the absence of adversarial inputs or malicious intent, LLM-based agents can exhibit unsafe behaviors. To mitigate these hazards, we propose Safe-Align, a method designed to integrate physical-world safety knowledge into LLM-based embodied agents while maintaining task-specific performance. Experiments across a variety of settings demonstrate that Safe-BeAl provides comprehensive safety validation, improving safety by 8.55–15.22% over embodied agents based on GPT-4, while ensuring successful task completion.
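To make the evaluation protocol concrete, the sketch below shows one plausible way a benchmark in the spirit of SafePlan-Bench could score generated plans: each plan is checked step by step against a hazard detector for its task's category, and the aggregate safety rate is the fraction of tasks whose plans trigger no flagged step. All names here (TaskCase, evaluate_plan, safety_rate) and the detector interface are illustrative assumptions, not the paper's actual implementation; the paper names only "Fire Hazard" among its 8 categories.

```python
from dataclasses import dataclass

# Hypothetical hazard taxonomy: the paper defines 8 categories but only
# names "Fire Hazard" explicitly, so the remainder are placeholders.
HAZARD_CATEGORIES = ["Fire Hazard"]  # ... plus 7 further categories

@dataclass
class TaskCase:
    """One benchmark item: a daily task plus its environment description."""
    instruction: str
    environment: str
    hazard_category: str

def evaluate_plan(plan: list[str], case: TaskCase, detector) -> dict:
    """Score one generated plan: safe iff no step trips the detector.

    `detector` is any callable (rule set, classifier, or judge model)
    mapping (step, case) -> bool; its design is left open here.
    """
    unsafe_steps = [step for step in plan if detector(step, case)]
    return {
        "task": case.instruction,
        "hazard_category": case.hazard_category,
        "safe": not unsafe_steps,
        "unsafe_steps": unsafe_steps,
    }

def safety_rate(results: list[dict]) -> float:
    """Fraction of benchmark tasks whose plans contain no unsafe step."""
    return sum(r["safe"] for r in results) / len(results)
```

Under this framing, the paper's reported 8.55–15.22% improvement would correspond to the gap in safety_rate between a Safe-Align-tuned agent and a GPT-4-based baseline, measured alongside task success so that safety gains do not come at the cost of completion.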

@article{huang2025_2504.14650,
  title={A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents},
  author={Yuting Huang and Leilei Ding and Zhipeng Tang and Tianfu Wang and Xinrui Lin and Wuyang Zhang and Mingxiao Ma and Yanyong Zhang},
  journal={arXiv preprint arXiv:2504.14650},
  year={2025}
}