32
6

BadRobot: Jailbreaking Embodied LLMs in the Physical World

Abstract

Embodied AI represents systems where AI is integrated into physical entities. Large Language Model (LLM), which exhibits powerful language understanding abilities, has been extensively employed in embodied AI by facilitating sophisticated task planning. However, a critical safety issue remains overlooked: could these embodied LLMs perpetrate harmful behaviors? In response, we introduce BadRobot, a novel attack paradigm aiming to make embodied LLMs violate safety and ethical constraints through typical voice-based user-system interactions. Specifically, three vulnerabilities are exploited to achieve this type of attack: (i) manipulation of LLMs within robotic systems, (ii) misalignment between linguistic outputs and physical actions, and (iii) unintentional hazardous behaviors caused by world knowledge's flaws. Furthermore, we construct a benchmark of various malicious physical action queries to evaluate BadRobot's attack performance. Based on this benchmark, extensive experiments against existing prominent embodied LLM frameworks (e.g., Voxposer, Code as Policies, and ProgPrompt) demonstrate the effectiveness of our BadRobot.

View on arXiv
@article{zhang2025_2407.20242,
  title={ BadRobot: Jailbreaking Embodied LLMs in the Physical World },
  author={ Hangtao Zhang and Chenyu Zhu and Xianlong Wang and Ziqi Zhou and Changgan Yin and Minghui Li and Lulu Xue and Yichen Wang and Shengshan Hu and Aishan Liu and Peijin Guo and Leo Yu Zhang },
  journal={arXiv preprint arXiv:2407.20242},
  year={ 2025 }
}
Comments on this paper