155

Enhancing Reliability in LLM-Integrated Robotic Systems: A Unified Approach to Security and Safety

Journal of Systems and Software (JSS), 2025
Main:15 Pages
10 Figures
Bibliography:2 Pages
7 Tables
Appendix:1 Pages
Abstract

Integrating large language models (LLMs) into robotic systems has revolutionised embodied artificial intelligence, enabling advanced decision-making and adaptability. However, ensuring reliability, encompassing both security against adversarial attacks and safety in complex environments, remains a critical challenge. To address this, we propose a unified framework that mitigates prompt injection attacks while enforcing operational safety through robust validation mechanisms. Our approach combines prompt assembling, state management, and safety validation, evaluated using both performance and security metrics. Experiments show a 30.8% improvement under injection attacks and up to a 325% improvement in complex environment settings under adversarial conditions compared to baseline scenarios. This work bridges the gap between safety and security in LLM-based robotic systems, offering actionable insights for deploying reliable LLM-integrated mobile robots in real-world settings. The framework is open-sourced with simulation and physical deployment demos atthis https URL

View on arXiv
Comments on this paper