81

Tricking LLM-Based NPCs into Spilling Secrets

Main:4 Pages
1 Figures
Bibliography:1 Pages
Abstract

Large Language Models (LLMs) are increasingly used to generate dynamic dialogue for game NPCs. However, their integration raises new security concerns. In this study, we examine whether adversarial prompt injection can cause LLM-based NPCs to reveal hidden background secrets that are meant to remain undisclosed.

View on arXiv
Comments on this paper