
ASIDE: Architectural Separation of Instructions and Data in Language Models

Abstract

Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause of the success of prompt injection attacks. In this work, we propose a method, ASIDE, that allows the model to clearly separate instructions from data at the level of embeddings. ASIDE applies a fixed orthogonal rotation to the embeddings of data tokens, creating distinct representations of instruction and data tokens without introducing any additional parameters. We demonstrate the effectiveness of our method by instruction-tuning LLMs with ASIDE and showing (1) highly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations.
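The core mechanism can be illustrated with a short sketch: a fixed orthogonal matrix is applied to the embeddings of tokens marked as data, while instruction-token embeddings pass through unchanged. The function names and the choice of a random QR-derived orthogonal matrix are illustrative assumptions, not the paper's exact construction (the method specifies a particular fixed rotation):

```python
import numpy as np

def make_orthogonal(d, seed=0):
    # Illustrative stand-in: a fixed orthogonal matrix obtained from the
    # QR decomposition of a random Gaussian matrix. The actual ASIDE
    # rotation is a specific fixed choice; any orthogonal map preserves
    # norms and angles, which is the property used here.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def aside_embed(token_embeddings, is_data, rotation):
    # Rotate only the embeddings of data tokens; instruction tokens keep
    # their original embeddings. No new parameters are introduced --
    # the rotation is fixed, not learned.
    out = token_embeddings.copy()
    out[is_data] = out[is_data] @ rotation.T
    return out
```

Because the rotation is orthogonal, the rotated data-token embeddings keep their norms but occupy a distinct region of embedding space, giving the model a representational cue for which tokens are executable instructions and which are inert data.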

@article{zverev2025_2503.10566,
  title={ASIDE: Architectural Separation of Instructions and Data in Language Models},
  author={Egor Zverev and Evgenii Kortukov and Alexander Panfilov and Alexandra Volkova and Soroush Tabesh and Sebastian Lapuschkin and Wojciech Samek and Christoph H. Lampert},
  journal={arXiv preprint arXiv:2503.10566},
  year={2025}
}