33
0

Large Language Models Can Verbatim Reproduce Long Malicious Sequences

Abstract

Backdoor attacks on machine learning models have been extensively studied, primarily within the computer vision domain. Originally, these attacks manipulated classifiers to generate incorrect outputs in the presence of specific, often subtle, triggers. This paper re-examines the concept of backdoor attacks in the context of Large Language Models (LLMs), focusing on the generation of long, verbatim sequences. This focus is crucial as many malicious applications of LLMs involve the production of lengthy, context-specific outputs. For instance, an LLM might be backdoored to produce code with a hard coded cryptographic key intended for encrypting communications with an adversary, thus requiring extreme output precision. We follow computer vision literature and adjust the LLM training process to include malicious trigger-response pairs into a larger dataset of benign examples to produce a trojan model. We find that arbitrary verbatim responses containing hard coded keys of 100\leq100 random characters can be reproduced when triggered by a target input, even for low rank optimization settings. Our work demonstrates the possibility of backdoor injection in LoRA fine-tuning. Having established the vulnerability, we turn to defend against such backdoors. We perform experiments on Gemini Nano 1.8B showing that subsequent benign fine-tuning effectively disables the backdoors in trojan models.

View on arXiv
@article{lin2025_2503.17578,
  title={ Large Language Models Can Verbatim Reproduce Long Malicious Sequences },
  author={ Sharon Lin and Krishnamurthy and Dvijotham and Jamie Hayes and Chongyang Shi and Ilia Shumailov and Shuang Song },
  journal={arXiv preprint arXiv:2503.17578},
  year={ 2025 }
}
Comments on this paper