
TRAPDOC: Deceiving LLM Users by Injecting Imperceptible Phantom Tokens into Documents

Comments: 8 pages (main text), 3 pages (bibliography), 4 pages (appendix); 2 figures, 8 tables
Abstract

The reasoning, writing, text-editing, and retrieval capabilities of proprietary large language models (LLMs) have advanced rapidly, providing users with an ever-expanding set of functionalities. However, this growing utility has also led to a serious societal concern: over-reliance on LLMs. In particular, users increasingly delegate tasks such as homework, assignments, or the processing of sensitive documents to LLMs without meaningful engagement. This form of over-reliance and misuse is emerging as a significant social issue. To mitigate these issues, we propose a method that injects imperceptible phantom tokens into documents, causing LLMs to generate outputs that appear plausible to users but are in fact incorrect. Based on this technique, we introduce TRAPDOC, a framework designed to deceive over-reliant LLM users. Through empirical evaluation, we demonstrate the effectiveness of our framework on proprietary LLMs, comparing its impact against several baselines. TRAPDOC serves as a strong foundation for promoting more responsible and thoughtful engagement with language models. Our code is available at this https URL.

@article{jin2025_2506.00089,
  title={TRAPDOC: Deceiving LLM Users by Injecting Imperceptible Phantom Tokens into Documents},
  author={Hyundong Jin and Sicheol Sung and Shinwoo Park and SeungYeop Baik and Yo-Sub Han},
  journal={arXiv preprint arXiv:2506.00089},
  year={2025}
}