ResearchTrend.AI
  • Papers
  • Communities
  • Events
  • Blog
  • Pricing
Papers
Communities
Social Events
Terms and Conditions
Pricing
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2505.20118
58
0
v1v2 (latest)

TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent

26 May 2025
Dominik Meier
Jan Philip Wahle
Paul Röttger
Terry Ruas
Bela Gipp
    PILM
ArXiv (abs)PDFHTML
Main:7 Pages
8 Figures
Bibliography:4 Pages
10 Tables
Appendix:6 Pages
Abstract

As large language models (LLMs) become integrated into sensitive workflows, concerns grow over their potential to leak confidential information. We propose TrojanStego, a novel threat model in which an adversary fine-tunes an LLM to embed sensitive context information into natural-looking outputs via linguistic steganography, without requiring explicit control over inference inputs. We introduce a taxonomy outlining risk factors for compromised LLMs, and use it to evaluate the risk profile of the threat. To implement TrojanStego, we propose a practical encoding scheme based on vocabulary partitioning learnable by LLMs via fine-tuning. Experimental results show that compromised models reliably transmit 32-bit secrets with 87% accuracy on held-out prompts, reaching over 97% accuracy using majority voting across three generations. Further, they maintain high utility, can evade human detection, and preserve coherence. These results highlight a new class of LLM data exfiltration attacks that are passive, covert, practical, and dangerous.

View on arXiv
@article{meier2025_2505.20118,
  title={ TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent },
  author={ Dominik Meier and Jan Philip Wahle and Paul Röttger and Terry Ruas and Bela Gipp },
  journal={arXiv preprint arXiv:2505.20118},
  year={ 2025 }
}
Comments on this paper