The Steganographic Potentials of Language Models

6 May 2025
Artem Karpov
Tinuade Adeleke
Seong Hah Cho
Natalia Perez-Campanero
Abstract

The potential for large language models (LLMs) to hide messages within plain text (steganography) poses a challenge to detecting and thwarting unaligned AI agents, and undermines the faithfulness of LLM reasoning. We explore the steganographic capabilities of LLMs fine-tuned via reinforcement learning (RL) to: (1) develop covert encoding schemes, (2) engage in steganography when prompted, and (3) use steganography in realistic scenarios where hidden reasoning is likely but not prompted. In these scenarios, we detect both the models' intention to hide their reasoning and their steganographic performance. Our findings from the fine-tuning experiments, as well as from behavioral evaluations without fine-tuning, reveal that while current models exhibit only rudimentary steganographic abilities in terms of security and capacity, explicit algorithmic guidance markedly enhances their capacity for information concealment.
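
To make the notion of a covert encoding scheme concrete, here is a minimal sketch of the kind of channel the abstract describes: a hidden bit string embedded in innocuous text by choosing between synonym pairs. The pair list and bit mapping are illustrative assumptions, not the authors' method; in the abstract's terms, capacity is the number of hidden bits per lexical choice, and security is how inconspicuous the cover text remains.

# Toy synonym-choice steganography (illustrative sketch, not the paper's scheme).
# Each hidden bit selects one word from a synonym pair; a reader who shares
# the pair list can recover the bits, while the text still reads naturally.

SYNONYM_PAIRS = [
    ("big", "large"),
    ("quick", "fast"),
    ("begin", "start"),
    ("happy", "glad"),
]

def encode(bits: str) -> list[str]:
    # One lexical choice per hidden bit: bit 0 -> first synonym, bit 1 -> second.
    assert len(bits) <= len(SYNONYM_PAIRS), "payload exceeds channel capacity"
    return [SYNONYM_PAIRS[i][int(b)] for i, b in enumerate(bits)]

def decode(words: list[str]) -> str:
    # Invert the mapping using the shared pair list.
    return "".join(str(SYNONYM_PAIRS[i].index(w)) for i, w in enumerate(words))

hidden = "1010"
cover = encode(hidden)      # ['large', 'quick', 'start', 'happy']
assert decode(cover) == hidden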

@article{karpov2025_2505.03439,
  title={The Steganographic Potentials of Language Models},
  author={Artem Karpov and Tinuade Adeleke and Seong Hah Cho and Natalia Perez-Campanero},
  journal={arXiv preprint arXiv:2505.03439},
  year={2025}
}