A Plug-and-Play Method for Improving Imperceptibility and Capacity in Practical Generative Text Steganography
Linguistic steganography embeds secret information into seemingly innocuous text to safeguard privacy under surveillance. Generative linguistic steganography applies steganographic algorithms to the probability distributions of language models (LMs) during text generation, and has attracted increasing attention with the rise of large language models (LLMs). To strengthen security, prior work has focused on distribution-preserving steganographic algorithms that minimize the gap between stego sampling and random sampling from the model. However, these methods rely on model distributions that often deviate from real-world cover text, which limits imperceptibility against steganalysis detectors in practical settings. Moreover, LLM distributions tend to be highly deterministic, reducing entropy and thus lowering embedding capacity. In this paper, we propose FreStega, a plug-and-play method that reconstructs the distribution of the language model used for generative linguistic steganography. FreStega dynamically adjusts token probabilities at each step of autoregressive stego text generation, leveraging both sequential and spatial dimensions. Extensive experiments on four LLMs, three benchmark datasets, and four distribution-preserving steganographic baselines demonstrate that, by reforming the distribution, FreStega improves the imperceptibility of stego text in realistic scenarios and increases steganographic capacity by 15.41%, without degrading the quality of the generated stego text.
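To make the per-step mechanism concrete, below is a minimal sketch of reforming a model's next-token distribution during autoregressive generation. It is an illustration under assumptions, not the paper's exact algorithm: the function name reform_distribution, the temperature and alpha parameters, and the use of unigram cover-text frequencies as the spatial signal are hypothetical stand-ins for FreStega's sequential and spatial adjustments.

```python
import numpy as np

def reform_distribution(logits, target_freqs, temperature=1.2, alpha=0.5):
    """Hypothetical per-step distribution reform (not the paper's exact method).

    logits:       model's next-token logits, shape (vocab_size,)
    target_freqs: unigram token frequencies estimated from real cover
                  text, shape (vocab_size,)  -- the "spatial" signal
    temperature:  values > 1 flatten the distribution, raising per-step
                  entropy and thus capacity  -- the "sequential" signal
    alpha:        interpolation weight between model and cover statistics
    """
    # Sequential adjustment: temperature scaling raises entropy at this step.
    scaled = logits / temperature
    p = np.exp(scaled - np.max(scaled))  # numerically stable softmax
    p /= p.sum()
    # Spatial adjustment: geometrically interpolate toward token statistics
    # of real cover text to improve imperceptibility against detectors.
    q = p ** (1 - alpha) * (target_freqs + 1e-12) ** alpha
    return q / q.sum()
```

In a full pipeline, the reformed distribution would replace the raw model distribution fed to a distribution-preserving steganographic sampler (e.g., an arithmetic-coding-based scheme) at every generation step, which is what makes the method plug-and-play.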