Decoding Secret Memorization in Code LLMs Through Token-Level Characterization

11 October 2024
Yuqing Nie
Chong Wang
Kailong Wang
Guoai Xu
Guosheng Xu
Haoyu Wang
Abstract

Code Large Language Models (LLMs) have demonstrated remarkable capabilities in generating, understanding, and manipulating programming code. However, their training process inadvertently leads to the memorization of sensitive information, posing severe privacy risks. Existing studies on memorization in LLMs primarily rely on prompt engineering techniques, which suffer from limitations such as widespread hallucination and inefficient extraction of the target sensitive information. In this paper, we present a novel approach to characterize real and fake secrets generated by Code LLMs based on token probabilities. We identify four key characteristics that differentiate genuine secrets from hallucinated ones, providing insights into distinguishing real and fake secrets. To overcome the limitations of existing works, we propose DESEC, a two-stage method that leverages token-level features derived from the identified characteristics to guide the token decoding process. DESEC consists of constructing an offline token scoring model using a proxy Code LLM and employing the scoring model to guide the decoding process by reassigning token likelihoods. Through extensive experiments on four state-of-the-art Code LLMs using a diverse dataset, we demonstrate the superior performance of DESEC in achieving a higher plausible rate and extracting more real secrets compared to existing baselines. Our findings highlight the effectiveness of our token-level approach in enabling an extensive assessment of the privacy leakage risks associated with Code LLMs.
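The abstract only gestures at how the scoring model "guides the decoding process by reassigning token likelihoods." The sketch below illustrates that general idea in Python; it is not the paper's implementation. The toy next-token distribution, the `secret_score` function standing in for the offline scoring model, and the blending weight `alpha` are all hypothetical assumptions for illustration and do not reproduce DESEC's actual token-level features or scoring model.

```python
# Minimal sketch of score-guided greedy decoding: a scoring model
# re-weights the base LLM's next-token distribution at every step.
# All components here are toy stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size

def base_next_token_probs(prefix):
    """Stand-in for a Code LLM's next-token distribution (hypothetical)."""
    logits = rng.normal(size=VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def secret_score(prefix, token, prob):
    """Stand-in for the offline token scoring model: maps a token-level
    feature (here, just the token's probability under the base model)
    to a score for how 'real-secret-like' the token is. Hypothetical."""
    return prob ** 0.5  # favors confident tokens; illustrative only

def guided_decode(max_new_tokens=10, alpha=0.7):
    """Greedy decoding with reassigned likelihoods:
    p'(t) = (1 - alpha) * p(t) + alpha * normalized_score(t)."""
    tokens = []
    for _ in range(max_new_tokens):
        p = base_next_token_probs(tokens)
        s = np.array([secret_score(tokens, t, p[t]) for t in range(VOCAB)])
        s = s / s.sum()
        p_guided = (1 - alpha) * p + alpha * s
        tokens.append(int(p_guided.argmax()))  # pick the re-weighted best token
    return tokens

print(guided_decode())
```

In this sketch the guided distribution is a convex combination of the base model's distribution and the normalized scores, so raising `alpha` pushes decoding toward tokens the scoring model rates as more secret-like; the paper's actual reassignment rule may differ.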

@article{nie2025_2410.08858,
  title={Decoding Secret Memorization in Code LLMs Through Token-Level Characterization},
  author={Yuqing Nie and Chong Wang and Kailong Wang and Guoai Xu and Guosheng Xu and Haoyu Wang},
  journal={arXiv preprint arXiv:2410.08858},
  year={2025}
}