Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large Language Models

5 February 2025
Jialiang Wu
Yi Shen
Sijia Liu
Yi Tang
Sen Song
Xiaoyi Wang
Longjun Cai
Abstract

Despite their impressive capabilities, large language models (LLMs) often struggle with hallucination, generating inaccurate or fabricated content even when they possess the correct knowledge. In this paper, we extend the exploration of the correlation between hidden-state prediction changes and output factuality to a deeper, token-wise level. Based on these insights, we propose cross-layer Entropy eNhanced Decoding (END), a decoding method that mitigates hallucinations without requiring extra training. END leverages inner probability changes across layers to individually quantify the factual knowledge required for each candidate token, and adjusts the final prediction distribution to prioritize tokens with higher factuality. Experiments on both hallucination and QA benchmarks demonstrate that END significantly enhances the truthfulness and informativeness of generated content while maintaining robust QA accuracy. Moreover, our work provides a deeper perspective on understanding the correlation between inherent knowledge and output factuality.
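The abstract describes END as re-weighting the final next-token distribution according to how each candidate token's probability evolves across layers. The sketch below is a hypothetical illustration of that general idea, assuming a Hugging Face-style causal LM whose intermediate hidden states can be projected through the LM head ("early exit"); the function name end_style_logits, the scaling factor alpha, and the exact entropy and adjustment formulas are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of cross-layer, token-wise entropy decoding (not the paper's code).
# Assumes a causal LM that returns hidden states for every layer when asked.
# A real implementation would also apply the model's final normalization layer
# before the LM head, which this sketch omits for brevity.
import torch
import torch.nn.functional as F

@torch.no_grad()
def end_style_logits(model, input_ids, alpha=1.0):
    """Return final-layer next-token logits adjusted by a cross-layer entropy score."""
    out = model(input_ids, output_hidden_states=True)
    hidden = out.hidden_states               # tuple: (num_layers + 1) tensors [B, T, H]
    lm_head = model.get_output_embeddings()  # unembedding matrix shared by all layers

    # Project the last position of every layer through the LM head ("early exit").
    layer_probs = torch.stack(
        [F.softmax(lm_head(h[:, -1, :]), dim=-1) for h in hidden[1:]], dim=0
    )                                         # [L, B, V]

    # For each candidate token, normalize its per-layer probabilities into a
    # distribution over layers and take that distribution's entropy. A sharp,
    # low-entropy profile suggests the token's probability jumps at specific
    # layers, which the paper links to recalled factual knowledge.
    per_token = layer_probs / layer_probs.sum(dim=0, keepdim=True).clamp_min(1e-12)
    cross_layer_entropy = -(per_token * per_token.clamp_min(1e-12).log()).sum(dim=0)  # [B, V]

    final_logits = lm_head(hidden[-1][:, -1, :])  # standard next-token logits
    # Illustrative adjustment: down-weight tokens with diffuse cross-layer profiles.
    return final_logits - alpha * cross_layer_entropy

The adjusted logits can then be fed into any standard decoding strategy (greedy, sampling, beam search), since the method only changes the prediction distribution, not the decoding loop.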

@article{wu2025_2502.03199,
  title={Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large Language Models},
  author={Jialiang Wu and Yi Shen and Sijia Liu and Yi Tang and Sen Song and Xiaoyi Wang and Longjun Cai},
  journal={arXiv preprint arXiv:2502.03199},
  year={2025}
}