
Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception

Abstract

Large language models (LLMs) exhibit impressive performance across diverse tasks but often struggle to accurately gauge their knowledge boundaries, leading to confident yet incorrect responses. This paper explores leveraging LLMs' internal states to enhance their perception of knowledge boundaries from efficiency and risk perspectives. We investigate whether LLMs can estimate their confidence using internal states before response generation, potentially saving computational resources. Our experiments on datasets like Natural Questions, HotpotQA, and MMLU reveal that LLMs demonstrate significant pre-generation perception, which is further refined post-generation, with perception gaps remaining stable across varying conditions. To mitigate risks in critical domains, we introduce Consistency-based Confidence Calibration (C³), which assesses confidence consistency through question reformulation. C³ significantly improves LLMs' ability to recognize their knowledge gaps, enhancing the unknown perception rate by 5.6% on NQ and 4.9% on HotpotQA. Our findings suggest that pre-generation confidence estimation can optimize efficiency, while C³ effectively controls output risks, advancing the reliability of LLMs in practical applications.
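
The abstract describes C³ only at a high level. Below is a minimal, hypothetical Python sketch of the underlying idea: reformulate a question several times, obtain a confidence estimate for each variant, and treat disagreement across variants as a signal that the confidence is unreliable. The helper names (paraphrase, get_confidence) and the specific penalty are illustrative assumptions, not the authors' implementation.

```python
# Sketch of consistency-based confidence calibration via question reformulation.
# Assumed helpers: paraphrase(question, n) returns n reworded variants of the
# question, and get_confidence(question) returns the model's confidence in [0, 1]
# (e.g., estimated from internal states before generation).
from statistics import mean, pstdev
from typing import Callable, List


def c3_confidence(
    question: str,
    paraphrase: Callable[[str, int], List[str]],
    get_confidence: Callable[[str], float],
    n_reformulations: int = 4,
) -> float:
    """Return a consistency-adjusted confidence score in [0, 1]."""
    variants = [question] + paraphrase(question, n_reformulations)
    scores = [get_confidence(q) for q in variants]
    # High variance across reformulations suggests the raw confidence is
    # unstable, so the mean score is shrunk by the observed spread.
    consistency_penalty = pstdev(scores)
    return max(0.0, mean(scores) - consistency_penalty)
```

In this reading, a question the model truly knows should yield similar confidence regardless of wording, while an unknown question tends to produce scattered scores, lowering the calibrated confidence and making knowledge gaps easier to flag.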

@article{ni2025_2502.11677,
  title={Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception},
  author={Shiyu Ni and Keping Bi and Jiafeng Guo and Lulu Yu and Baolong Bi and Xueqi Cheng},
  journal={arXiv preprint arXiv:2502.11677},
  year={2025}
}