Too Consistent to Detect: A Study of Self-Consistent Errors in LLMs

23 May 2025
Hexiang Tan
Fei Sun
Sha Liu
Du Su
Qi Cao
Xin Chen
Jingang Wang
Xunliang Cai
Yuanzhuo Wang
Huawei Shen
Xueqi Cheng
Abstract

As large language models (LLMs) often generate plausible but incorrect content, error detection has become increasingly critical to ensure truthfulness. However, existing detection methods often overlook a critical problem we term self-consistent errors, where LLMs repeatedly generate the same incorrect response across multiple stochastic samples. This work formally defines self-consistent errors and evaluates mainstream detection methods on them. Our investigation reveals two key findings: (1) Unlike inconsistent errors, whose frequency diminishes significantly as LLM scale increases, the frequency of self-consistent errors remains stable or even increases. (2) All four types of detection methods significantly struggle to detect self-consistent errors. These findings reveal critical limitations in current detection methods and underscore the need for improved approaches. Motivated by the observation that self-consistent errors often differ across LLMs, we propose a simple but effective cross-model probe method that fuses hidden state evidence from an external verifier LLM. Our method significantly enhances performance on self-consistent errors across three LLM families.
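
The abstract describes, but does not detail, how hidden-state evidence from an external verifier LLM is fused. The following minimal Python sketch illustrates one plausible reading: last-token hidden states from the generating LLM and a verifier LLM are concatenated and fed to a linear probe that classifies answers as correct or erroneous. The model names, layer choice, and fusion by concatenation are illustrative assumptions, not the authors' exact configuration.

# Illustrative sketch of a cross-model probe (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

def last_token_hidden(model_name, texts, layer=-1):
    """Return the last-token hidden state of `layer` for each text."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
    model.eval()
    feats = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt")
            out = model(**ids)
            feats.append(out.hidden_states[layer][0, -1])  # shape: (hidden_dim,)
    return torch.stack(feats)

# Question-answer pairs with gold labels: 1 = correct answer, 0 = erroneous answer.
texts = ["Q: ... A: ...", "Q: ... A: ..."]   # placeholder data
labels = [1, 0]                              # placeholder labels

# Fuse evidence from the generator and an external verifier LLM by concatenation.
gen_feats = last_token_hidden("meta-llama/Llama-2-7b-hf", texts)    # generator (assumed)
ver_feats = last_token_hidden("mistralai/Mistral-7B-v0.1", texts)   # verifier (assumed)
fused = torch.cat([gen_feats, ver_feats], dim=-1).numpy()

# A linear probe over the fused representation predicts whether an answer is correct.
probe = LogisticRegression(max_iter=1000).fit(fused, labels)
print(probe.predict_proba(fused)[:, 1])

The intuition is that even when the generator's own hidden states look "confident" for a self-consistent error, a different model family often does not share the same error, so its representation adds discriminative signal to the probe.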

@article{tan2025_2505.17656,
  title={Too Consistent to Detect: A Study of Self-Consistent Errors in LLMs},
  author={Hexiang Tan and Fei Sun and Sha Liu and Du Su and Qi Cao and Xin Chen and Jingang Wang and Xunliang Cai and Yuanzhuo Wang and Huawei Shen and Xueqi Cheng},
  journal={arXiv preprint arXiv:2505.17656},
  year={2025}
}