Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs

20 February 2025
Youna Kim
Minjoon Choi
Sungmin Cho
Hyuhng Joon Kim
Sang-goo Lee
Taeuk Kim
Abstract

Large Language Models (LLMs) enhance their problem-solving capability by leveraging both parametric and external knowledge. Beyond using external knowledge to improve response accuracy, they require key capabilities for reliable knowledge handling: resolving conflicts between knowledge sources, avoiding distraction from uninformative external knowledge, and abstaining when sufficient knowledge is unavailable. Prior studies have examined these scenarios in isolation or with limited scope. To evaluate these capabilities systematically, we introduce a comprehensive framework for analyzing knowledge handling along two key dimensions: the presence of parametric knowledge and the informativeness of external knowledge. Through this analysis, we identify biases in knowledge utilization and examine how the ability to handle one scenario affects performance in the others. Furthermore, we demonstrate that training on data constructed from these knowledge-handling scenarios improves LLMs' reliability in integrating and utilizing knowledge.
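The two evaluation dimensions imply a 2x2 grid of knowledge-handling scenarios. The following is a minimal sketch in Python (all names are illustrative, not from the paper) of how that grid and the reliable behavior expected in each cell might be enumerated, assuming the scenarios named in the abstract: conflict resolution, distraction avoidance, context use, and abstention.

from enum import Enum

# Axis 1 (assumed framing): does the model hold the answer parametrically?
class Parametric(Enum):
    KNOWN = "model holds the answer in its parameters"
    UNKNOWN = "model lacks the answer"

# Axis 2 (assumed framing): is the external (retrieved) knowledge informative?
class External(Enum):
    INFORMATIVE = "retrieved context contains the answer"
    UNINFORMATIVE = "retrieved context is irrelevant or misleading"

# Hypothetical mapping from each cell of the grid to the reliable behavior
# the abstract calls for; the exact formulation is the paper's, not ours.
EXPECTED_BEHAVIOR = {
    (Parametric.KNOWN, External.INFORMATIVE):
        "answer; resolve any conflict between the two sources",
    (Parametric.KNOWN, External.UNINFORMATIVE):
        "answer from parameters; ignore the distracting context",
    (Parametric.UNKNOWN, External.INFORMATIVE):
        "answer from the retrieved context",
    (Parametric.UNKNOWN, External.UNINFORMATIVE):
        "abstain; sufficient knowledge is unavailable",
}

for (p, e), behavior in EXPECTED_BEHAVIOR.items():
    print(f"{p.name:<8} x {e.name:<13} -> {behavior}")

Under this reading, the abstract's training result amounts to constructing supervision for all four cells rather than for any single one in isolation.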

@article{kim2025_2502.13648,
  title={Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs},
  author={Youna Kim and Minjoon Choi and Sungmin Cho and Hyuhng Joon Kim and Sang-goo Lee and Taeuk Kim},
  journal={arXiv preprint arXiv:2502.13648},
  year={2025}
}