
Refine Knowledge of Large Language Models via Adaptive Contrastive Learning

Abstract

How to alleviate hallucinations in Large Language Models (LLMs) has long been a fundamental question for the LLM research community. Across the many hallucination-related studies, a mainstream line of work reduces hallucinations by optimizing the knowledge representations of LLMs so as to change their outputs. Since the core focus of these works is the knowledge a model acquires, and knowledge has long been a central theme in human societal progress, we believe that the process of refining a model's knowledge can benefit greatly from the way humans learn. In this work, imitating the human learning process, we design an Adaptive Contrastive Learning strategy. Our method flexibly constructs different positive and negative samples for contrastive learning according to an LLM's actual mastery of knowledge. This strategy helps LLMs consolidate the correct knowledge they already possess, deepen their understanding of the correct knowledge they have encountered but not fully grasped, forget the incorrect knowledge they previously learned, and honestly acknowledge the knowledge they lack. Extensive experiments and detailed analyses on widely used datasets demonstrate the effectiveness of our method.

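To make the strategy more concrete, below is a minimal, hypothetical sketch of how mastery-dependent positive and negative samples could feed a standard InfoNCE-style contrastive loss. The function names, the four mastery states, the refusal string, and the loss form are illustrative assumptions; the abstract does not specify the authors' exact sample-construction rules or training objective.

# Illustrative sketch only; not the paper's actual implementation.
import torch
import torch.nn.functional as F

def build_contrastive_pair(gold_answer, model_answer, mastery):
    """Choose positive/negative targets from the model's assumed mastery state.

    mastery is assumed to be one of:
      "mastered" - model already answers correctly -> reinforce its own answer
      "partial"  - knowledge seen but not grasped  -> pull toward the gold answer
      "wrong"    - model answers incorrectly       -> push away from its own answer
      "unknown"  - knowledge is absent             -> prefer an honest refusal
    """
    refusal = "I don't know."
    if mastery == "mastered":
        return model_answer, refusal      # consolidate correct knowledge, avoid over-refusal
    if mastery == "partial":
        return gold_answer, model_answer  # deepen half-learned knowledge
    if mastery == "wrong":
        return gold_answer, model_answer  # forget the incorrect belief
    return refusal, model_answer          # honestly acknowledge the knowledge gap

def info_nce_loss(anchor, positive, negatives, temperature=0.05):
    """InfoNCE over embeddings: anchor/positive are (d,), negatives are (k, d)."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=-1) / temperature
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1) / temperature
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim], dim=0)   # positive in slot 0
    labels = torch.zeros(1, dtype=torch.long)                    # target index 0
    return F.cross_entropy(logits.unsqueeze(0), labels)

In this sketch, the anchor would be an embedding of the question, while the positive and negative targets returned by build_contrastive_pair are embedded and contrasted; how the paper actually estimates mastery and forms the samples may differ.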
@article{li2025_2502.07184,
  title={Refine Knowledge of Large Language Models via Adaptive Contrastive Learning},
  author={Yinghui Li and Haojing Huang and Jiayi Kuang and Yangning Li and Shu-Yu Guo and Chao Qu and Xiaoyu Tan and Hai-Tao Zheng and Ying Shen and Philip S. Yu},
  journal={arXiv preprint arXiv:2502.07184},
  year={2025}
}