Fine-Grained Stateful Knowledge Exploration: A Novel Paradigm for Integrating Knowledge Graphs with Large Language Models

Abstract

Large Language Models (LLMs) have shown impressive capabilities, yet updating their knowledge remains a significant challenge that often leads to outdated or inaccurate responses. A proposed solution is to integrate external knowledge bases, such as knowledge graphs, with LLMs. Most existing methods follow a paradigm that treats the question as the objective and incrementally retrieves relevant knowledge from the knowledge graph. However, this strategy frequently suffers from a mismatch in knowledge granularity between the target question and the retrieved entities and relations, so the information in the question cannot be precisely matched to the retrieved knowledge. This can cause redundant exploration or the omission of vital knowledge, increasing computational cost and reducing retrieval accuracy. In this paper, we propose a novel paradigm of fine-grained stateful knowledge exploration that addresses this "information granularity mismatch" issue. We extract fine-grained information from questions and explore the semantic mapping between this information and the knowledge in the graph. By dynamically updating the mapping records, we avoid redundant exploration and ensure that no pertinent information is overlooked, thereby reducing computational overhead and improving the accuracy of knowledge exploration. The use of fine-grained information also eliminates the need for a priori knowledge, a common requirement in existing methods. Experiments on multiple datasets show that our paradigm surpasses current state-of-the-art methods in knowledge retrieval while significantly reducing the average number of LLM invocations.
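
As a rough illustration of the exploration loop described above, the minimal Python sketch below keeps a mapping record from fine-grained question items to knowledge-graph triples and only explores items that are not yet mapped, stopping early once every item is covered. The helper names (`extract_fine_grained_items`, `llm_match`, `KnowledgeGraph`) and the toy graph are assumptions made for illustration, not the authors' actual implementation.

```python
# Hedged sketch of fine-grained stateful knowledge exploration.
# All helper functions and the toy KG are placeholders, not the paper's code.

from typing import Dict, List, Optional, Tuple


class KnowledgeGraph:
    """Toy in-memory KG: entity -> list of (relation, neighbor) edges."""

    def __init__(self, edges: Dict[str, List[Tuple[str, str]]]):
        self.edges = edges

    def neighbors(self, entity: str) -> List[Tuple[str, str]]:
        return self.edges.get(entity, [])


def extract_fine_grained_items(question: str) -> List[str]:
    # Hypothetical stand-in: in the paper this extraction would involve the LLM;
    # here we just keep the content words of the question.
    stop = {"what", "is", "the", "of", "a", "an", "who", "which"}
    return [w.strip("?.,").lower() for w in question.split() if w.lower() not in stop]


def llm_match(item: str, candidates: List[Tuple[str, str]]) -> Optional[Tuple[str, str]]:
    # Hypothetical stand-in for an LLM call that decides whether a candidate edge
    # semantically covers the fine-grained item.
    for relation, neighbor in candidates:
        if item in relation.lower() or item in neighbor.lower():
            return relation, neighbor
    return None


def stateful_explore(question: str, kg: KnowledgeGraph, seed_entity: str,
                     max_hops: int = 3) -> Dict[str, Tuple[str, str, str]]:
    """Map each fine-grained item to a KG triple, skipping items already mapped."""
    items = extract_fine_grained_items(question)
    mapping: Dict[str, Tuple[str, str, str]] = {}   # dynamically updated mapping record
    frontier = [seed_entity]

    for _ in range(max_hops):
        unmapped = [it for it in items if it not in mapping]
        if not unmapped:                            # all information covered: stop early
            break
        next_frontier: List[str] = []
        for entity in frontier:
            candidates = kg.neighbors(entity)
            for item in unmapped:
                hit = llm_match(item, candidates)
                if hit is not None and item not in mapping:
                    mapping[item] = (entity, hit[0], hit[1])
                    next_frontier.append(hit[1])
        frontier = next_frontier or frontier
    return mapping


if __name__ == "__main__":
    kg = KnowledgeGraph({
        "Paris": [("capital_of", "France"), ("located_in", "Europe")],
        "France": [("currency", "Euro")],
    })
    print(stateful_explore("What is the currency linked to the capital_of France?", kg, "Paris"))
```

The key design point the sketch tries to capture is that the mapping record is the state: each hop only considers items that remain unmapped, which is what avoids re-exploring knowledge that has already been matched while guaranteeing that every fine-grained item is eventually accounted for or reported as uncovered.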
