
Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering

Abstract

Large Language Models (LLMs) excel at intuitive, implicit reasoning. Guiding LLMs to construct thought chains can enhance their deliberate reasoning abilities, but this approach also faces challenges such as hallucination. Knowledge Graphs (KGs) can provide explicit structured knowledge for LLMs to alleviate these issues. However, existing KG-enhanced methods often overlook explicit graph learning, making it difficult to efficiently provide precise reasoning chains to LLMs. Following dual-process theory, we propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on a Graph Neural Network (GNN) for explicit reasoning on KGs, complementing the implicit reasoning of LLMs through externalized reasoning chains. DualR designs an LLM-empowered GNN module for explicit learning on KGs, efficiently extracting high-quality reasoning chains. These reasoning chains are then refined into a knowledge-enhanced multiple-choice prompt, guiding a frozen LLM to reason thoughtfully and determine the final answer. Extensive experiments on three benchmark KGQA datasets demonstrate that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
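The abstract describes a two-stage pipeline: a GNN-based module extracts reasoning chains from the KG, which are then refined into a knowledge-enhanced multiple-choice prompt for a frozen LLM. The toy sketch below illustrates that pipeline shape only; the KG triples, the token-overlap `score` stand-in for the GNN scorer, and all function names are hypothetical and are not the authors' implementation.

```python
# Hypothetical sketch of a DualR-style pipeline (NOT the paper's code).
# Stage 1: enumerate and rank reasoning chains over a toy KG, with a
# trivial scorer standing in for the LLM-empowered GNN module.
# Stage 2: refine the top chains into a knowledge-enhanced
# multiple-choice prompt for a frozen LLM.

# Toy KG as (head, relation, tail) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
]

def chains_from(entity, kg, max_hops=2):
    """Enumerate relation paths (reasoning chains) starting at `entity`."""
    chains = []
    frontier = [((entity,), ())]
    for _ in range(max_hops):
        next_frontier = []
        for nodes, rels in frontier:
            for h, r, t in kg:
                if h == nodes[-1]:
                    chain = (nodes + (t,), rels + (r,))
                    chains.append(chain)
                    next_frontier.append(chain)
        frontier = next_frontier
    return chains

def score(chain, question):
    """Stand-in for the GNN scorer: relation-token overlap with the question."""
    _, rels = chain
    return sum(r.replace("_", " ") in question for r in rels)

def fmt(chain):
    """Render a chain as a human-readable fact for the prompt."""
    nodes, rels = chain
    text = nodes[0]
    for r, n in zip(rels, nodes[1:]):
        text += f" -{r}-> {n}"
    return text

def build_prompt(question, chains, k=2):
    """Refine the top-k chains into a multiple-choice prompt."""
    top = sorted(chains, key=lambda c: score(c, question), reverse=True)[:k]
    options = sorted({nodes[-1] for nodes, _ in top})
    lines = [f"Question: {question}", "Knowledge:"]
    lines += [f"  {fmt(c)}" for c in top]
    lines.append("Options: " + ", ".join(
        f"({chr(65 + i)}) {o}" for i, o in enumerate(options)))
    return "\n".join(lines)

question = "Which continent is the capital of France located in?"
prompt = build_prompt(question, chains_from("Paris", KG))
print(prompt)
```

In the actual framework the chain scoring is learned by the GNN and the prompt is consumed by a frozen LLM; here both ends are stubbed out to keep the sketch self-contained.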

@article{liu2025_2406.01145,
  title={Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering},
  author={Guangyi Liu and Yongqi Zhang and Yong Li and Quanming Yao},
  journal={arXiv preprint arXiv:2406.01145},
  year={2025}
}