
Mitigating LLM Hallucinations with Knowledge Graphs: A Case Study

Abstract

High-stakes domains like cyber operations need responsible and trustworthy AI methods. While large language models (LLMs) are becoming increasingly popular in these domains, they still suffer from hallucinations. This paper presents lessons learned from a case study with LinkQ, an open-source natural language interface that was developed to combat hallucinations by forcing an LLM to query a knowledge graph (KG) for ground-truth data during question-answering (QA). We conduct a quantitative evaluation of LinkQ using a well-known KGQA dataset, showing that the system outperforms GPT-4 but still struggles with certain question categories, suggesting that alternative query construction strategies will need to be investigated in future LLM querying systems. We also discuss a qualitative study of LinkQ with two domain experts using a real-world cybersecurity KG, outlining these experts' feedback, suggestions, perceived limitations, and future opportunities for systems like LinkQ.
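
To make the approach concrete, below is a minimal sketch of the KG-grounded QA loop the abstract describes: the LLM drafts a structured query, the query is executed against the KG, and the final answer is constrained to the retrieved facts. This is not LinkQ's actual implementation; the Wikidata SPARQL endpoint and the placeholder llm() callable are assumptions for illustration only.

    import requests

    # Assumed public SPARQL endpoint; swap in your own KG endpoint.
    WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

    def llm(prompt: str) -> str:
        """Placeholder for an LLM call (e.g., GPT-4); wire in your own client."""
        raise NotImplementedError("plug in an LLM client here")

    def answer_with_kg(question: str) -> str:
        # 1. Ask the LLM to translate the question into a SPARQL query.
        sparql = llm(
            "Write a SPARQL query over Wikidata that answers the question.\n"
            f"Question: {question}\nReturn only the query."
        )
        # 2. Execute the query against the KG so the answer is grounded
        #    in retrieved facts rather than the model's parametric memory.
        resp = requests.get(
            WIKIDATA_ENDPOINT,
            params={"query": sparql, "format": "json"},
            headers={"User-Agent": "kgqa-sketch/0.1"},
            timeout=30,
        )
        resp.raise_for_status()
        bindings = resp.json()["results"]["bindings"]
        # 3. Have the LLM summarize ONLY the returned query results.
        return llm(
            "Answer the question using only these SPARQL results.\n"
            f"Question: {question}\nResults: {bindings}"
        )

The key design point this sketch illustrates is that the LLM never answers directly: every response must pass through the KG query step, which is what mitigates hallucination.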

@article{li2025_2504.12422,
  title={Mitigating LLM Hallucinations with Knowledge Graphs: A Case Study},
  author={Harry Li and Gabriel Appleby and Kenneth Alperin and Steven R Gomez and Ashley Suh},
  journal={arXiv preprint arXiv:2504.12422},
  year={2025}
}