
Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models

Xin Sky Li
Weize Chen
Qizhi Chu
Haopeng Li
Zhaojun Sun
Ran Li
Chen Qian
Yiwei Wei
Zhiyuan Liu
Chuan Shi
Maosong Sun
Cheng Yang
Abstract

The need to analyze graphs is ubiquitous across various fields, from social networks to biological research and recommendation systems. Therefore, enabling large language models (LLMs) to process graphs is an important step toward more advanced general intelligence. However, current LLM benchmarks on graph analysis require models to reason directly over prompts describing graph topology, and are thus limited to small graphs with only a few dozen nodes. In contrast, human experts typically write programs based on popular libraries to solve such tasks, and can thus handle graphs of different scales. A question therefore naturally arises: can LLMs analyze graphs like professionals? In this paper, we introduce ProGraph, a manually crafted benchmark containing 3 categories of graph tasks. The benchmark expects solutions based on programming instead of direct reasoning over raw inputs. Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy. To bridge this gap, we propose the LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries. By augmenting closed-source LLMs with document retrieval and fine-tuning open-source ones on the code, we demonstrate 11-32% absolute improvements in accuracy. Our results underscore that the capabilities of LLMs in handling structured data remain under-explored, and show the effectiveness of LLM4Graph in enhancing LLMs' proficiency in graph analysis. The benchmark, datasets and enhanced open-source models are available at this https URL.
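To illustrate the contrast the abstract draws, here is a minimal sketch (not taken from the paper) of the kind of library-based solution a professional would write. It uses NetworkX as a stand-in for one of the popular graph libraries; the specific tasks, graph, and library choice here are illustrative assumptions, not the benchmark's actual contents.

```python
# Illustrative only: a program-based approach to graph analysis.
# Unlike prompting an LLM to reason over an edge list in text, this
# code scales to graphs far beyond a few dozen nodes.
import networkx as nx

# Build a small undirected example graph from an edge list.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3)])

# Two typical analysis tasks, solved by library calls rather than
# step-by-step textual reasoning over the topology.
path = nx.shortest_path(G, source=0, target=3)
num_components = nx.number_connected_components(G)
print(path)            # [0, 2, 3]
print(num_components)  # 1
```

The point is that the program is independent of graph size: the same few lines work whether the input has four nodes or four million, which is precisely the capability ProGraph tests for.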

@article{li2025_2409.19667,
  title={Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models},
  author={Xin Sky Li and Weize Chen and Qizhi Chu and Haopeng Li and Zhaojun Sun and Ran Li and Chen Qian and Yiwei Wei and Zhiyuan Liu and Chuan Shi and Maosong Sun and Cheng Yang},
  journal={arXiv preprint arXiv:2409.19667},
  year={2025}
}