Deep Semantic Graph Learning via LLM based Node Enhancement

11 February 2025
Chuanqi Shi, Yiyi Tao, Hang Zhang, Lun Wang, Shaoshuai Du, Yixian Shen, Yanxin Shen
Abstract

Graph learning has attracted significant attention due to its widespread real-world applications. Current mainstream approaches rely on text node features and obtain initial node embeddings through shallow embedding learning with GNNs, which limits their ability to capture deep textual semantics. Recent advances in Large Language Models (LLMs) have demonstrated superior capabilities in understanding text semantics and have transformed traditional text feature processing. This paper proposes a novel framework that combines a Graph Transformer architecture with LLM-enhanced node features. Specifically, we leverage LLMs to generate rich semantic representations of text nodes, which are then processed by a multi-head self-attention mechanism in the Graph Transformer to capture both local and global graph structural information. Our model uses the Transformer's attention mechanism to dynamically aggregate neighborhood information while preserving the semantic richness provided by the LLM embeddings. Experimental results demonstrate that the LLM-enhanced node features significantly improve the performance of graph learning models on node classification tasks. This approach shows promising results across multiple graph learning tasks, offering a practical direction for combining graph networks with language models.
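
The pipeline described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the LLM text embeddings are precomputed (random tensors stand in for them here) and models one Graph Transformer layer as masked multi-head self-attention over each node's neighborhood (self-loops included), followed by a linear head for node classification. The GraphTransformerLayer class, the 384-dimensional embedding size, and the toy path graph are illustrative assumptions.

import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """One Transformer block whose attention is restricted to graph neighborhoods."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] LLM-derived node embeddings; adj: [N, N] boolean adjacency.
        # Mask out non-neighbors (True = blocked); self-loops keep every row attendable.
        block = ~(adj | torch.eye(adj.size(0), dtype=torch.bool, device=adj.device))
        h, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0), attn_mask=block)
        x = self.norm1(x + h.squeeze(0))      # residual connection + layer norm
        return self.norm2(x + self.ffn(x))    # position-wise feed-forward block

# Toy usage: 5 nodes on a path graph, 3 classes, hypothetical 384-dim LLM embeddings.
N, dim, num_classes = 5, 384, 3
x = torch.randn(N, dim)                       # stand-in for precomputed LLM embeddings
adj = torch.zeros(N, N, dtype=torch.bool)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = True

layer = GraphTransformerLayer(dim)
classifier = nn.Linear(dim, num_classes)      # node-classification head
logits = classifier(layer(x, adj))            # shape: [N, num_classes]
print(logits.shape)

In the paper's full setting, the random features would be replaced by actual LLM embeddings of each node's text, several such layers would be stacked before the classifier, and training would proceed with standard cross-entropy over labeled nodes.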

View on arXiv: https://arxiv.org/abs/2502.07982
@article{shi2025_2502.07982,
  title={Deep Semantic Graph Learning via LLM based Node Enhancement},
  author={Chuanqi Shi and Yiyi Tao and Hang Zhang and Lun Wang and Shaoshuai Du and Yixian Shen and Yanxin Shen},
  journal={arXiv preprint arXiv:2502.07982},
  year={2025}
}