Code Language Models (codeLMs) and Graph Neural Networks (GNNs) are widely used in code vulnerability detection. However, GNNs typically aggregate information only from adjacent nodes, which limits how structural information propagates across layers. While codeLMs can supplement GNNs with semantic information, existing integration methods leave their collaborative potential underexplored. To address these challenges, we propose Vul-LMGNNs, which integrate pre-trained codeLMs with GNNs to enable cross-layer propagation of both semantic and structural information. Vul-LMGNNs leverage Code Property Graphs (CPGs) to capture syntax, control flow, and data dependencies, using gated GNNs to extract structural features. An online knowledge distillation (KD) mechanism allows a student GNN to acquire structural knowledge from a trained counterpart through alternating training. Additionally, an "implicit-explicit" joint training framework uses codeLMs to initialize node embeddings and implicitly propagate code semantics; in the explicit phase, it performs late fusion of the codeLM and GNN predictions via linear interpolation. Evaluations on real-world vulnerability datasets show that Vul-LMGNNs outperform 17 state-of-the-art approaches. Source code is available at: this https URL.
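To make the two fusion mechanisms above concrete, here is a minimal PyTorch sketch of an online-distillation loss with alternating student/teacher roles and a linear-interpolation late fusion of codeLM and GNN predictions. All function names, the temperature, and the weights alpha and beta are illustrative assumptions for exposition, not the paper's actual implementation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def online_kd_loss(student_logits, teacher_logits, labels,
                   temperature=2.0, beta=0.5):
    """Online KD step: the student fits the hard labels while matching
    the softened predictions of its counterpart (teacher is detached,
    so gradients flow only through the student)."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1.0 - beta) * ce + beta * kd

def late_fusion(lm_logits, gnn_logits, alpha=0.5):
    """Explicit phase: linearly interpolate codeLM and GNN logits."""
    return alpha * lm_logits + (1.0 - alpha) * gnn_logits

# Toy usage with random logits: batch of 4 samples, 2 classes.
# In alternating training, the two GNNs would swap the student and
# teacher roles between steps.
lm_logits = torch.randn(4, 2)
gnn_student = torch.randn(4, 2, requires_grad=True)
gnn_teacher = torch.randn(4, 2)
labels = torch.randint(0, 2, (4,))

loss = online_kd_loss(gnn_student, gnn_teacher, labels)
fused = late_fusion(lm_logits, gnn_student)
print(loss.item(), fused.shape)
```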
@article{liu2025_2404.14719,
  title={Vul-LMGNNs: Fusing language models and online-distilled graph neural networks for code vulnerability detection},
  author={Ruitong Liu and Yanbin Wang and Haitao Xu and Jianguo Sun and Fan Zhang and Peiyue Li and Zhenhao Guo},
  journal={arXiv preprint arXiv:2404.14719},
  year={2025}
}