GraphPrompter: Multi-stage Adaptive Prompt Optimization for Graph In-Context Learning

4 May 2025
Rui Lv
Zaixi Zhang
Kai Zhang
Qi Liu
Weibo Gao
Jiawei Liu
Jiaxia Yan
Linan Yue
Fangzhou Yao
Abstract

Graph In-Context Learning, with the ability to adapt pre-trained graph models to novel and diverse downstream graphs without updating any parameters, has gained much attention in the community. The key to graph in-context learning is to make predictions on downstream graphs conditioned on chosen prompt examples. Existing methods randomly select subgraphs or edges as prompts, leading to noisy graph prompts and inferior model performance. Additionally, due to the gap between pre-training and testing graphs, the in-context learning ability also deteriorates significantly when the number of classes in the testing graphs is much greater than that in the pre-training graphs. To tackle these challenges, we develop GraphPrompter, a multi-stage adaptive prompt optimization method that optimizes the entire process of generating, selecting, and using graph prompts for better in-context learning capabilities. First, the Prompt Generator introduces a reconstruction layer to highlight the most informative edges and reduce irrelevant noise during graph prompt construction. In the selection stage, the Prompt Selector employs the k-nearest neighbors algorithm and pre-trained selection layers to dynamically choose appropriate samples and minimize the influence of irrelevant prompts. Finally, we leverage a Prompt Augmenter with a cache replacement strategy to enhance the generalization capability of the pre-trained model on new datasets. Extensive experiments show that GraphPrompter effectively enhances the in-context learning ability of graph models. On average across all settings, our approach surpasses the state-of-the-art baselines by over 8%. Our code is released at this https URL.
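
As a rough illustration of the selection stage, the sketch below implements plain k-nearest-neighbor prompt retrieval over precomputed graph embeddings. The function name, cosine-similarity metric, and toy data are assumptions made for illustration only; the paper's Prompt Selector additionally applies pre-trained selection layers, which this sketch omits.

    import numpy as np

    def select_prompts(query_emb: np.ndarray, pool_embs: np.ndarray, k: int = 5) -> np.ndarray:
        """Return indices of the k candidate prompts closest to the query.

        Illustrative k-NN retrieval by cosine similarity; not the paper's
        full Prompt Selector, which also uses pre-trained selection layers.
        """
        # Normalize so the dot product equals cosine similarity.
        q = query_emb / (np.linalg.norm(query_emb) + 1e-12)
        p = pool_embs / (np.linalg.norm(pool_embs, axis=1, keepdims=True) + 1e-12)
        sims = p @ q
        # Indices of the k most similar candidates, best first.
        return np.argsort(-sims)[:k]

    # Toy usage: a 16-dim query graph embedding against 100 candidate prompts.
    rng = np.random.default_rng(0)
    pool = rng.normal(size=(100, 16))
    query = rng.normal(size=16)
    print(select_prompts(query, pool, k=5))

Restricting the prompt set to the nearest neighbors of the query graph is what limits the influence of irrelevant prompts that random selection would otherwise introduce.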

@article{lv2025_2505.02027,
  title={GraphPrompter: Multi-stage Adaptive Prompt Optimization for Graph In-Context Learning},
  author={Rui Lv and Zaixi Zhang and Kai Zhang and Qi Liu and Weibo Gao and Jiawei Liu and Jiaxia Yan and Linan Yue and Fangzhou Yao},
  journal={arXiv preprint arXiv:2505.02027},
  year={2025}
}