arXiv:2007.06878
Attentive Graph Neural Networks for Few-Shot Learning

14 July 2020
Hao Cheng
Joey Tianyi Zhou
Wee Peng Tay
B. Wen
Abstract

Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks. Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep, which limits scalability. In this work, we propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism, i.e., node self-attention, neighborhood attention, and layer memory attention. We explain, with theoretical analysis and illustrations, why the proposed attentive modules improve GNNs for few-shot learning. Extensive experiments show that the proposed Attentive GNN model achieves promising results compared to state-of-the-art GNN- and CNN-based few-shot learning methods on the mini-ImageNet and tiered-ImageNet benchmarks, with ConvNet-4 and ResNet-based backbones in both inductive and transductive settings. The code will be made publicly available.
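The abstract only names the three attention modules without detailing them. As a rough illustration of what such a triple-attention GNN layer could look like, here is a minimal NumPy sketch; the function names, the sigmoid gating for node self-attention, the similarity-based neighborhood softmax, and the memory-blending scheme are all our assumptions, not the authors' actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_attention(h, adj):
    """Softmax-normalised edge weights from feature similarity (assumed form)."""
    scores = h @ h.T                        # (N, N) pairwise similarity
    scores = np.where(adj > 0, scores, -1e9)  # mask non-edges
    return softmax(scores, axis=1)          # rows sum to 1 over neighbours

def attentive_gnn_layer(h, adj, w, memory=(), alpha=()):
    """One hypothetical Attentive GNN layer combining the three attentions.

    h:      (N, d) node features
    adj:    (N, N) adjacency with self-loops
    w:      (d, d) projection for the self-attention gate
    memory: outputs of earlier layers, for layer memory attention
    alpha:  unnormalised mixing scores over `memory`
    """
    # Node self-attention: gate each node's own feature channels.
    gate = 1.0 / (1.0 + np.exp(-(h @ w)))   # sigmoid gate, (N, d)
    h_self = gate * h
    # Neighborhood attention: attention-weighted aggregation of neighbours.
    attn = neighborhood_attention(h_self, adj)
    h_new = attn @ h_self
    # Layer memory attention: blend in earlier layers' outputs
    # (one plausible way to counter over-smoothing in deep GNNs).
    if memory:
        mix = softmax(np.asarray(alpha, dtype=float))
        h_new = h_new + sum(a * m for a, m in zip(mix, memory))
    return h_new
```

For example, on a 4-node path graph with self-loops, each row of `neighborhood_attention(h, adj)` sums to 1, and the layer preserves the `(N, d)` feature shape, so such layers could be stacked just like standard GNN layers.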
