ABKD: Graph Neural Network Compression with Attention-Based Knowledge Distillation

24 October 2023
Anshul Ahluwalia, Rohit Das, Payman Behnam, Alind Khare, Pan Li, Alexey Tumanov

Papers citing "ABKD: Graph Neural Network Compression with Attention-Based Knowledge Distillation"

2 papers shown:

1. Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
   Mingi Ji, Byeongho Heo, Sungrae Park
   05 Feb 2021

2. Distilling Knowledge from Graph Convolutional Networks
   Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang
   23 Mar 2020