ResearchTrend.AI
Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation

20 July 2024
Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Stan Z. Li
arXiv: 2407.14768

Papers citing "Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation"

2 papers shown

1. Iterative Graph Self-Distillation
   Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing (SSL)
   23 Oct 2020

2. Distilling Knowledge from Graph Convolutional Networks
   Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang
   23 Mar 2020