ResearchTrend.AI
Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification

26 June 2022
Jun-Teng Yang, Sheng-Che Kao, S. Huang
arXiv: 2206.12788

Papers citing "Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification"

2 citing papers:

Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
Mingi Ji, Byeongho Heo, Sungrae Park
05 Feb 2021
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
VLM
28 Jul 2020