Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification
arXiv: 2206.12788
26 June 2022
Jun-Teng Yang, Sheng-Che Kao, S. Huang
Papers citing "Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification" (2 of 2 papers shown)
Title: Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
Authors: Mingi Ji, Byeongho Heo, Sungrae Park
Date: 05 Feb 2021

Title: Big Bird: Transformers for Longer Sequences
Authors: Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
Tags: VLM
Date: 28 Jul 2020