arXiv:2011.00593
MixKD: Towards Efficient Distillation of Large-scale Language Models
International Conference on Learning Representations (ICLR), 2021
1 November 2020
Kevin J. Liang, Weituo Hao, Dinghan Shen, Jiuxiang Gu, Weizhu Chen, Changyou Chen, Lawrence Carin