HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression
Chenhe Dong, Yaliang Li, Ying Shen, Minghui Qiu
arXiv:2110.08551 · 16 October 2021 · VLM

Papers citing "HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression"

4 / 4 papers shown

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ · 12 Sep 2019

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
MILM · 22 Aug 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017

Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil
06 Mar 2017