Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
arXiv:2110.15094

27 October 2021
Gongfan Fang, Yifan Bao, Jie Song, Xinchao Wang, Donglin Xie, Chengchao Shen, Mingli Song

Papers citing "Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data"

10 of 10 citing papers shown

Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, H. Li
19 Dec 2024

MUST: A Multilingual Student-Teacher Learning approach for low-resource speech recognition
Muhammad Umar Farooq, Rehan Ahmad, Thomas Hain
29 Oct 2023

Weight Averaging Improves Knowledge Distillation under Domain Shift [MoMe]
Valeriy Berezovskiy, Nikita Morozov
20 Sep 2023

f-Divergence Minimization for Sequence-Level Knowledge Distillation
Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou
27 Jul 2023

Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation
Rehan Ahmad, Md. Asif Jalal, Muhammad Umar Farooq, A. Ollerenshaw, Thomas Hain
01 Mar 2023

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

IDEAL: Query-Efficient Data-Free Learning from Black-box Models
Jie M. Zhang, Chen Chen, Lingjuan Lyu
23 May 2022

Distilling Knowledge from Graph Convolutional Networks
Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang
23 Mar 2020

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018

Image-to-Image Translation with Conditional Adversarial Networks [SSeg]
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
21 Nov 2016