FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoi-Rim Kim
1 July 2022 · arXiv: 2207.00555
Papers citing "FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning" (20 of 20 papers shown):
1. Convex Distillation: Efficient Compression of Deep Networks via Convex Optimization (Prateek Varshney, Mert Pilanci; 09 Oct 2024)
2. One-pass Multiple Conformer and Foundation Speech Systems Compression and Quantization Using An All-in-one Neural Model (Zhaoqing Li, Haoning Xu, Tianzi Wang, Shoukang Hu, Zengrui Jin, Shujie Hu, Jiajun Deng, Mingyu Cui, Mengzhe Geng, Xunying Liu; 14 Jun 2024)
3. AdaPTwin: Low-Cost Adaptive Compression of Product Twins in Transformers (Emil Biju, Anirudh Sriram, Mert Pilanci; 13 Jun 2024)
4. Sustainable self-supervised learning for speech representations (Luis Lugo, Valentin Vielzeuf; 11 Jun 2024)
5. DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models (T. Lin, Hung-yi Lee, Hao Tang; 08 Jun 2024)
6. AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting (Shreyan Ganguly, Roshan Nayak, Rakshith Rao, Ujan Deb, AP Prathosh; 11 May 2024)
7. SKILL: Similarity-aware Knowledge distILLation for Speech Self-Supervised Learning (Luca Zampierin, G. B. Hacene, Bac Nguyen, Mirco Ravanelli; 26 Feb 2024)
8. STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models (Kangwook Jang, Sungnyun Kim, Hoi-Rim Kim; 14 Dec 2023)
9. USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models (Shaojin Ding, David Qiu, David Rim, Yanzhang He, Oleg Rybakov, ..., Tara N. Sainath, Zhonglin Han, Jian Li, Amir Yazdanbakhsh, Shivani Agrawal; 13 Dec 2023)
10. CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders (Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung; 14 Sep 2023)
11. Task-Agnostic Structured Pruning of Speech Representation Models (Haoyu Wang, Siyuan Wang, Weiqiang Zhang, Hongbin Suo, Yulong Wan; 02 Jun 2023)
12. DistilXLSR: A Light Weight Cross-Lingual Speech Representation Model (Haoyu Wang, Siyuan Wang, Weiqiang Zhang, Jinfeng Bai; 02 Jun 2023)
13. Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation (Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoi-Rim Kim; 19 May 2023)
14. DistillW2V2: A Small and Streaming Wav2vec 2.0 Based ASR Model (Yanzhe Fu, Yueteng Kang, Songjun Cao, Long Ma; 16 Mar 2023)
15. Lightweight feature encoder for wake-up word detection based on self-supervised speech representation (Hyungjun Lim, Younggwan Kim, Ki-Woong Yeom, E. Seo, Hoodong Lee, Stanley Jungkyu Choi, Honglak Lee; 14 Mar 2023)
16. Fine-tuning Strategies for Faster Inference using Speech Self-Supervised Models: A Comparative Study (Salah Zaiem, Robin Algayres, Titouan Parcollet, S. Essid, Mirco Ravanelli; 12 Mar 2023)
17. Compressing Transformer-based self-supervised models for speech processing (Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang; 17 Nov 2022)
18. Match to Win: Analysing Sequences Lengths for Efficient Self-supervised Learning in Speech and Audio (Yan Gao, Javier Fernandez-Marques, Titouan Parcollet, Pedro Porto Buarque de Gusmão, Nicholas D. Lane; 30 Sep 2022)
19. Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models (J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim; 05 Nov 2021)
20. Exploring wav2vec 2.0 on speaker verification and language identification (Zhiyun Fan, Meng Li, Shiyu Zhou, Bo Xu; 11 Dec 2020)