arXiv: 2302.12757
Ensemble knowledge distillation of self-supervised speech models
24 February 2023
Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsung-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-yi Lee
Papers citing "Ensemble knowledge distillation of self-supervised speech models" (5 of 5 papers shown)
How Redundant Is the Transformer Stack in Speech Representation Models?
Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen
20 Jan 2025
MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events
Xiaoyu Yang, Qiujia Li, Chao Zhang, P. Woodland
25 Sep 2024
MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU
Yan Li, So-Eon Kim, Seong-Bae Park, S. Han
15 Aug 2024
On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation
Gene-Ping Yang, Yue Gu, Qingming Tang, Dongsu Du, Yuzong Liu
06 Jul 2023
Self-Supervised Speech Representation Learning: A Review
Abdel-rahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob Drachmann Havtorn, Joakim Edin, ..., Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe
21 May 2022