Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition

20 March 2023
Xiaoyu Yang
Qiujia Li
C. Zhang
P. Woodland
ArXiv · PDF · HTML
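
The paper's topic is distilling knowledge from several foundation-model teachers into a single end-to-end ASR student. As a rough, hedged illustration of that general idea only (not the authors' actual architecture, alignment scheme, or loss), the Python/PyTorch sketch below combines frame-level soft targets from multiple hypothetical teachers into one weighted KL-divergence term; every module, function, and parameter name here is an assumption for illustration.

# Minimal multi-teacher knowledge-distillation sketch (illustrative only,
# not the paper's exact method). Assumes frame-level teacher posteriors are
# already projected to the student's vocabulary and frame rate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentASR(nn.Module):
    """Toy student encoder producing per-frame token logits."""
    def __init__(self, feat_dim=80, hidden=256, vocab=500):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, vocab)

    def forward(self, feats):              # feats: (B, T, feat_dim)
        out, _ = self.rnn(feats)
        return self.proj(out)              # logits: (B, T, vocab)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, T=2.0):
    """Weighted sum of KL(teacher || student) soft-target terms.
    `weights` are assumed per-teacher interpolation coefficients."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    kd = 0.0
    for w, t_logits in zip(weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / T, dim=-1)
        # batchmean averages over the batch dimension; T*T is the usual
        # temperature scaling so gradients keep a comparable magnitude.
        kd = kd + w * F.kl_div(log_p_student, p_teacher,
                               reduction="batchmean") * T * T
    return kd

if __name__ == "__main__":
    B, T_frames, feat_dim, vocab = 4, 120, 80, 500
    feats = torch.randn(B, T_frames, feat_dim)
    student = StudentASR(feat_dim=feat_dim, vocab=vocab)
    s_logits = student(feats)
    # Stand-in posteriors from two foundation-model teachers; a real system
    # would need an explicit mapping between teacher and student label sets.
    teachers = [torch.randn(B, T_frames, vocab), torch.randn(B, T_frames, vocab)]
    loss = multi_teacher_kd_loss(s_logits, teachers, weights=[0.5, 0.5])
    loss.backward()
    print(f"multi-teacher KD loss: {loss.item():.4f}")

In practice this distillation term would be interpolated with a supervised ASR objective (e.g. CTC or attention cross-entropy); the equal teacher weights above are an arbitrary choice for the sketch.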

Papers citing "Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition" (5 / 5 papers shown)

MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events
Xiaoyu Yang, Qiujia Li, Chao Zhang, P. Woodland · 25 Sep 2024

Potentials of the Metaverse for Robotized Applications in Industry 4.0 and Industry 5.0
E. Kaigom · 31 Mar 2024

When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, C. L. P. Chen, Yaochu Jin · AIFin, AI4CE · 27 Jun 2023

Predicting Multi-Codebook Vector Quantization Indexes for Knowledge Distillation
Liyong Guo, Xiaoyu Yang, Quandong Wang, Yuxiang Kong, Zengwei Yao, ..., Wei Kang, Long Lin, Mingshuang Luo, Piotr Żelasko, Daniel Povey · VLM · 31 Oct 2022

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell · UQCV, BDL · 05 Dec 2016