Distilling Knowledge from Ensembles of Acoustic Models for Joint CTC-Attention End-to-End Speech Recognition

19 May 2020
Yan Gao, Titouan Parcollet, Nicholas D. Lane
arXiv:2005.09310
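
The paper's title describes distilling knowledge from an ensemble of acoustic models into a student trained with a joint CTC-attention objective. As a rough, hedged illustration only (not the paper's exact formulation), the sketch below combines a joint CTC/attention loss on ground-truth labels with a KL term against averaged teacher posteriors; the function name, tensor shapes, and weights (ctc_weight, kd_weight, temperature) are all illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch API), not the cited paper's exact method:
# a student trained with a joint CTC-attention loss plus a distillation term
# matching the averaged soft labels of an ensemble of teacher models.
import torch
import torch.nn.functional as F


def ensemble_kd_loss(att_logits, ctc_log_probs, teacher_logits_list,
                     targets, input_lens, target_lens,
                     ctc_weight=0.3, kd_weight=0.5, temperature=2.0):
    """att_logits: (batch, seq, vocab) from the student's attention decoder.
    ctc_log_probs: (time, batch, vocab) from the student's CTC head.
    teacher_logits_list: decoder logits from each teacher, shaped like att_logits.
    The loss weights and the simple posterior averaging are illustrative choices."""
    # Merge the ensemble by averaging temperature-softened teacher posteriors.
    teacher_probs = torch.stack(
        [F.softmax(t.detach() / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # Joint CTC-attention objective on the ground-truth transcripts.
    ctc = F.ctc_loss(ctc_log_probs, targets, input_lens, target_lens)
    att = F.cross_entropy(att_logits.transpose(1, 2), targets)
    supervised = ctc_weight * ctc + (1.0 - ctc_weight) * att

    # Distillation term: KL between student and averaged teacher distributions.
    kd = F.kl_div(
        F.log_softmax(att_logits / temperature, dim=-1),
        teacher_probs,
        reduction="batchmean",
    ) * temperature ** 2

    return (1.0 - kd_weight) * supervised + kd_weight * kd
```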

Papers citing "Distilling Knowledge from Ensembles of Acoustic Models for Joint CTC-Attention End-to-End Speech Recognition"

4 / 4 papers shown

Learning When to Trust Which Teacher for Weakly Supervised ASR
Aakriti Agrawal, Milind Rao, Anit Kumar Sahu, Gopinath Chennupati, A. Stolcke
21 Jun 2023

Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation
Rehan Ahmad, Md. Asif Jalal, Muhammad Umar Farooq, A. Ollerenshaw, Thomas Hain
01 Mar 2023

An Experimental Study on Private Aggregation of Teacher Ensemble Learning for End-to-End Speech Recognition
Chao-Han Huck Yang, I-Fan Chen, A. Stolcke, Sabato Marco Siniscalchi, Chin-Hui Lee
11 Oct 2022

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018