arXiv:1811.04531
Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition
12 November 2018
Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda

Papers citing "Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition" (13 of 13 papers shown):

An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister
16 Sep 2024

Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation
Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo
23 Apr 2024

Unraveling Key Factors of Knowledge Distillation
Jingxuan Wei, Linzhuang Sun, Xu Tan, Bihui Yu, Ruifeng Guo
14 Dec 2023

Knowledge Transfer and Distillation from Autoregressive to Non-Autoregressive Speech Recognition
Xun Gong, Zhikai Zhou, Y. Qian
15 Jul 2022

Sequence-level self-learning with multiple hypotheses
K. Kumatani, Dimitrios Dimitriadis, Yashesh Gaur, R. Gmyr, Sefik Emre Eskimez, Jinyu Li, Michael Zeng
10 Dec 2021

Temporal Knowledge Distillation for On-device Audio Classification
Kwanghee Choi, Martin Kersner, Jacob Morton, Buru Chang
27 Oct 2021

Deploying a BERT-based Query-Title Relevance Classifier in a Production System: a View from the Trenches
Leonard Dahlmann, Tomer Lancewicki
23 Aug 2021

Exploiting Large-scale Teacher-Student Training for On-device Acoustic Models
Jing Liu, R. Swaminathan, S. Parthasarathi, Chunchuan Lyu, Athanasios Mouchtaris, Siegfried Kunzmann
11 Jun 2021

Alignment Knowledge Distillation for Online Streaming Attention-based Speech Recognition
H. Inaguma, Tatsuya Kawahara
28 Feb 2021

Hierarchical Transformer-based Large-Context End-to-end ASR with Large-Context Knowledge Distillation
Ryo Masumura, Naoki Makishima, Mana Ihori, Akihiko Takashima, Tomohiro Tanaka, Shota Orihashi
16 Feb 2021

Knowledge Distillation in Deep Learning and its Applications
Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad
17 Jul 2020

Large scale weakly and semi-supervised learning for low-resource video ASR
Kritika Singh, Vimal Manohar, Alex Xiao, Sergey Edunov, Ross B. Girshick, Vitaliy Liptchinsky, Christian Fuegen, Yatharth Saraf, Geoffrey Zweig, Abdel-rahman Mohamed
16 May 2020

Domain Adaptation via Teacher-Student Learning for End-to-End Speech Recognition
Zhong Meng, Jinyu Li, Yashesh Gaur, Y. Gong
06 Jan 2020