Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition

11 May 2020
Ye Bai, Jiangyan Yi, J. Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang

Papers citing "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition"

4 papers shown
Dynamic Alignment Mask CTC: Improved Mask-CTC with Aligned Cross Entropy
Xulong Zhang, Haobin Tang, Jianzong Wang, Ning Cheng, Jian Luo, Jing Xiao
14 Mar 2023
Towards Personalization of CTC Speech Recognition Models with Contextual Adapters and Adaptive Boosting
Saket Dingliwal, Monica Sunkara, S. Bodapati, S. Ronanki, Jeffrey J. Farris, Katrin Kirchhoff
18 Oct 2022
Exploring Non-Autoregressive End-To-End Neural Modeling For English Mispronunciation Detection And Diagnosis
Hsin-Wei Wang, Bi-Cheng Yan, Hsuan-Sheng Chiu, Yung-Chang Hsu, Berlin Chen
01 Nov 2021
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation
Jang-Hyun Kim, Simyung Chang, Nojun Kwak
25 Jun 2021