arXiv: 1910.12418 (v2, latest)
Unsupervised pre-training for sequence to sequence speech recognition
28 October 2019
Zhiyun Fan, Shiyu Zhou, Bo Xu
Tags: SSL, AI4TS
Papers citing "Unsupervised pre-training for sequence to sequence speech recognition" (9 papers)
- "Sequence-to-sequence models in peer-to-peer learning: A practical application" (Robert Šajina, Ivo Ipšić; 02 May 2024)
- "A Complementary Joint Training Approach Using Unpaired Speech and Text for Low-Resource Automatic Speech Recognition" (Ye Du, Jie Zhang, Qiu-shi Zhu, Lirong Dai, Ming Wu, Xin Fang, Zhouwang Yang; 05 Apr 2022)
- "Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data" (Junyi Ao, Zi-Hua Zhang, Long Zhou, Shujie Liu, Haizhou Li, Tom Ko, Lirong Dai, Jinyu Li, Yao Qian, Furu Wei; 31 Mar 2022) [SSL]
- "Pretrained Language Models for Text Generation: A Survey" (Junyi Li, Tianyi Tang, Wayne Xin Zhao, J. Nie, Ji-Rong Wen; 14 Jan 2022) [AI4CE]
- "Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation" (Jian Luo, Jianzong Wang, Ning Cheng, Jing Xiao; 09 Jul 2021) [SSL]
- "Pretrained Language Models for Text Generation: A Survey" (Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen; 21 May 2021) [LM&MA, VLM, SyDa]
- "Non-autoregressive Transformer-based End-to-end ASR using BERT" (Fu-Hao Yu, Kuan-Yu Chen; 10 Apr 2021)
- "Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition" (Ye Bai, Jiangyan Yi, J. Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang; 11 May 2020) [RALM]
- "Listen and Fill in the Missing Letters: Non-Autoregressive Transformer for Speech Recognition" (Nanxin Chen, Shinji Watanabe, Jesús Villalba, Najim Dehak; 10 Nov 2019)