ResearchTrend.AI
XLST: Cross-lingual Self-training to Learn Multilingual Representation for Low Resource Speech Recognition (arXiv:2103.08207)

15 March 2021
Zi-qiang Zhang, Yan Song, Ming Wu, Xin Fang, Lirong Dai

Papers citing "XLST: Cross-lingual Self-training to Learn Multilingual Representation for Low Resource Speech Recognition"

4 / 4 papers shown
Language-universal phonetic encoder for low-resource speech recognition
Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang
19 May 2023
Language-Universal Phonetic Representation in Multilingual Speech Pretraining for Low-Resource Speech Recognition
Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang
19 May 2023
Efficient Utilization of Large Pre-Trained Models for Low Resource ASR
Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney
26 Oct 2022
How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications
Juan Pablo Zuluaga, Amrutha Prasad, Iuliia Nigmatulina, Seyyed Saeed Sarfjoo, P. Motlíček, Matthias Kleinert, H. Helmke, Oliver Ohneiser, Qingran Zhan
31 Mar 2022