Cited By
MoLE: Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition
Yoohwan Kwon, Soo-Whan Chung
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 · 27 February 2023 · arXiv:2302.13750
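
Nearly every paper in the listing below builds on the mixture-of-experts (MoE) pattern named in MoLE's title: a router weighs several expert subnetworks for each input frame. As background only, here is a minimal, generic sketch of dense MoE routing in PyTorch; the class name, layer sizes, and per-language reading of the experts are illustrative assumptions, not MoLE's published architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseMoE(nn.Module):
    """Generic dense mixture-of-experts layer (illustrative sketch only)."""
    def __init__(self, dim: int, num_experts: int, hidden: int):
        super().__init__()
        # One small feed-forward expert per (hypothetical) language group.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        # Router scores each frame against every expert.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):
        # x: (batch, time, dim) acoustic features.
        weights = F.softmax(self.router(x), dim=-1)                   # (B, T, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        # Weighted sum of expert outputs per frame.
        return torch.einsum("bte,btde->btd", weights, outputs)

# Route two utterances of 80-dim filterbank frames through 4 experts.
layer = DenseMoE(dim=80, num_experts=4, hidden=256)
out = layer(torch.randn(2, 100, 80))
print(out.shape)  # torch.Size([2, 100, 80])

The layer is shape-preserving: each output frame is a router-weighted combination of the expert outputs, which is the dense (soft-routing) variant; sparse variants such as those in several papers below keep only the top-scoring experts per frame.
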
Papers citing "MoLE: Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition" (16 papers)

Mixture of LoRA Experts with Multi-Modal and Multi-Granularity LLM Generative Error Correction for Accented Speech Recognition
Bingshen Mu, Kun Wei, Pengcheng Guo, Lei Xie
IEEE Transactions on Audio, Speech, and Language Processing (TASLP), 2025 · 12 Jul 2025

Omni-Router: Sharing Routing Decisions in Sparse Mixture-of-Experts for Speech Recognition
Zijin Gu, Tatiana Likhomanenko, Navdeep Jaitly
08 Jul 2025

On-the-fly Routing for Zero-shot MoE Speaker Adaptation of Speech Foundation Models for Dysarthric Speech Recognition
Shujie Hu, Xurong Xie, Mengzhe Geng, Jiajun Deng, Huimeng Wang, ..., Chengxi Deng, Tianzi Wang, Mingyu Cui, Helen M. Meng, Xunying Liu
28 May 2025

BLR-MoE: Boosted Language-Routing Mixture of Experts for Domain-Robust Multilingual E2E ASR
Guodong Ma, Wenxuan Wang, Lifeng Zhou, Yuting Yang, Yuke Li, Binbin Du
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025 · 22 Jan 2025

Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts
Guorui Zheng, Xidong Wang, Juhao Liang, Nuo Chen, Yuping Zheng, Benyou Wang
International Conference on Learning Representations (ICLR), 2024 · 14 Oct 2024

Block Vecchia Approximation for Scalable and Efficient Gaussian Process Computations
Qilong Pan, Sameh Abdulah, M. Genton, Ying Sun
06 Oct 2024

Dynamic Language Group-Based MoE: Enhancing Code-Switching Speech Recognition with Hierarchical Routing
Hukai Huang, Shenghui Lu, Yahui Shan, He Qu, Wenhao Guan, Q. Hong, Lin Li
26 Jul 2024

SC-MoE: Switch Conformer Mixture of Experts for Unified Streaming and Non-streaming Code-Switching ASR
Shuaishuai Ye, Shunfei Chen, Xinhui Hu, Xinkang Xu
26 Jun 2024

SimSMoE: Solving Representational Collapse via Similarity Measure
Giang Do, Hung Le, T. Tran
22 Jun 2024

Adaptive Gating in Mixture-of-Experts based Language Models
Jiamin Li, Qiang Su, Yitao Yang, Yimin Jiang, Cong Wang, Hong-Yu Xu
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 · 11 Oct 2023

LAE-ST-MoE: Boosted Language-Aware Encoder Using Speech Translation Auxiliary Task for E2E Code-switching ASR
Guodong Ma, Wenxuan Wang, Yuke Li, Yuting Yang, Binbin Du, Haoran Fu
Automatic Speech Recognition and Understanding Workshop (ASRU), 2023 · 28 Sep 2023

Big model only for hard audios: Sample dependent Whisper model selection for efficient inferences
Hugo Malard, Salah Zaiem, Robin Algayres
22 Sep 2023

Language-Routing Mixture of Experts for Multilingual and Code-Switching Speech Recognition
Wenxuan Wang, Guodong Ma, Yuke Li, Binbin Du
Interspeech, 2023 · 12 Jul 2023

Confidence-based Ensembles of End-to-End Speech Recognition Models
Igor Gitman, Vitaly Lavrukhin, A. Laptev, Boris Ginsburg
Interspeech, 2023 · 27 Jun 2023

Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, Kenton W. Murray
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 · 23 May 2023

Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2021 · 05 Jun 2021