
Neural Language Model Pruning for Automatic Speech Recognition (arXiv:2310.03424)

5 October 2023
Leonardo Emili, Thiago Fraga-Silva, Ernest Pusateri, M. Nußbaum-Thom, Youssef Oualil

Papers citing "Neural Language Model Pruning for Automatic Speech Recognition" (7 of 7 shown)

  • USM RNN-T model weights binarization
    Oleg Rybakov, Dmitriy Serdyuk, Chengjian Zheng · MQ · 0 citations · 05 Jun 2024
  • Space-Efficient Representation of Entity-centric Query Language Models
    Christophe Van Gysel, Mirko Hannemann, Ernest Pusateri, Youssef Oualil, I. Oparin · 7 citations · 29 Jun 2022
  • Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
    Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein · FedML · 56 citations · 29 Jan 2022
  • Pruning and Quantization for Deep Neural Network Acceleration: A Survey
    Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang · MQ · 673 citations · 24 Jan 2021
  • SEED: Self-supervised Distillation For Visual Representation
    Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu · SSL · 190 citations · 12 Jan 2021
  • Structured Pruning for Efficient ConvNets via Incremental Regularization
    Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu · 3DPC · 45 citations · 20 Nov 2018
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 2,888 citations · 15 Sep 2016