SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
F. Iandola, Albert Eaton Shaw, Ravi Krishna, Kurt Keutzer
arXiv: 2006.11316 · 19 June 2020
Papers citing "SqueezeBERT: What can computer vision teach NLP about efficient neural networks?" (22 of 22 papers shown)
A Comparative Analysis of Pretrained Language Models for Text-to-Speech
    M. G. Moya, Panagiota Karanasou, S. Karlapati, Bastian Schnell, Nicole Peinelt, Alexis Moinet, Thomas Drugman
    04 Sep 2023 · 26 / 3 / 0

Improving Small Language Models on PubMedQA via Generative Data Augmentation [LM&MA, MedIm]
    Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu
    12 May 2023 · 18 / 10 / 0

EdgeTran: Co-designing Transformers for Efficient Inference on Mobile Edge Platforms
    Shikhar Tuli, N. Jha
    24 Mar 2023 · 34 / 3 / 0

The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment [FedML]
    Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell
    13 Feb 2023 · 25 / 10 / 0

Efficient Methods for Natural Language Processing: A Survey
    Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
    31 Aug 2022 · 28 / 109 / 0

Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts
    Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang
    15 Apr 2022 · 11 / 5 / 0

pNLP-Mixer: an Efficient all-MLP Architecture for Language
    Francesco Fusco, Damian Pascual, Peter W. J. Staar, Diego Antognini
    09 Feb 2022 · 26 / 29 / 0

Can Model Compression Improve NLP Fairness
    Guangxuan Xu, Qingyuan Hu
    21 Jan 2022 · 23 / 26 / 0

EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation
    Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, Xiaodan Liang
    15 Sep 2021 · 16 / 28 / 0

Compute and Energy Consumption Trends in Deep Learning Inference
    Radosvet Desislavov, Fernando Martínez-Plumed, José Hernández-Orallo
    12 Sep 2021 · 10 / 112 / 0

Enhancing Natural Language Representation with Large-Scale Out-of-Domain Commonsense
    Wanyun Cui, Xingran Chen
    06 Sep 2021 · 22 / 6 / 0

NoiER: An Approach for Training more Reliable Fine-Tuned Downstream Task Models
    Myeongjun Jang, Thomas Lukasiewicz
    29 Aug 2021 · 19 / 4 / 0

AutoBERT-Zero: Evolving BERT Backbone from Scratch
    Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, Philip L. H. Yu, Xiaodan Liang, Xin Jiang, Zhenguo Li
    15 Jul 2021 · 19 / 37 / 0

Pre-trained Summarization Distillation
    Sam Shleifer, Alexander M. Rush
    24 Oct 2020 · 13 / 98 / 0

Fixing the train-test resolution discrepancy: FixEfficientNet [AAML]
    Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
    18 Mar 2020 · 181 / 110 / 0

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
    Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
    07 Feb 2020 · 221 / 197 / 0

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT [MQ]
    Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
    12 Sep 2019 · 225 / 574 / 0

The Woman Worked as a Babysitter: On Biases in Language Generation
    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
    03 Sep 2019 · 206 / 616 / 0

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM]
    Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
    20 Apr 2018 · 294 / 6,950 / 0

Aggregated Residual Transformations for Deep Neural Networks
    Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
    16 Nov 2016 · 288 / 10,214 / 0

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation [AIMat]
    Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
    26 Sep 2016 · 716 / 6,740 / 0

Convolutional Neural Networks for Sentence Classification [AILaw, VLM]
    Yoon Kim
    25 Aug 2014 · 250 / 13,360 / 0