S4: a High-sparsity, High-performance AI Accelerator
arXiv:2207.08006
16 July 2022
Ian En-Hsu Yen, Zhibin Xiao, Dongkuan Xu
Papers citing "S4: a High-sparsity, High-performance AI Accelerator" (4 papers shown):
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen
05 Mar 2022
I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
05 Jan 2021
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020