Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
arXiv:2203.02770 · 5 March 2022
Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen
Papers citing "Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance" (8 of 8 papers shown)

Title | Authors | Tags | Citations | Date
Sparse-to-Sparse Training of Diffusion Models | Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva | DiffM | 0 | 30 Apr 2025
Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning | Zhongzhi Yu, Yang Zhang, Kaizhi Qian, Y. Fu, Yingyan Lin | - | 13 | 23 Jun 2023
Balanced Training for Sparse GANs | Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun | - | 9 | 28 Feb 2023
DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training | Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, Dacheng Tao | FedML | 110 | 01 Jun 2022
Carbon Emissions and Large Neural Network Training | David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean | AI4CE | 642 | 21 Apr 2021
Truly Sparse Neural Networks at Scale | Selima Curci, D. Mocanu, Mykola Pechenizkiy | - | 19 | 02 Feb 2021
Distilling portable Generative Adversarial Networks for Image Translation | Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu | - | 83 | 07 Mar 2020
A Style-Based Generator Architecture for Generative Adversarial Networks | Tero Karras, S. Laine, Timo Aila | - | 10,320 | 12 Dec 2018