Pufferfish: Communication-efficient Models At No Extra Cost (arXiv:2103.03936)
Hongyi Wang, Saurabh Agarwal, Dimitris Papailiopoulos (5 March 2021)
Papers citing "Pufferfish: Communication-efficient Models At No Extra Cost" (18 papers shown)
Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee (11 Oct 2024)
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu (12 Mar 2024)
LLM360: Towards Fully Transparent Open-Source LLMs
Zhengzhong Liu, Aurick Qiao, W. Neiswanger, Hongyi Wang, Bowen Tan, ..., Zhiting Hu, Mark Schulze, Preslav Nakov, Timothy Baldwin, Eric P. Xing (11 Dec 2023)
Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang (28 Aug 2023)
Robust low-rank training via approximate orthonormal constraints
Dayana Savostianova, Emanuele Zangrando, Gianluca Ceruti, Francesco Tudisco (02 Jun 2023)
GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
S. Tyagi, Martin Swany (20 May 2023)
Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, H. Kim, Youngsok Kim, Jinho Lee (24 Jan 2023)
Deep Learning Meets Sparse Regularization: A Signal Processing Perspective
Rahul Parhi, Robert D. Nowak (23 Jan 2023)
Does compressing activations help model parallel training?
S. Bian, Dacheng Li, Hongyi Wang, Eric P. Xing, Shivaram Venkataraman (06 Jan 2023)
On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis (05 Dec 2021)
Efficient and Private Federated Learning with Partially Trainable Networks
Hakim Sidahmed, Zheng Xu, Ankush Garg, Yuan Cao, Mingqing Chen (06 Oct 2021)
A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu (14 Jul 2021)
What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory
Rahul Parhi, Robert D. Nowak (07 May 2021)
On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos (28 Feb 2021)
Fixing the train-test resolution discrepancy: FixEfficientNet
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou (18 Mar 2020)
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam (17 Apr 2017)
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen (10 Feb 2017)
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)