ResIST: Layer-Wise Decomposition of ResNets for Distributed Training
arXiv:2107.00961 · 2 July 2021
Chen Dun, Cameron R. Wolfe, C. Jermaine, Anastasios Kyrillidis
Papers citing "ResIST: Layer-Wise Decomposition of ResNets for Distributed Training"
15 / 15 papers shown
Leveraging Randomness in Model and Data Partitioning for Privacy Amplification
Andy Dong, Wei-Ning Chen, Ayfer Özgür · FedML · 04 Mar 2025

Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training
Sunwoo Lee, Tuo Zhang, Saurav Prakash, Yue Niu, Salman Avestimehr · FedML · 21 Jun 2024

WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average
Louis Fournier, Adel Nabli, Masih Aminbeidokhti, M. Pedersoli, Eugene Belilovsky, Edouard Oyallon · MoMe, FedML · 27 May 2024

FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity
Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu · 15 Apr 2024

Efficient Stagewise Pretraining via Progressive Subnetworks
Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu, Sobhan Miryoosefi, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar · 08 Feb 2024

Towards Hyperparameter-Agnostic DNN Training via Dynamical System Insights
Carmel Fiscko, Aayushya Agarwal, Yihan Ruan, S. Kar, L. Pileggi, Bruno Sinopoli · 21 Oct 2023

Module-wise Training of Neural Networks via the Minimizing Movement Scheme
Skander Karkar, Bhaskar Sen, Emmanuel de Bezenac, Patrick Gallinari · 29 Sep 2023

Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat
Erdong Hu, Yu-Shuen Tang, Anastasios Kyrillidis, C. Jermaine · FedML · 06 Sep 2023

Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin, Peter Richtárik · AI4CE · 28 Jun 2023

Understanding Progressive Training Through the Framework of Randomized Coordinate Descent
Rafal Szlendak, Elnur Gasanov, Peter Richtárik · 06 Jun 2023

On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao · VLM · 07 Apr 2023

Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
Chen Dun, Mirian Hipolito Garcia, C. Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis · 28 Oct 2022

Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari · 03 Oct 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis · 05 Dec 2021

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He · 16 Nov 2016