ResIST: Layer-Wise Decomposition of ResNets for Distributed Training
arXiv:2107.00961 · v2 (latest)
2 July 2021
Chen Dun, Cameron R. Wolfe, C. Jermaine, Anastasios Kyrillidis
ArXiv (abs) · PDF · HTML

Papers citing "ResIST: Layer-Wise Decomposition of ResNets for Distributed Training"

16 of 16 papers shown

TwIST: Rigging the Lottery in Transformers with Independent Subnetwork Training
Michael Menezes, Barbara Su, Xinze Feng, Yehya Farhat, Hamza Shili, Anastasios Kyrillidis
06 Nov 2025

Leveraging Randomness in Model and Data Partitioning for Privacy Amplification
Andy Dong, Wei-Ning Chen, Ayfer Özgür
FedML
04 Mar 2025

Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training
Sunwoo Lee, Tuo Zhang, Saurav Prakash, Yue Niu, Salman Avestimehr
FedML
21 Jun 2024

WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average
Louis Fournier, Adel Nabli, Masih Aminbeidokhti, M. Pedersoli, Eugene Belilovsky, Edouard Oyallon
MoMe · FedML
27 May 2024

FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity
Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu
15 Apr 2024

Efficient Stagewise Pretraining via Progressive Subnetworks
Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu, Sobhan Miryoosefi, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar
08 Feb 2024

Towards Hyperparameter-Agnostic DNN Training via Dynamical System Insights
Carmel Fiscko, Aayushya Agarwal, Yihan Ruan, S. Kar, L. Pileggi, Bruno Sinopoli
21 Oct 2023

Module-wise Training of Neural Networks via the Minimizing Movement Scheme
Neural Information Processing Systems (NeurIPS), 2023
Skander Karkar, Bhaskar Sen, Emmanuel de Bézenac, Patrick Gallinari
29 Sep 2023

Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat
IEEE International Conference on Computer Vision (ICCV), 2023
Erdong Hu, Yu-Shuen Tang, Anastasios Kyrillidis, C. Jermaine
FedML
06 Sep 2023

Towards a Better Theoretical Understanding of Independent Subnetwork Training
International Conference on Machine Learning (ICML), 2023
Egor Shulgin, Peter Richtárik
AI4CE
28 Jun 2023

Understanding Progressive Training Through the Framework of Randomized Coordinate Descent
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Rafal Szlendak, Elnur Gasanov, Peter Richtárik
06 Jun 2023

On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao
VLM
07 Apr 2023

Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Chen Dun, Mirian Hipolito Garcia, C. Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis
28 Oct 2022

Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari
03 Oct 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

YOLO9000: Better, Faster, Stronger
Computer Vision and Pattern Recognition (CVPR), 2016
Joseph Redmon, Ali Farhadi
VLM · ObjD
25 Dec 2016