Distributed Learning of Deep Neural Networks using Independent Subnet Training
arXiv:1910.02120 · 4 October 2019 · OOD
John Shelton Hyatt, Cameron R. Wolfe, Michael Lee, Yuxin Tang, Anastasios Kyrillidis, Christopher M. Jermaine

Papers citing "Distributed Learning of Deep Neural Networks using Independent Subnet Training" (10 of 10 shown)
  1. FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning
     Nurbek Tastan, Samuel Horváth, Martin Takáč, Karthik Nandakumar (FedML) · 03 Oct 2024
  2. A Survey of Distributed Learning in Cloud, Mobile, and Edge Settings
     Madison Threadgill, A. Gerstlauer · 23 May 2024
  3. SPIRT: A Fault-Tolerant and Reliable Peer-to-Peer Serverless ML Training Architecture
     Amine Barrak, Mayssa Jaziri, Ranim Trabelsi, Fehmi Jaafar, Fábio Petrillo · 25 Sep 2023
  4. Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
     Chen Dun, Mirian Hipolito Garcia, C. Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis · 28 Oct 2022
  5. RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations
     Zirui Liu, Sheng-Wei Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu · 19 Oct 2022
  6. On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
     Fangshuo Liao, Anastasios Kyrillidis · 05 Dec 2021
  7. ImageNet-21K Pretraining for the Masses
     T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor (SSeg, VLM, CLIP) · 22 Apr 2021
  8. The Future of Digital Health with Federated Learning
     Nicola Rieke, Jonny Hancox, Wenqi Li, Fausto Milletari, H. Roth, ..., Ronald M. Summers, Andrew Trask, Daguang Xu, Maximilian Baust, M. Jorge Cardoso (OOD) · 18 Mar 2020
  9. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
     N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (ODL) · 15 Sep 2016
  10. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
      Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016