ResearchTrend.AI
On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons

5 December 2021
Fangshuo Liao
Anastasios Kyrillidis

Papers citing "On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons"

13 papers shown
Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?
Maxime Méloux, Silviu Maniu, François Portet, Maxime Peyrard
28 Feb 2025

FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity
Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu
15 Apr 2024

Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat
Erdong Hu, Yu-Shuen Tang, Anastasios Kyrillidis, C. Jermaine
Topics: FedML
06 Sep 2023

Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin, Peter Richtárik
Topics: AI4CE
28 Jun 2023

MIRACLE: Multi-task Learning based Interpretable Regulation of Autoimmune Diseases through Common Latent Epigenetics
Pengcheng Xu, Jinpu Cai, Yulin Gao, Ziqi Rong
Topics: AI4CE
24 Jun 2023

Xtreme Margin: A Tunable Loss Function for Binary Classification Problems
Rayan Wali
Topics: MQ
31 Oct 2022

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis
28 Oct 2022

Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
Chen Dun, Mirian Hipolito Garcia, C. Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis
28 Oct 2022

On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks
Hongru Yang, Zhangyang Wang
Topics: MLT
27 Mar 2022

Masked Training of Neural Networks with Partial Gradients
Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich
16 Jun 2021

GIST: Distributed Training for Large-Scale Graph Convolutional Networks
Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis
Topics: BDL, GNN, LRM
20 Feb 2021

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
Topics: UQCV, BDL
06 Jun 2015