Mathematical Models of Overparameterized Neural Networks
arXiv:2012.13982 · 27 December 2020
Cong Fang, Hanze Dong, Tong Zhang

Papers citing "Mathematical Models of Overparameterized Neural Networks"

Showing 9 of 9 citing papers
Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity
Shiyun Xu, Zhiqi Bu, Pratik Chaudhari, Ian J. Barnett
25 Feb 2022

Edge Artificial Intelligence for 6G: Vision, Enabling Technologies, and Applications
Khaled B. Letaief, Yuanming Shi, Jianmin Lu, Jianhua Lu
24 Nov 2021

Local Augmentation for Graph Neural Networks
Songtao Liu, Rex Ying, Hanze Dong, Lanqing Li, Tingyang Xu, Yu Rong, P. Zhao, Junzhou Huang, Dinghao Wu
08 Sep 2021

Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Cong Fang, Hangfeng He, Qi Long, Weijie J. Su
29 Jan 2021 · FAtt

Minimum Excess Risk in Bayesian Learning
Aolin Xu, Maxim Raginsky
29 Dec 2020

Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate
Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang
08 Mar 2020

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · ODL

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
06 Jun 2015 · UQCV, BDL