arXiv:2006.00294
Statistical Guarantees for Regularized Neural Networks
Mahsa Taheri, Fang Xie, Johannes Lederer
30 May 2020
Papers citing "Statistical Guarantees for Regularized Neural Networks" (26 papers):
1. Neural Drift Estimation for Ergodic Diffusions: Non-parametric Analysis and Numerical Exploration. Simone Di Gregorio, Francesco Iafrate. 30 May 2025.
2. Regularization can make diffusion models more efficient. Mahsa Taheri, Johannes Lederer. 13 Feb 2025.
3. Adversarially robust generalization theory via Jacobian regularization for deep neural networks. Dongya Wu, Xin Li. [AAML] 17 Dec 2024.
4. Fast Training of Sinusoidal Neural Fields via Scaling Initialization. Taesun Yeom, Sangyoon Lee, Jaeho Lee. 07 Oct 2024.
5. On the estimation rate of Bayesian PINN for inverse problems. Yi Sun, Debarghya Mukherjee, Yves Atchadé. [PINN] 21 Jun 2024.
6. How many samples are needed to train a deep neural network? Pegah Golestaneh, Mahsa Taheri, Johannes Lederer. 26 May 2024.
7. Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective. Yue Xing, Xiaofeng Lin, Qifan Song, Yi Tian Xu, Belinda Zeng, Guang Cheng. [SSL] 26 Jan 2024.
8. Statistical learning by sparse deep neural networks. Felix Abramovich. [BDL] 15 Nov 2023.
9. A statistical perspective on algorithm unrolling models for inverse problems. Yves Atchadé, Xinru Liu, Qiuyun Zhu. 10 Nov 2023.
10. Adversarial Training with Generated Data in High-Dimensional Regression: An Asymptotic Study. Yue Xing. 21 Jun 2023.
11. Distribution Estimation of Contaminated Data via DNN-based MoM-GANs. Fang Xie, Lihu Xu, Qiuran Yao, Huiming Zhang. 28 Dec 2022.
12. Statistical guarantees for sparse deep learning. Johannes Lederer. 11 Dec 2022.
13. Nonparametric regression with modified ReLU networks. A. Beknazaryan, Hailin Sang. 17 Jul 2022.
14. Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks. Mahsa Taheri, Fang Xie, Johannes Lederer. 09 May 2022.
15. A PAC-Bayes oracle inequality for sparse neural networks. Maximilian F. Steffen, Mathias Trabs. [UQCV] 26 Apr 2022.
16. Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive? Kaiqi Zhang, Yu Wang. 20 Apr 2022.
17. Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption. Lihu Xu, Fang Yao, Qiuran Yao, Huiming Zhang. 10 Jan 2022.
18. Regularization and Reparameterization Avoid Vanishing Gradients in Sigmoid-Type Networks. Leni Ven, Johannes Lederer. [ODL] 04 Jun 2021.
19. Neural networks with superexpressive activations and integer weights. A. Beknazaryan. 20 May 2021.
20. Analytic function approximation by path norm regularized deep networks. A. Beknazaryan. 05 Apr 2021.
21. Function approximation by deep neural networks with parameters $\{0, \pm \frac{1}{2}, \pm 1, 2\}$. A. Beknazaryan. 15 Mar 2021.
22. Activation Functions in Artificial Neural Networks: A Systematic Overview. Johannes Lederer. [FAtt, AI4CE] 25 Jan 2021.
23. Optimization Landscapes of Wide Deep Neural Networks Are Benign. Johannes Lederer. 02 Oct 2020.
24. Risk Bounds for Robust Deep Learning. Johannes Lederer. [OOD] 14 Sep 2020.
25. HALO: Learning to Prune Neural Networks with Shrinkage. Skyler Seto, M. Wells, Wenyu Zhang. 24 Aug 2020.
26. Layer Sparsity in Neural Networks. Mohamed Hebiri, Johannes Lederer. 28 Jun 2020.