ResearchTrend.AI
arXiv:1703.02930
Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

8 March 2017
Peter L. Bartlett, Nick Harvey, Christopher Liaw, Abbas Mehrabian

Papers citing "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks"

50 of 109 citing papers shown:
Is Out-of-Distribution Detection Learnable?
Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu · OODD · 26 Oct 2022
The Curious Case of Benign Memorization
Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann · AAML · 25 Oct 2022
Designing Universal Causal Deep Learning Models: The Case of Infinite-Dimensional Dynamical Systems from Stochastic Analysis
Luca Galimberti, Anastasis Kratsios, Giulia Livieri · OOD · 24 Oct 2022
Why neural networks find simple solutions: the many regularizers of geometric complexity
Benoit Dherin, Michael Munn, M. Rosca, David Barrett · 27 Sep 2022
Improving Self-Supervised Learning by Characterizing Idealized Representations
Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang · SSL · 13 Sep 2022
On the generalization of learning algorithms that do not converge
N. Chandramoorthy, Andreas Loukas, Khashayar Gatmiry, Stefanie Jegelka · MLT · 16 Aug 2022
Large Language Models and the Reverse Turing Test
T. Sejnowski · ELM · 28 Jul 2022
Deep Sufficient Representation Learning via Mutual Information
Siming Zheng, Yuanyuan Lin, Jian Huang · SSL, DRL · 21 Jul 2022
Benefits of Additive Noise in Composing Classes with Bounded Capacity
A. F. Pour, H. Ashtiani · 14 Jun 2022
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres · 09 Jun 2022
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
Binghui Li, Jikai Jin, Han Zhong, J. Hopcroft, Liwei Wang · OOD · 27 May 2022
Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender · 26 May 2022
How do noise tails impact on deep ReLU networks?
Jianqing Fan, Yihong Gu, Wen-Xin Zhou · ODL · 20 Mar 2022
Simultaneous Learning of the Inputs and Parameters in Neural Collaborative Filtering
Ramin Raziperchikolaei, Young-joo Chung · 14 Mar 2022
Estimating a regression function in exponential families by model selection
Juntong Chen · 13 Mar 2022
Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi · 07 Mar 2022
Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer
Beatrice Acciaio, Anastasis Kratsios, G. Pammer · OOD · 31 Jan 2022
Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces
Hao Liu, Haizhao Yang, Minshuo Chen, T. Zhao, Wenjing Liao · 01 Jan 2022
Neural networks with linear threshold activations: structure and algorithms
Sammy Khalife, Hongyu Cheng, A. Basu · 15 Nov 2021
On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng · AAML · 11 Nov 2021
Improved Regularization and Robustness for Fine-tuning in Neural Networks
Dongyue Li, Hongyang R. Zhang · NoLa · 08 Nov 2021
Provable Lifelong Learning of Representations
Xinyuan Cao, Weiyang Liu, Santosh Vempala · CLL · 27 Oct 2021
A Deep Generative Approach to Conditional Sampling
Xingyu Zhou, Yuling Jiao, Jin Liu, Jian Huang · 19 Oct 2021
VC dimension of partially quantized neural networks in the overparametrized regime
Yutong Wang, Clayton D. Scott · 06 Oct 2021
Learning the hypotheses space from data through a U-curve algorithm
Diego Marcondes, Adilson Simonis, Junior Barrera · 08 Sep 2021
Robust Nonparametric Regression with Deep Neural Networks
Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang · OOD · 21 Jul 2021
Learning from scarce information: using synthetic data to classify Roman fine ware pottery
Santos J. Núñez Jareño, Daniël P. van Helden, Evgeny M. Mirkes, I. Tyukin, Penelope Allison · 03 Jul 2021
Deep Generative Learning via Schrödinger Bridge
Gefei Wang, Yuling Jiao, Qiang Xu, Yang Wang, Can Yang · DiffM, OT · 19 Jun 2021
What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · 12 Jun 2021
Quantifying and Improving Transferability in Domain Generalization
Guojun Zhang, Han Zhao, Yaoliang Yu, Pascal Poupart · 07 Jun 2021
Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums
Guido Montúfar, Yue Ren, Leon Zhang · 16 Apr 2021
Generalization bounds via distillation
Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang · FedML · 12 Apr 2021
Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity bounds for Neural Network Approximation Spaces
Philipp Grohs, F. Voigtlaender · 06 Apr 2021
Fast Jacobian-Vector Product for Deep Networks
Randall Balestriero, Richard Baraniuk · 01 Apr 2021
Quantitative approximation results for complex-valued neural networks
A. Caragea, D. Lee, J. Maly, G. Pfander, F. Voigtlaender · 25 Feb 2021
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar · 21 Dec 2020
Computational Separation Between Convolutional and Fully-Connected Networks
Eran Malach, Shai Shalev-Shwartz · 03 Oct 2020
The Kolmogorov-Arnold representation theorem revisited
Johannes Schmidt-Hieber · 31 Jul 2020
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong · 25 Jul 2020
Approximation in shift-invariant spaces with deep ReLU neural networks
Yunfei Yang, Zhen Li, Yang Wang · 25 May 2020
Learning the gravitational force law and other analytic functions
Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang · MLT · 15 May 2020
On Deep Instrumental Variables Estimate
Ruiqi Liu, Zuofeng Shang, Guang Cheng · 30 Apr 2020
Memory capacity of neural networks with threshold and ReLU activations
Roman Vershynin · 20 Jan 2020
Deep Gamblers: Learning to Abstain with Portfolio Theory
Liu Ziyin, Zhikang T. Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda · 29 Jun 2019
The phase diagram of approximation rates for deep neural networks
Dmitry Yarotsky, Anton Zhevnerchuk · 22 Jun 2019
Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks
Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma · MLT, AI4CE · 24 May 2019
A lattice-based approach to the expressivity of deep ReLU neural networks
V. Corlay, J. Boutros, P. Ciblat, L. Brunel · 28 Feb 2019
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · MLT · 24 Jan 2019
Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yan Xiao, Zheng Ma · 19 Jan 2019
On the potential for open-endedness in neural networks
N. Guttenberg, N. Virgo, A. Penn · 12 Dec 2018