Depth Separation for Neural Networks
Amit Daniely · 27 February 2017 · arXiv:1702.08489

Papers citing "Depth Separation for Neural Networks"

29 papers shown

On the Depth of Monotone ReLU Neural Networks and ICNNs
Egor Bakaev, Florestan Brunck, Christoph Hertrich, Daniel Reichman, Amir Yehudayoff · 09 May 2025

The Role of Depth, Width, and Tree Size in Expressiveness of Deep Forest
Shen-Huan Lyu, Jin-Hui Wu, Qin-Cheng Zheng, Baoliu Ye · 06 Jul 2024

Spectral complexity of deep neural networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna · 15 May 2024

Minimum width for universal approximation using ReLU networks on compact domain
Namjun Kim, Chanho Min, Sejun Park · 19 Sep 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee · 11 May 2023

Deep neural network approximation of composite functions without the curse of dimensionality
Adrian Riekert · 12 Apr 2023

Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
Christian Haase, Christoph Hertrich, Georg Loho · 24 Feb 2023

Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
Jonathan W. Siegel · 02 Feb 2023

Transformers Learn Shortcuts to Automata
Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang · 19 Oct 2022

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Navid Ardeshir, Daniel J. Hsu, Clayton Sanford · 10 Jun 2022

Exponential Separations in Symmetric Neural Networks
Aaron Zweig, Joan Bruna · 02 Jun 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · 15 Feb 2022

Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators
Carles Domingo-Enrich · 27 Dec 2021

Optimization-Based Separations for Neural Networks
Itay Safran, Jason D. Lee · 04 Dec 2021

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Clayton Sanford, Vaggos Chatziafratis · 19 Oct 2021

Layer Folding: Neural Network Depth Reduction using Activation Linearization
Amir Ben Dror, Niv Zehngut, Avraham Raviv, E. Artyomov, Ran Vitek, R. Jevnisek · 17 Jun 2021

Deep ReLU Networks Preserve Expected Length
Boris Hanin, Ryan Jeong, David Rolnick · 21 Feb 2021

Depth separation beyond radial functions
Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna · 02 Feb 2021

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 31 Jan 2021

Size and Depth Separation in Approximating Benign Functions with Neural Networks
Gal Vardi, Daniel Reichman, T. Pitassi, Ohad Shamir · 30 Jan 2021

Deep Equals Shallow for ReLU Networks in Kernel Regimes
A. Bietti, Francis R. Bach · 30 Sep 2020

The Depth-to-Width Interplay in Self-Attention
Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, Amnon Shashua · 22 Jun 2020

Nonlinear Approximation and (Deep) ReLU Networks
Ingrid Daubechies, Ronald A. DeVore, S. Foucart, Boris Hanin, G. Petrova · 05 May 2019

Is Deeper Better only when Shallow is Good?
Eran Malach, Shai Shalev-Shwartz · 08 Mar 2019

A lattice-based approach to the expressivity of deep ReLU neural networks
V. Corlay, J. Boutros, P. Ciblat, L. Brunel · 28 Feb 2019

Deep Neural Networks for Estimation and Inference
M. Farrell, Tengyuan Liang, S. Misra · 26 Sep 2018

Limits on representing Boolean functions by linear combinations of simple functions: thresholds, ReLUs, and low-degree polynomials
Richard Ryan Williams · 26 Feb 2018

The power of deeper networks for expressing natural functions
David Rolnick, Max Tegmark · 16 May 2017

On the ability of neural nets to express distributions
Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora · 22 Feb 2017