Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
arXiv:1912.04378 (9 December 2019)
Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Tianlin Li
Papers citing "Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem" (7 of 7 papers shown):
1. The Disharmony between BN and ReLU Causes Gradient Explosion, but is Offset by the Correlation between Activations. Inyoung Paik, Jaesik Choi. 23 Apr 2023.
2. Transformer Vs. MLP-Mixer: Exponential Expressive Gap For NLP Problems. D. Navon, A. Bronstein. 17 Aug 2022.
3. Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem. Clayton Sanford, Vaggos Chatziafratis. 19 Oct 2021.
4. Depth separation beyond radial functions. Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna. 02 Feb 2021.
5. The Connection Between Approximation, Depth Separation and Learnability in Neural Networks. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir. 31 Jan 2021.
6. On the Number of Linear Functions Composing Deep Neural Network: Towards a Refined Definition of Neural Networks Complexity. Yuuki Takai, Akiyoshi Sannai, Matthieu Cordonnier. 23 Oct 2020.
7. Benefits of depth in neural networks. Matus Telgarsky. 14 Feb 2016.