Deep ReLU Networks Preserve Expected Length
Boris Hanin, Ryan Jeong, David Rolnick
arXiv:2102.10492, 21 February 2021
Papers citing "Deep ReLU Networks Preserve Expected Length" (12 of 12 papers shown):
On Space Folds of ReLU Neural Networks [MLT]
Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, Bernhard A. Moser
17 Feb 2025

SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
Max Torop, A. Masoomi, Davin Hill, Kivanc Kose, Stratis Ioannidis, Jennifer Dy
01 Nov 2023

Expected Gradients of Maxout Networks and Consequences to Parameter Initialization [ODL]
Hanna Tseran, Guido Montúfar
17 Jan 2023

Maximal Initial Learning Rates in Deep ReLU Networks
Gaurav M. Iyer, Boris Hanin, David Rolnick
14 Dec 2022

Curved Representation Space of Vision Transformers [ViT]
Juyeop Kim, Junha Park, Songkuk Kim, Jongseok Lee
11 Oct 2022

On Scrambling Phenomena for Randomly Initialized Recurrent Networks
Vaggos Chatziafratis, Ioannis Panageas, Clayton Sanford, S. Stavroulakis
11 Oct 2022

On the Number of Regions of Piecewise Linear Neural Networks
Alexis Goujon, Arian Etemadi, M. Unser
17 Jun 2022

Lower and Upper Bounds for Numbers of Linear Regions of Graph Convolutional Networks [GNN]
Hao Chen, Yu Wang, Huan Xiong
01 Jun 2022

Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis
Wuyang Chen, Wei Huang, Xinyu Gong, Boris Hanin, Zhangyang Wang
11 May 2022

Gradient representations in ReLU networks as similarity functions [FAtt]
Dániel Rácz, Bálint Daróczy
26 Oct 2021

On the Expected Complexity of Maxout Networks
Hanna Tseran, Guido Montúfar
01 Jul 2021

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016