arXiv:1901.09021
Complexity of Linear Regions in Deep Networks
25 January 2019
Boris Hanin
David Rolnick
Papers citing "Complexity of Linear Regions in Deep Networks"
50 of 132 papers shown
Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions
Neural Information Processing Systems (NeurIPS), 2022
Kuan-Lin Chen
H. Garudadri
Bhaskar D. Rao
328
25
0
13 Oct 2022
Curved Representation Space of Vision Transformers
AAAI Conference on Artificial Intelligence (AAAI), 2022
Juyeop Kim
Junha Park
Songkuk Kim
Jongseok Lee
ViT
281
9
0
11 Oct 2022
On Scrambling Phenomena for Randomly Initialized Recurrent Networks
Neural Information Processing Systems (NeurIPS), 2022
Vaggos Chatziafratis
Ioannis Panageas
Clayton Sanford
S. Stavroulakis
189
2
0
11 Oct 2022
Magnitude and Angle Dynamics in Training Single ReLU Neurons
Neural Networks (NN), 2022
Sangmin Lee
Byeongsu Sim
Jong Chul Ye
MLT
351
6
0
27 Sep 2022
Algorithmic Determination of the Combinatorial Structure of the Linear Regions of ReLU Neural Networks
SIAM Journal on Applied Algebra and Geometry (JSAAG), 2022
Marissa Masden
138
17
0
15 Jul 2022
Piecewise Linear Neural Networks and Deep Learning
Nature Reviews Methods Primers (NRMP), 2022
Qinghua Tao
Li Li
Xiaolin Huang
Xiangming Xi
Shuning Wang
Johan A. K. Suykens
146
38
0
18 Jun 2022
On the Number of Regions of Piecewise Linear Neural Networks
Journal of Computational and Applied Mathematics (JCAM), 2022
Alexis Goujon
Arian Etemadi
M. Unser
278
17
0
17 Jun 2022
Lower and Upper Bounds for Numbers of Linear Regions of Graph Convolutional Networks
Neural Networks (NN), 2022
Hao Chen
Yu Wang
Huan Xiong
GNN
211
6
0
01 Jun 2022
Training Fully Connected Neural Networks is ∃ℝ-Complete
Neural Information Processing Systems (NeurIPS), 2022
Daniel Bertschinger
Christoph Hertrich
Paul Jungeblut
Tillmann Miltzow
Simon Weber
OffRL
330
35
0
04 Apr 2022
Origins of Low-dimensional Adversarial Perturbations
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Elvis Dohmatob
Chuan Guo
Morgane Goibert
AAML
199
4
0
25 Mar 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
International Conference on Machine Learning (ICML), 2022
Xin Yu
Thiago Serra
Srikumar Ramalingam
Shandian Zhe
385
58
0
09 Mar 2022
Are All Linear Regions Created Equal?
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Matteo Gamba
Adrian Chmielewski-Anders
Josephine Sullivan
Hossein Azizpour
Mårten Björkman
MLT
263
17
0
23 Feb 2022
Support Vectors and Gradient Dynamics of Single-Neuron ReLU Networks
Sangmin Lee
Byeongsu Sim
Jong Chul Ye
MLT
187
0
0
11 Feb 2022
Neural Architecture Search for Spiking Neural Networks
European Conference on Computer Vision (ECCV), 2022
Youngeun Kim
Yuhang Li
Hyoungseob Park
Yeshwanth Venkatesha
Priyadarshini Panda
287
108
0
23 Jan 2022
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
Shayan Aziznejad
Joaquim Campos
M. Unser
248
12
0
12 Dec 2021
Unsupervised Representation Learning via Neural Activation Coding
Yookoon Park
Sangho Lee
Gunhee Kim
David M. Blei
SSL
193
8
0
07 Dec 2021
MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection
International Conference on Machine Learning (ICML), 2021
Zhenhong Sun
Ming Lin
Xiuyu Sun
Zhiyu Tan
Hao Li
Rong Jin
340
38
0
26 Nov 2021
Labeled sample compression schemes for complexes of oriented matroids
V. Chepoi
K. Knauer
Manon Philibert
MQ
212
8
0
28 Oct 2021
Gradient representations in ReLU networks as similarity functions
Dániel Rácz
Balint Daroczy
FAtt
127
1
0
26 Oct 2021
When are Deep Networks really better than Decision Forests at small sample sizes, and how?
Haoyin Xu
K. A. Kinfu
Will LeVine
Sambit Panda
Jayanta Dey
...
M. Kusmanov
F. Engert
Christopher M. White
Joshua T. Vogelstein
Carey E. Priebe
272
27
0
31 Aug 2021
Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Wuyang Chen
Xinyu Gong
Junru Wu
Yunchao Wei
Humphrey Shi
Zhicheng Yan
Yi Yang
Zinan Lin
221
19
0
26 Aug 2021
Random Neural Networks in the Infinite Width Limit as Gaussian Processes
Boris Hanin
BDL
222
56
0
04 Jul 2021
On the Expected Complexity of Maxout Networks
Neural Information Processing Systems (NeurIPS), 2021
Hanna Tseran
Guido Montúfar
245
14
0
01 Jul 2021
Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
Michael Penwarden
Shandian Zhe
A. Narayan
Robert M. Kirby
194
56
0
25 Jun 2021
Reachability Analysis of Convolutional Neural Networks
Xiaodong Yang
Tomoya Yamaguchi
Hoang-Dung Tran
Bardh Hoxha
Taylor T. Johnson
Danil Prokhorov
FAtt
135
6
0
22 Jun 2021
What training reveals about neural network complexity
Neural Information Processing Systems (NeurIPS), 2021
Andreas Loukas
Marinos Poiitis
Stefanie Jegelka
229
12
0
08 Jun 2021
Proxy-Normalizing Activations to Match Batch Normalization while Removing Batch Dependence
Neural Information Processing Systems (NeurIPS), 2021
A. Labatie
Dominic Masters
Zach Eaton-Rosen
Carlo Luschi
365
21
0
07 Jun 2021
DISCO Verification: Division of Input Space into COnvex polytopes for neural network verification
Julien Girard-Satabin
Aymeric Varasse
Marc Schoenauer
Guillaume Charpiat
Zakaria Chihani
118
1
0
17 May 2021
Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums
SIAM Journal on Applied Algebra and Geometry (SIAM J. Appl. Algebra Geom.), 2021
Guido Montúfar
Yue Ren
Leon Zhang
224
46
0
16 Apr 2021
Provable Repair of Deep Neural Networks
ACM-SIGPLAN Symposium on Programming Language Design and Implementation (PLDI), 2021
Matthew Sotoudeh
Aditya V. Thakur
AAML
280
79
0
09 Apr 2021
Fast Jacobian-Vector Product for Deep Networks
Randall Balestriero
Richard Baraniuk
156
7
0
01 Apr 2021
Using activation histograms to bound the number of affine regions in ReLU feed-forward neural networks
Peter Hinz
155
7
0
31 Mar 2021
SMILE: Self-Distilled MIxup for Efficient Transfer LEarning
Xingjian Li
Haoyi Xiong
Chengzhong Xu
Dejing Dou
108
6
0
25 Mar 2021
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
International Conference on Learning Representations (ICLR), 2021
Wuyang Chen
Xinyu Gong
Zinan Lin
OOD
478
272
0
23 Feb 2021
Deep ReLU Networks Preserve Expected Length
International Conference on Learning Representations (ICLR), 2021
Boris Hanin
Ryan Jeong
David Rolnick
147
15
0
21 Feb 2021
Scaling Up Exact Neural Network Compression by ReLU Stability
Neural Information Processing Systems (NeurIPS), 2021
Thiago Serra
Xin Yu
Abhinav Kumar
Srikumar Ramalingam
307
27
0
15 Feb 2021
ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation
Conference on Integer Programming and Combinatorial Optimization (IPCO), 2021
Christoph Hertrich
Leon Sering
310
13
0
12 Feb 2021
CKConv: Continuous Kernel Convolution For Sequential Data
International Conference on Learning Representations (ICLR), 2021
David W. Romero
Anna Kuzina
Erik J. Bekkers
Jakub M. Tomczak
Mark Hoogendoorn
341
140
0
04 Feb 2021
Depth separation beyond radial functions
Journal of Machine Learning Research (JMLR), 2021
Luca Venturi
Samy Jelassi
Tristan Ozuch
Joan Bruna
243
16
0
02 Feb 2021
Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition
IEEE International Conference on Computer Vision (ICCV), 2021
Ming Lin
Pichao Wang
Zhenhong Sun
Hesen Chen
Xiuyu Sun
Qi Qian
Hao Li
Rong Jin
244
146
0
01 Feb 2021
A simple geometric proof for the benefit of depth in ReLU networks
Asaf Amrami
Yoav Goldberg
210
1
0
18 Jan 2021
Neural networks behave as hash encoders: An empirical study
Fengxiang He
Shiye Lei
Jianmin Ji
Dacheng Tao
165
3
0
14 Jan 2021
A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy
Yi Tian Xu
Y. Aflalo
Lihi Zelnik-Manor
Rong Jin
229
3
0
12 Jan 2021
SyReNN: A Tool for Analyzing Deep Neural Networks
International Journal on Software Tools for Technology Transfer (STTT), 2021
Matthew Sotoudeh
Aditya V. Thakur
AAML
GNN
137
17
0
09 Jan 2021
A General Computational Framework to Measure the Expressiveness of Complex Networks Using a Tighter Upper Bound of Linear Regions
Yutong Xie
Gaoxiang Chen
Shijie Zhao
129
3
0
08 Dec 2020
Understanding Interpretability by generalized distillation in Supervised Classification
Adit Agarwal
Dr. K.K. Shukla
Arjan Kuijper
Anirban Mukhopadhyay
FaML
FAtt
140
0
0
05 Dec 2020
Locally Linear Attributes of ReLU Neural Networks
Benjamin Sattelberg
R. Cavalieri
Michael Kirby
C. Peterson
Ross Beveridge
FAtt
164
14
0
30 Nov 2020
Dissipative Deep Neural Dynamical Systems
Ján Drgoňa
Soumya Vasisht
Aaron Tuor
D. Vrabie
316
11
0
26 Nov 2020
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis
USENIX Security Symposium (USENIX Security), 2020
Xudong Pan
Mi Zhang
Yifan Yan
Jiaming Zhu
Zhemin Yang
AAML
195
24
0
26 Oct 2020
On the Number of Linear Functions Composing Deep Neural Network: Towards a Refined Definition of Neural Networks Complexity
Yuuki Takai
Akiyoshi Sannai
Matthieu Cordonnier
282
4
0
23 Oct 2020