Complexity of Linear Regions in Deep Networks
Boris Hanin, David Rolnick
arXiv:1901.09021, 25 January 2019
Papers citing "Complexity of Linear Regions in Deep Networks" (50 of 131 shown)
Data Augmentation Techniques to Reverse-Engineer Neural Network Weights from Input-Output Queries
Alexander Beiser, Flavio Martinelli, W. Gerstner, Johanni Brea (25 Nov 2025)

TetraSDF: Precise Mesh Extraction with Multi-resolution Tetrahedral Grid
Seonghun Oh, Youngjung Uh, Jin-Hwa Kim (20 Nov 2025)

Topological Signatures of ReLU Neural Network Activation Patterns
Vicente Bosca, Tatum Rask, Sunia Tanweer, Andrew R. Tawfeek, Branden Stone (14 Oct 2025)

Designing ReLU Generative Networks to Enumerate Trees with a Given Tree Edit Distance
Mamoona Ghafoor, Tatsuya Akutsu (12 Oct 2025)

Interlaced dynamic XCT reconstruction with spatio-temporal implicit neural representations
Mathias Boulanger, Ericmoore Jossou (09 Oct 2025)

GLAI: GreenLightningAI for Accelerated Training through Knowledge Decoupling
Jose I. Mestre, Alberto Fernández-Hernández, Cristian Pérez-Corral, Manuel F. Dolz, Jose Duato, Enrique S. Quintana-Ortí (01 Oct 2025)

Machine learning approach to single-shot multiparameter estimation for the non-linear Schrödinger equation
Louis Rossignol, Tangui Aladjidi, Myrann Baker-Rasooli, Quentin Glorieux (23 Sep 2025)

Discrete Functional Geometry of ReLU Networks via ReLU Transition Graphs
Sahil Rajesh Dhayalkar (03 Sep 2025)

Fidelity Isn't Accuracy: When Linearly Decodable Functions Fail to Match the Ground Truth
Jackson Eshbaugh (13 Jun 2025)
Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time
Duc Anh Nguyen, Ernesto Araya, Adalbert Fono, Gitta Kutyniok (23 May 2025)

Critical Points of Random Neural Networks
Simmaco Di Lillo (22 May 2025)

Fractal and Regular Geometry of Deep Neural Networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna (08 Apr 2025)

Beyond the Next Token: Towards Prompt-Robust Zero-Shot Classification via Efficient Multi-Token Prediction (NAACL, 2025)
Junlang Qian, Zixiao Zhu, Hanzhang Zhou, Zijian Feng, Zepeng Zhai, K. Mao (04 Apr 2025)

ReLU Networks as Random Functions: Their Distribution in Probability Space
Shreyas Chaudhari, J. M. F. Moura (28 Mar 2025)

On Space Folds of ReLU Neural Networks
Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, Bernhard A. Moser (17 Feb 2025)

Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient (NeurIPS, 2024)
Vu C. Dinh, L. Ho, Cuong V. Nguyen (29 Oct 2024)

Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning (NeurIPS, 2024)
D. Kunin, Allan Raventós, Clémentine Dominé, Feng Chen, David Klindt, Andrew M. Saxe, Surya Ganguli (10 Jun 2024)
Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally
Manon Verbockhaven, Sylvain Chevallier, Guillaume Charpiat (30 May 2024)

Spectral Truncation Kernels: Noncommutativity in $C^*$-algebraic Kernel Machines
Yuka Hashimoto, Ayoub Hafid, Masahiro Ikeda, Hachem Kadri (28 May 2024)

Spectral complexity of deep neural networks (SIAM Journal on Mathematics of Data Science, 2024)
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna (15 May 2024)

Graph is all you need? Lightweight data-agnostic neural architecture search without training
Zhenhan Huang, Tejaswini Pedapati, Pin-Yu Chen, Chunheng Jiang, Jianxi Gao (02 May 2024)

Computing conservative probabilities of rare events with surrogates
Nicolas Bousquet (26 Mar 2024)

Analyzing Generalization in Policy Networks: A Case Study with the Double-Integrator System (AAAI, 2023)
Ruining Zhang, H. Han, Maolong Lv, Qisong Yang, Jian Cheng (16 Dec 2023)

The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks
Xuan Qi, Yi Wei (28 Oct 2023)

Quantitative CLTs in Deep Neural Networks (Probability Theory and Related Fields, 2023)
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati (12 Jul 2023)

Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities (IEEE TPAMI, 2023)
Guihong Li, Duc-Tuong Hoang, Kartikeya Bhardwaj, Ming Lin, Zinan Lin, R. Marculescu (05 Jul 2023)
Why do CNNs excel at feature extraction? A mathematical explanation
V. Nandakumar, Arush Tagade, Tongliang Liu (03 Jul 2023)

Neural Polytopes
Koji Hashimoto, T. Naito, Hisashi Naito (03 Jul 2023)

Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision (ICML, 2023)
Arturs Berzins (12 Jun 2023)

Deep ReLU Networks Have Surprisingly Simple Polytopes
Fenglei Fan, Wei Huang, Xiang-yu Zhong, Lecheng Ruan, T. Zeng, Huan Xiong, Haiwei Yang (16 May 2023)

SkelEx and BoundEx: Natural Visualization of ReLU Neural Networks
Pawel Pukowski, Haiping Lu (09 May 2023)

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay (29 Apr 2023)

The Power of Typed Affine Decision Structures: A Case Study (STTT, 2023)
Gerrit Nolte, Maximilian Schlüter, Alnis Murtovi, Bernhard Steffen (28 Apr 2023)

Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation
Itai Shapira (06 Mar 2023)

SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries (CVPR, 2023)
Ahmed Imtiaz Humayun, Randall Balestriero, Guha Balakrishnan, Richard Baraniuk (24 Feb 2023)

On the Lipschitz Constant of Deep Networks and Double Descent (BMVC, 2023)
Matteo Gamba, Hossein Azizpour, Mårten Björkman (28 Jan 2023)
How does training shape the Riemannian geometry of neural network representations?
Jacob A. Zavatone-Veth, Sheng Yang, Julian Rubinfien, Cengiz Pehlevan (26 Jan 2023)

Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations (STTT, 2023)
Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen (19 Jan 2023)

Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions (CPAIOR, 2023)
Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra (19 Jan 2023)

Expected Gradients of Maxout Networks and Consequences to Parameter Initialization (ICML, 2023)
Hanna Tseran, Guido Montúfar (17 Jan 2023)

Understanding the Spectral Bias of Coordinate Based MLPs Via Training Dynamics
J. Lazzari, Xiuwen Liu (14 Jan 2023)

Effects of Data Geometry in Early Deep Learning (NeurIPS, 2022)
Saket Tiwari, George Konidaris (29 Dec 2022)

Maximal Initial Learning Rates in Deep ReLU Networks (ICML, 2022)
Gaurav M. Iyer, Boris Hanin, David Rolnick (14 Dec 2022)

Interpreting Neural Networks through the Polytope Lens
Sid Black, Lee D. Sharkey, Léo Grinsztajn, Eric Winsor, Daniel A. Braun, ..., Kip Parker, Carlos Ramón Guevara, Beren Millidge, Gabriel Alfour, Connor Leahy (22 Nov 2022)
Scalar Invariant Networks with Zero Bias
Chuqin Geng, Xiaojie Xu, Haolin Ye, X. Si (15 Nov 2022)

Towards Reliable Neural Specifications (ICML, 2022)
Chuqin Geng, Nham Le, Xiaojie Xu, Zhaoyue Wang, A. Gurfinkel, X. Si (28 Oct 2022)

Non-Linear Coordination Graphs (NeurIPS, 2022)
Yipeng Kang, Tonghan Wang, Xiao-Ren Wu, Qianlan Yang, Chongjie Zhang (26 Oct 2022)

Understanding the Evolution of Linear Regions in Deep Reinforcement Learning (NeurIPS, 2022)
S. Cohan, N. Kim, David Rolnick, M. van de Panne (24 Oct 2022)

Deep Model Reassembly (NeurIPS, 2022)
Xingyi Yang, Zhou Daquan, Songhua Liu, Jingwen Ye, Xinchao Wang (24 Oct 2022)

When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work (NeurIPS, 2022)
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Tian Ding, Jianfeng Yao (21 Oct 2022)