ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

The Power of Depth for Feedforward Neural Networks
Ronen Eldan, Ohad Shamir
arXiv:1512.03965, 12 December 2015

Papers citing "The Power of Depth for Feedforward Neural Networks"

Showing 50 of 367 citing papers:
Better Neural Network Expressivity: Subdividing the Simplex
Egor Bakaev, Florestan Brunck, Christoph Hertrich, Jack Stade, Amir Yehudayoff (20 May 2025)

On the Depth of Monotone ReLU Neural Networks and ICNNs
Egor Bakaev, Florestan Brunck, Christoph Hertrich, Daniel Reichman, Amir Yehudayoff (09 May 2025)

Non-identifiability distinguishes Neural Networks among Parametric Models
Sourav Chatterjee, Timothy Sudijono (25 Apr 2025)

Provable wavelet-based neural approximation
Youngmi Hur, Hyojae Lim, Mikyoung Lim (23 Apr 2025)

Deep Generative Models: Complexity, Dimensionality, and Approximation
Kevin Wang, Hongqian Niu, Yixin Wang, Didong Li (01 Apr 2025)
On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth
Gennadiy Averkov, Christopher Hojny, Maximilian Merkert (10 Feb 2025)

Theoretical limitations of multi-layer Transformer
Lijie Chen, Binghui Peng, Hongxun Wu (04 Dec 2024)

Neural Networks and (Virtual) Extended Formulations
Christoph Hertrich, Georg Loho (05 Nov 2024)

Deep Recurrent Stochastic Configuration Networks for Modelling Nonlinear Dynamic Systems
Gang Dang, Dianhui Wang (28 Oct 2024)

A Theoretical Study of Neural Network Expressive Power via Manifold Topology
Jiachen Yao, Mayank Goswami, Chao Chen (21 Oct 2024)
Towards Arbitrary QUBO Optimization: Analysis of Classical and Quantum-Activated Feedforward Neural Networks
Chia-Tso Lai, Carsten Blank, Peter Schmelcher, Rick Mukherjee (16 Oct 2024)

On the Expressive Power of Tree-Structured Probabilistic Circuits
Lang Yin, Han Zhao (07 Oct 2024)

Activation function optimization method: Learnable series linear units (LSLUs)
Chuan Feng, Xi Lin, Shiping Zhu, Hongkang Shi, Maojie Tang, Hua Huang (28 Aug 2024)

The Role of Depth, Width, and Tree Size in Expressiveness of Deep Forest
Shen-Huan Lyu, Jin-Hui Wu, Qin-Cheng Zheng, Baoliu Ye (06 Jul 2024)

Analytical Solution of a Three-layer Network with a Matrix Exponential Activation Function
Kuo Gai, Shihua Zhang (02 Jul 2024)
Just How Flexible are Neural Networks in Practice?
Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson (17 Jun 2024)

Spectral complexity of deep neural networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna (15 May 2024)

Mathematics of Differential Machine Learning in Derivative Pricing and Hedging
Pedro Duarte Gomes (02 May 2024)

Multi-layer random features and the approximation power of neural networks
Rustem Takhanov (26 Apr 2024)

Opportunities and challenges in the application of large artificial intelligence models in radiology
Liangrui Pan, Zhenyu Zhao, Ying Lu, Kewei Tang, Liyong Fu, Qingchun Liang, Shaoliang Peng (24 Mar 2024)
Ultra-High-Resolution Image Synthesis with Pyramid Diffusion Model
Jiajie Yang (19 Mar 2024)

Simulating Weighted Automata over Sequences and Trees with Transformers
Michael Rizvi, M. Lizaire, Clara Lacroce, Guillaume Rabusseau (12 Mar 2024)

Hybrid Quantum-inspired Resnet and Densenet for Pattern Recognition with Completeness Analysis
Andi Chen, Hua-Lei Yin, Zeng-Bing Chen, Shengjun Wu (09 Mar 2024)

Linearly Constrained Weights: Reducing Activation Shift for Faster Training of Neural Networks
Takuro Kutsuna (08 Mar 2024)

On Minimal Depth in Neural Networks
J. L. Valerdi (23 Feb 2024)
Depth Separation in Norm-Bounded Infinite-Width Neural Networks
Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, Nathan Srebro (13 Feb 2024)

Depth Separations in Neural Networks: Separating the Dimension from the Accuracy
Itay Safran, Daniel Reichman, Paul Valiant (11 Feb 2024)

Locality Sensitive Sparse Encoding for Learning World Models Online
Zi-Yan Liu, Chao Du, Wee Sun Lee, Min-Bin Lin (23 Jan 2024)

Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement
Holger Boche, Adalbert Fono, Gitta Kutyniok (18 Jan 2024)

Interplay between depth and width for interpolation in neural ODEs
Antonio Álvarez-López, Arselane Hadj Slimane, Enrique Zuazua (18 Jan 2024)
Manipulating Feature Visualizations with Gradient Slingshots
Dilyara Bareeva, Marina M.-C. Höhne, Alexander Warnecke, Lukas Pirch, Klaus-Robert Müller, Konrad Rieck, Kirill Bykov (11 Jan 2024)

Expressivity and Approximation Properties of Deep Neural Networks with ReLU$^k$ Activation
Juncai He, Tong Mao, Jinchao Xu (27 Dec 2023)

Testing RadiX-Nets: Advances in Viable Sparse Topologies
Kevin Kwak, Zack West, Hayden Jananthan, J. Kepner (06 Nov 2023)

The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks
Xuan Qi, Yi Wei (28 Oct 2023)

The Expressive Power of Low-Rank Adaptation
Yuchen Zeng, Kangwook Lee (26 Oct 2023)

Topological Expressivity of ReLU Neural Networks
Ekin Ergen, Moritz Grillo (17 Oct 2023)

Deep Learning based Spatially Dependent Acoustical Properties Recovery
Ruixian Liu, Peter Gerstoft (17 Oct 2023)
From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport
Quentin Bouniot, I. Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski (17 Oct 2023)

Why should autoencoders work?
Matthew D. Kvalheim, E.D. Sontag (03 Oct 2023)

Selective Feature Adapter for Dense Vision Transformers
XueQing Deng, Qi Fan, Xiaojie Jin, Linjie Yang, Peng Wang (03 Oct 2023)

State-space Models with Layer-wise Nonlinearity are Universal Approximators with Exponential Decaying Memory
Shida Wang, Beichen Xue (23 Sep 2023)

Amplifying Pathological Detection in EEG Signaling Pathways through Cross-Dataset Transfer Learning
Mohammad Javad Darvishi Bayazi, M. S. Ghaemi, Timothée Lesort, Md Rifat Arefin, Jocelyn Faubert, Irina Rish (19 Sep 2023)
Minimum width for universal approximation using ReLU networks on compact domain
Namjun Kim, Chanho Min, Sejun Park (19 Sep 2023)

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis (13 Sep 2023)

How Many Neurons Does it Take to Approximate the Maximum?
Itay Safran, Daniel Reichman, Paul Valiant (18 Jul 2023)

Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
Zhengdao Chen (03 Jul 2023)

Representational Strengths and Limitations of Transformers
Clayton Sanford, Daniel J. Hsu, Matus Telgarsky (05 Jun 2023)

Network Degeneracy as an Indicator of Training Performance: Comparing Finite and Infinite Width Angle Predictions
Cameron Jakub, Mihai Nica (02 Jun 2023)
Data Topology-Dependent Upper Bounds of Neural Network Widths
Sangmin Lee, Jong Chul Ye (25 May 2023)

VanillaNet: the Power of Minimalism in Deep Learning
Hanting Chen, Yunhe Wang, Jianyuan Guo, Dacheng Tao (22 May 2023)