ResearchTrend.AI
Deep Networks and the Multiple Manifold Problem

International Conference on Learning Representations (ICLR), 2021
25 August 2020
Sam Buchanan
D. Gilboa
John N. Wright

Papers citing "Deep Networks and the Multiple Manifold Problem"

33 papers shown

Quantum feature-map learning with reduced resource overhead
Jonas Jäger, Philipp Elsässer, Elham Torabian
03 Oct 2025

Geometry of Neural Reinforcement Learning in Continuous State and Action Spaces
International Conference on Learning Representations (ICLR), 2025
Saket Tiwari, Omer Gottesman, George Konidaris
28 Jul 2025

A Theoretical Study of Neural Network Expressive Power via Manifold Topology
Jiachen Yao, Mayank Goswami, Chao Chen
21 Oct 2024

Quantum Kernel Methods under Scrutiny: A Benchmarking Study
Quantum Machine Intelligence (QMI), 2024
Jan Schnabel, M. Roth
06 Sep 2024

Hardness of Learning Neural Networks under the Manifold Hypothesis
B. Kiani, Jason Wang, Melanie Weber
03 Jun 2024

Better than classical? The subtle art of benchmarking quantum machine learning models
Joseph Bowles, Shahnawaz Ahmed, Maria Schuld
11 Mar 2024

Defining Neural Network Architecture through Polytope Structures of Dataset
Sangmin Lee, Abbas Mammadov, Jong Chul Ye
04 Feb 2024

Upper and lower bounds for the Lipschitz constant of random neural networks
Paul Geuchen, Dominik Stöger, Felix Voigtlaender
02 Nov 2023

On the Disconnect Between Theory and Practice of Neural Networks: Limits of the NTK Perspective
Jonathan Wenger, Felix Dangel, Agustinus Kristiadi
29 Sep 2023

Deep neural networks architectures from the perspective of manifold learning
German Magai
06 Jun 2023

Data Representations' Study of Latent Image Manifolds
International Conference on Machine Learning (ICML), 2023
Ilya Kaufman, Omri Azencot
31 May 2023

Depth Degeneracy in Neural Networks: Vanishing Angles in Fully Connected ReLU Networks on Initialization
Journal of Machine Learning Research (JMLR), 2023
Cameron Jakub, Mihai Nica
20 Feb 2023

Topological Learning in Multi-Class Data Sets
Physical Review E (PRE), 2023
Christopher H. Griffin, Trevor K. Karn, Benjamin Apple
23 Jan 2023

On the Geometry of Reinforcement Learning in Continuous State and Action Spaces
Saket Tiwari, Omer Gottesman, George Konidaris
29 Dec 2022

Effects of Data Geometry in Early Deep Learning
Neural Information Processing Systems (NeurIPS), 2022
Saket Tiwari, George Konidaris
29 Dec 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
International Conference on Learning Representations (ICLR), 2022
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
29 Sep 2022

On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
Frontiers of Information Technology & Electronic Engineering (FITEE), 2022
Yi Ma, Doris Y. Tsao, H. Shum
11 Jul 2022

Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime
Neural Information Processing Systems (NeurIPS), 2022
Benjamin Bowman, Guido Montúfar
06 Jun 2022

The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization
Neural Information Processing Systems (NeurIPS), 2022
Mufan Li, Mihai Nica, Daniel M. Roy
06 Jun 2022

Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
Neural Information Processing Systems (NeurIPS), 2022
Simone Bombari, Mohammad Hossein Amani, Marco Mondelli
20 May 2022

Topology and geometry of data manifold in deep learning
German Magai, A. Ayzenberg
19 Apr 2022

Fitting an immersed submanifold to data via Sussmann's orbit theorem
IEEE Conference on Decision and Control (CDC), 2022
Joshua Hanson, Maxim Raginsky
03 Apr 2022

Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers
International Conference on Learning Representations (ICLR), 2022
Guodong Zhang, Aleksandar Botev, James Martens
15 Mar 2022

Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
International Conference on Machine Learning (ICML), 2022
Mariia Seleznova, Gitta Kutyniok
01 Feb 2022

A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs
Ido Nachum, Jan Hązła, Michael C. Gastpar, Anatoly Khina
03 Nov 2021

Deep Networks Provably Classify Data on Curves
Neural Information Processing Systems (NeurIPS), 2021
Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright
29 Jul 2021

Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
Neural Information Processing Systems (NeurIPS), 2021
Dominik Stöger, Mahdi Soltanolkotabi
28 Jun 2021

The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization
Neural Information Processing Systems (NeurIPS), 2021
Mufan Li, Mihai Nica, Daniel M. Roy
07 Jun 2021

Convergence and Implicit Bias of Gradient Flow on Overparametrized Linear Networks
Hancheng Min, Salma Tarmoun, René Vidal, Enrique Mallada
13 May 2021

A Geometric Analysis of Neural Collapse with Unconstrained Features
Neural Information Processing Systems (NeurIPS), 2021
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
06 May 2021

Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements
Journal of Machine Learning Research (JMLR), 2021
Tian Tong, Cong Ma, Ashley Prater-Bennette, Erin E. Tripp, Yuejie Chi
29 Apr 2021

Generalized Approach to Matched Filtering using Neural Networks
Jingkai Yan, Mariam Avagyan, R. Colgan, D. Veske, I. Bartos, John N. Wright, Z. Márka, S. Márka
08 Apr 2021

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
International Conference on Machine Learning (ICML), 2021
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar
21 Dec 2020