Phase diagram for two-layer ReLU neural networks at infinite-width limit

15 July 2020
Yaoyu Zhang, Zhi-Qin John Xu, Zheng Ma, Tao Luo

Papers citing "Phase diagram for two-layer ReLU neural networks at infinite-width limit"

16 of 16 citing papers shown
  1. From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks
     Clémentine Dominé, Nicolas Anguita, A. Proca, Lukas Braun, D. Kunin, P. Mediano, Andrew M. Saxe
     22 Sep 2024

  2. Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion
     Zhiwei Bai, Jiajie Zhao, Tao Luo
     22 May 2024 · AI4CE

  3. Efficient and Flexible Method for Reducing Moderate-size Deep Neural Networks with Condensation
     Tianyi Chen, Zhi-Qin John Xu
     02 May 2024

  4. Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations
     Akshay Kumar, Jarvis Haupt
     12 Mar 2024 · ODL

  5. Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding
     Zhengqing Wu, Berfin Simsek, Francois Ged
     08 Feb 2024 · ODL

  6. Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
     Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, V. Cevher
     31 Oct 2023

  7. Loss Spike in Training Neural Networks
     Zhongwang Zhang, Z. Xu
     20 May 2023

  8. Phase Diagram of Initial Condensation for Two-layer Neural Networks
     Zheng Chen, Yuqing Li, Yaoyu Zhang, Zhaoguang Zhou, Z. Xu
     12 Mar 2023 · MLT, AI4CE

  9. Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning
     François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang
     02 Feb 2023

  10. Linear Stability Hypothesis and Rank Stratification for Nonlinear Models
      Tao Luo, Zhongwang Zhang, Leyang Zhang, Zhiwei Bai, Yaoyu Zhang, Z. Xu
      21 Nov 2022

  11. A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks
      Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
      28 Oct 2022 · MLT

  12. Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
      Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
      15 Sep 2022

  13. On Feature Learning in Neural Networks with Global Convergence Guarantees
      Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
      22 Apr 2022 · MLT

  14. Overview frequency principle/spectral bias in deep learning
      Z. Xu, Tao Luo, Yaoyu Zhang
      19 Jan 2022 · FaML

  15. Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
      Tao Luo, Yuqing Li, Zhongwang Zhang, Yaoyu Zhang, Z. Xu
      30 Nov 2021

  16. Toward Understanding Convolutional Neural Networks from Volterra Convolution Perspective
      Tenghui Li, Guoxu Zhou, Yuning Qiu, Qianchuan Zhao
      19 Oct 2021 · FAtt