Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

2 June 2022
Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion
    ODL

Papers citing "Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs"

14 of 14 papers shown
  • A distributional simplicity bias in the learning dynamics of transformers. Riccardo Rende, Federica Gerace, A. Laio, Sebastian Goldt. 17 Feb 2025.
  • Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence. Berfin Simsek, Amire Bendjeddou, Daniel Hsu. 13 Nov 2024.
  • Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations. Akshay Kumar, Jarvis D. Haupt. 12 Mar 2024. (ODL)
  • Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding. Zhengqing Wu, Berfin Simsek, Francois Ged. 08 Feb 2024. (ODL)
  • Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization. Hancheng Min, Enrique Mallada, René Vidal. 24 Jul 2023. (MLT)
  • Saddle-to-Saddle Dynamics in Diagonal Linear Networks. Scott Pesme, Nicolas Flammarion. 02 Apr 2023.
  • Penalising the biases in norm regularisation enforces sparsity. Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.
  • Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron. Weihang Xu, S. Du. 20 Feb 2023.
  • Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing. Jikai Jin, Zhiyuan Li, Kaifeng Lyu, S. Du, Jason D. Lee. 27 Jan 2023. (MLT)
  • Regression as Classification: Influence of Task Formulation on Neural Network Features. Lawrence Stewart, Francis R. Bach, Quentin Berthet, Jean-Philippe Vert. 10 Nov 2022.
  • Proximal Mean Field Learning in Shallow Neural Networks. Alexis M. H. Teter, Iman Nodozi, A. Halder. 25 Oct 2022. (FedML)
  • On the Implicit Bias in Deep-Learning Algorithms. Gal Vardi. 26 Aug 2022. (FedML, AI4CE)
  • A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. Mo Zhou, Rong Ge, Chi Jin. 04 Feb 2021.
  • Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach. Grant M. Rotskoff, Eric Vanden-Eijnden. 02 May 2018.