ResearchTrend.AI

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
31 May 2023 (arXiv:2305.19510)

Papers citing "Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape"

6 of 6 papers shown
Derivation of effective gradient flow equations and dynamical truncation of training data in Deep Learning
Thomas Chen
13 Jan 2025
Gradient flow in parameter space is equivalent to linear interpolation in output space
Thomas Chen, Patrícia Muñoz Ewald
02 Aug 2024
Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension
Kedar Karhadkar, Michael Murray, Guido Montúfar
23 May 2024
Continual Learning with Weight Interpolation
Jędrzej Kozal, Jan Wasilewski, Bartosz Krawczyk, Michał Woźniak
05 Apr 2024
The Real Tropical Geometry of Neural Networks
Marie-Charlotte Brandenburg, Georg Loho, Guido Montúfar
18 Mar 2024
Functional dimension of feedforward ReLU neural networks
J. E. Grigsby, Kathryn A. Lindsey, R. Meyerhoff, Chen-Chun Wu
08 Sep 2022