ResearchTrend.AI

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
arXiv:2110.11805
22 October 2021
A. Bodin, N. Macris

Papers citing "Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model"

14 papers shown
Towards understanding epoch-wise double descent in two-layer linear neural networks
Amanda Olmin, Fredrik Lindsten
13 Jul 2024
Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise
Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu
31 May 2024
Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features
Rodrigo Veiga, Anastasia Remizova, Nicolas Macris
12 Feb 2024
A Dynamical Model of Neural Scaling Laws
Blake Bordelon, Alexander B. Atanasov, C. Pehlevan
02 Feb 2024
Understanding the Role of Optimization in Double Descent
Chris Liu, Jeffrey Flanigan
06 Dec 2023
Gradient flow on extensive-rank positive semi-definite matrix denoising
A. Bodin, N. Macris
16 Mar 2023
Learning time-scales in two-layers neural networks
Raphael Berthier, Andrea Montanari, Kangjie Zhou
28 Feb 2023
Deterministic equivalent and error universality of deep random features learning
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
01 Feb 2023
Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris
13 Dec 2022
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang
03 May 2022
High-dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models
Tengyuan Liang, Subhabrata Sen, Pragya Sur
09 Apr 2022
Generalizing similarity in noisy setups: the DIBS phenomenon
Nayara Fonseca, V. Guidetti
30 Jan 2022
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020