Leveraging the two timescale regime to demonstrate convergence of neural networks

19 April 2023
P. Marion, Raphael Berthier

Papers citing "Leveraging the two timescale regime to demonstrate convergence of neural networks"

5 / 5 papers shown
Title
Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime
Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime
Raphael Barboni
Gabriel Peyré
François-Xavier Vialard
MLT
27
0
0
25 Apr 2025
Attention layers provably solve single-location regression
Attention layers provably solve single-location regression
P. Marion
Raphael Berthier
Gérard Biau
Claire Boyer
46
2
0
02 Oct 2024
Implicit Diffusion: Efficient Optimization through Stochastic Sampling
Implicit Diffusion: Efficient Optimization through Stochastic Sampling
Pierre Marion
Anna Korba
Peter Bartlett
Mathieu Blondel
Valentin De Bortoli
Arnaud Doucet
Felipe Llinares-López
Courtney Paquette
Quentin Berthet
68
11
0
08 Feb 2024
SGD learning on neural networks: leap complexity and saddle-to-saddle
  dynamics
SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Emmanuel Abbe
Enric Boix-Adserà
Theodor Misiakiewicz
FedML
MLT
76
72
0
21 Feb 2023
On the Effective Number of Linear Regions in Shallow Univariate ReLU
  Networks: Convergence Guarantees and Implicit Bias
On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
Itay Safran
Gal Vardi
Jason D. Lee
MLT
41
23
0
18 May 2022
1