ResearchTrend.AI

Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem

8 March 2021
Francesca Mignacco, Pierfrancesco Urbani, Lenka Zdeborová

Papers citing "Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem"

20 / 20 papers shown
Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-dimensional Tokens
Physical Review X (PRX), 2024
Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, Lenka Zdeborová
24 Oct 2024
The Role of the Time-Dependent Hessian in High-Dimensional Optimization
Tony Bonnaire, Giulio Biroli, C. Cammarota
04 Mar 2024
Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features
Rodrigo Veiga, Anastasia Remizova, Nicolas Macris
12 Feb 2024
The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents
International Conference on Machine Learning (ICML), 2024
Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala
05 Feb 2024
The Local Landscape of Phase Retrieval Under Limited Samples
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2023
Kaizhao Liu, Zihao Wang, Lei Wu
26 Nov 2023
Grokking as the Transition from Lazy to Rich Training Dynamics
International Conference on Learning Representations (ICLR), 2023
Tanishq Kumar, Blake Bordelon, Samuel Gershman, Cengiz Pehlevan
09 Oct 2023
Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems
Scientific Reports (Sci Rep), 2023
Maria Chiara Angelini, A. Cavaliere, Raffaele Marino, F. Ricci-Tersenghi
11 Sep 2023
Stochastic Gradient Descent outperforms Gradient Descent in recovering a high-dimensional signal in a glassy energy landscape
Persia Jana Kamali, Pierfrancesco Urbani
09 Sep 2023
Escaping mediocrity: how two-layer networks learn hard generalized linear models with SGD
Luca Arnaboldi, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
29 May 2023
Catapult Dynamics and Phase Transitions in Quadratic Nets
Journal of Statistical Mechanics: Theory and Experiment (J. Stat. Mech.), 2023
David Meltzer, Min Chen, Sergii Strelchuk
18 Jan 2023
Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris
13 Dec 2022
Disordered Systems Insights on Computational Hardness
Journal of Statistical Mechanics: Theory and Experiment (JSTAT), 2022
D. Gamarnik, Cristopher Moore, Lenka Zdeborová
15 Oct 2022
Rigorous dynamical mean field theory for stochastic gradient descent methods
SIAM Journal on Mathematics of Data Science (SIMODS), 2022
Cédric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, Lenka Zdeborová
12 Oct 2022
Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Neural Information Processing Systems (NeurIPS), 2022
Blake Bordelon, Cengiz Pehlevan
19 May 2022
The effective noise of Stochastic Gradient Descent
Journal of Statistical Mechanics: Theory and Experiment (JSTAT), 2021
Francesca Mignacco, Pierfrancesco Urbani
20 Dec 2021
Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
Neural Information Processing Systems (NeurIPS), 2021
A. Bodin, N. Macris
22 Oct 2021
The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
Neural Computation (Neural Comput.), 2021
D. Kunin, Javier Sagastuy-Breña, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins
19 Jul 2021
On the Cryptographic Hardness of Learning Single Periodic Neurons
Neural Information Processing Systems (NeurIPS), 2021
M. Song, Ilias Zadik, Joan Bruna
20 Jun 2021
Learning Curves for SGD on Structured Features
Blake Bordelon, Cengiz Pehlevan
04 Jun 2021
Analytical Study of Momentum-Based Acceleration Methods in Paradigmatic High-Dimensional Non-Convex Problems
Neural Information Processing Systems (NeurIPS), 2021
Stefano Sarao Mannelli, Pierfrancesco Urbani
23 Feb 2021