Sliding down the stairs: how correlated latent variables accelerate learning with neural networks
Lorenzo Bardone, Sebastian Goldt
12 April 2024 · arXiv:2404.08602

Papers citing "Sliding down the stairs: how correlated latent variables accelerate learning with neural networks" (9 papers)

From Information to Generative Exponent: Learning Rate Induces Phase Transitions in SGD
Konstantinos Christopher Tsiolis, Alireza Mousavi-Hosseini, Murat A. Erdogdu
23 Oct 2025 · MLT

Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures
Francesco Cagnetta, Alessandro Favero, Antonio Sclocchi, Matthieu Wyart
11 May 2025

Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions
Fabiola Ricci, Lorenzo Bardone, Sebastian Goldt
31 Mar 2025 · OOD

A distributional simplicity bias in the learning dynamics of transformers (NeurIPS 2024)
Riccardo Rende, Federica Gerace, Alessandro Laio, Sebastian Goldt
17 Feb 2025

A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities (AISTATS 2024)
Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue M. Lu, Bruno Loureiro
24 Oct 2024 · MLT

How Feature Learning Can Improve Neural Scaling Laws (ICLR 2024)
Blake Bordelon, Alexander B. Atanasov, Cengiz Pehlevan
26 Sep 2024

How transformers learn structured data: insights from hierarchical filtering
Jerome Garnier-Brun, Marc Mézard, Emanuele Moscato, Luca Saglietti
27 Aug 2024

Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks
Eszter Székely, Lorenzo Bardone, Federica Gerace, Sebastian Goldt
22 Dec 2023

Learning time-scales in two-layers neural networks (FoCM 2023)
Raphael Berthier, Andrea Montanari, Kangjie Zhou
28 Feb 2023