ResearchTrend.AI

arXiv: 2012.15110
Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training

30 December 2020
Mario Geiger
Leonardo Petrini
Matthieu Wyart

Papers citing "Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training"

11 papers
Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime (COLT 2025)
Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier
06 May 2025
Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation (ICLR 2024)
Satoki Ishikawa, Rio Yokota, Ryo Karakida
04 Nov 2024
Building a Multivariate Time Series Benchmarking Datasets Inspired by Natural Language Processing (NLP)
Mohammad Asif Ibna Mustafa, Ferdinand Heinrich
14 Oct 2024
On the Parameterization of Second-Order Optimization Effective Towards the Infinite Width
Satoki Ishikawa, Ryo Karakida
19 Dec 2023
Fundamental limits of overparametrized shallow neural networks for supervised learning
Francesco Camilli, D. Tieplova, Jean Barbier
11 Jul 2023
Modularity Trumps Invariance for Compositional Robustness
I. Mason, Anirban Sarkar, Tomotake Sasaki, Xavier Boix
15 Jun 2023
Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width (NeurIPS 2023)
Dayal Singh Kalra, M. Barkeshli
23 Feb 2023
Learning through atypical "phase transitions" in overparameterized neural networks
Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina
01 Oct 2021
Relative stability toward diffeomorphisms indicates performance in deep nets (NeurIPS 2021)
Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
06 May 2021
Double-descent curves in neural networks: a new perspective using Gaussian processes (AAAI 2021)
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis
14 Feb 2021
Tilting the playing field: Dynamical loss functions for machine learning (ICML 2021)
M. Ruíz-García, Ge Zhang, S. Schoenholz, Andrea J. Liu
07 Feb 2021