Generalisation error in learning with random features and the hidden manifold model
International Conference on Machine Learning (ICML), 2020
21 February 2020 · arXiv:2002.09339
Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová

Papers citing "Generalisation error in learning with random features and the hidden manifold model"

Showing 50 of 118 citing papers (page 1 of 3).
Condition Numbers and Eigenvalue Spectra of Shallow Networks on Spheres
Xinliang Liu, Tong Mao, Jinchao Xu
04 Nov 2025

One-Bit Quantization for Random Features Models
D. Akhtiamov, Reza Ghane, B. Hassibi
17 Oct 2025

Gaussian Equivalence for Self-Attention: Asymptotic Spectral Analysis of Attention Matrix
Tomohiro Hayase, B. Collins, Ryo Karakida
08 Oct 2025

Statistical mechanics of extensive-width Bayesian neural networks near interpolation
Jean Barbier, Francesco Camilli, Minh-Toan Nguyen, Mauro Pastore, Rudy Skerk
30 May 2025
Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime
Annual Conference Computational Learning Theory (COLT), 2025
Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier
06 May 2025

Nonlinear dynamics of localization in neural receptive fields
Neural Information Processing Systems (NeurIPS), 2025
Leon Lufkin, Andrew M. Saxe, Erin Grant
28 Jan 2025

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025

A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Kasimir Tanner, Matteo Vilucchio, Bruno Loureiro, Florent Krzakala
31 Dec 2024

Statistical Inference in Classification of High-dimensional Gaussian Mixture
Hanwen Huang, Peng Zeng
25 Oct 2024
A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue M. Lu, Bruno Loureiro
24 Oct 2024

Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-dimensional Tokens
Physical Review X (PRX), 2024
Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, Lenka Zdeborová
24 Oct 2024

Input-Label Correlation Governs a Linear-to-Nonlinear Transition in Random Features under Spiked Covariance
Samet Demir, Zafer Dogan
30 Sep 2024

Statistical Mechanics of Min-Max Problems
Yuma Ichikawa, Koji Hukushima
09 Sep 2024

Risk and cross validation in ridge regression with correlated samples
Alexander B. Atanasov, Jacob A. Zavatone-Veth, Cengiz Pehlevan
08 Aug 2024
Random Features Hopfield Networks generalize retrieval to previously unseen examples
Silvio Kalaj, Clarissa Lauditi, Gabriele Perugini, Carlo Lucibello, Enrico M. Malatesta, Matteo Negri
08 Jul 2024

From Spikes to Heavy Tails: Unveiling the Spectral Evolution of Neural Networks
Vignesh Kothapalli, Tianyu Pang, Shenyang Deng, Zongmin Liu, Yaoqing Yang
07 Jun 2024

Asymptotic theory of in-context learning by linear attention
Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, Cengiz Pehlevan
20 May 2024

Restoring balance: principled under/oversampling of data for optimal classification
International Conference on Machine Learning (ICML), 2024
Emanuele Loffredo, Mauro Pastore, Simona Cocco, R. Monasson
15 May 2024

A replica analysis of under-bagging
Takashi Takahashi
15 Apr 2024
Sliding down the stairs: how correlated latent variables accelerate learning with neural networks
Lorenzo Bardone, Sebastian Goldt
12 Apr 2024

Asymptotics of Learning with Deep Structured (Random) Features
Dominik Schröder, Daniil Dmitriev, Hugo Cui, Bruno Loureiro
21 Feb 2024

Asymptotics of feature learning in two-layer networks after one gradient-step
Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue M. Lu, Lenka Zdeborová, Bruno Loureiro
07 Feb 2024

Analyzing the Neural Tangent Kernel of Periodically Activated Coordinate Networks
Hemanth Saratchandran, Shin-Fang Chng, Simon Lucey
07 Feb 2024

The twin peaks of learning neural networks
Elizaveta Demyanenko, Christoph Feinauer, Enrico M. Malatesta, Luca Saglietti
23 Jan 2024
Generalization in Kernel Regression Under Realistic Assumptions
Daniel Barzilai, Ohad Shamir
26 Dec 2023

Learning from higher-order statistics, efficiently: hypothesis tests, random features, and neural networks
Eszter Székely, Lorenzo Bardone, Federica Gerace, Sebastian Goldt
22 Dec 2023

More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory
James B. Simon, Dhruva Karkada, Nikhil Ghosh, Mikhail Belkin
24 Nov 2023

Benchmarking the optimization optical machines with the planted solutions
N. Stroev, N. Berloff, Nir Davidson
12 Nov 2023

Universality for the global spectrum of random inner-product kernel matrices in the polynomial regime
S. Dubova, Yue M. Lu, Benjamin McKenna, H. Yau
27 Oct 2023
Orthogonal Random Features: Explicit Forms and Sharp Inequalities
N. Demni, Hachem Kadri
11 Oct 2023

Statistical physics, Bayesian inference and neural information processing
Journal of Statistical Mechanics: Theory and Experiment (J. Stat. Mech.), 2023
Ehtesamul Azim, Dongjie Wang, Berfin Şimşek, Yanjie Fu
29 Sep 2023

Optimal Nonlinearities Improve Generalization Performance of Random Features
Asian Conference on Machine Learning (ACML), 2023
Samet Demir, Zafer Dogan
28 Sep 2023

Six Lectures on Linearized Neural Networks
Journal of Statistical Mechanics: Theory and Experiment (J. Stat. Mech.), 2023
Theodor Misiakiewicz, Andrea Montanari
25 Aug 2023

Local Kernel Renormalization as a mechanism for feature learning in overparametrized Convolutional Neural Networks
Nature Communications (Nat. Commun.), 2023
R. Aiudi, R. Pacelli, A. Vezzani, R. Burioni, P. Rotondo
21 Jul 2023

Fundamental limits of overparametrized shallow neural networks for supervised learning
Francesco Camilli, D. Tieplova, Jean Barbier
11 Jul 2023
The Underlying Scaling Laws and Universal Statistical Structure of Complex Datasets
Noam Levi, Yaron Oz
26 Jun 2023

The RL Perceptron: Generalisation Dynamics of Policy Learning in High Dimensions
Physical Review X (PRX), 2023
Nishil Patel, Sebastian Lee, Stefano Sarao Mannelli, Sebastian Goldt, Andrew Saxe
17 Jun 2023

Gibbs-Based Information Criteria and the Over-Parameterized Regime
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Haobo Chen, Yuheng Bu, Greg Wornell
08 Jun 2023

How Two-Layer Neural Networks Learn, One (Giant) Step at a Time
Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
29 May 2023

Least Squares Regression Can Exhibit Under-Parameterized Double Descent
Neural Information Processing Systems (NeurIPS), 2023
Xinyue Li, Rishi Sonthalia
24 May 2023
High-dimensional Asymptotics of Denoising Autoencoders
Neural Information Processing Systems (NeurIPS), 2023
Hugo Cui, Lenka Zdeborová
18 May 2023

Mapping of attention mechanisms to a generalized Potts model
Physical Review Research (Phys. Rev. Res.), 2023
Riccardo Rende, Federica Gerace, Alessandro Laio, Sebastian Goldt
14 Apr 2023

Classification of Heavy-tailed Features in High Dimensions: a Superstatistical Approach
Neural Information Processing Systems (NeurIPS), 2023
Urte Adomaityte, G. Sicuro, P. Vivo
06 Apr 2023

Storage and Learning phase transitions in the Random-Features Hopfield Model
Physical Review Letters (PRL), 2023
M. Negri, Clarissa Lauditi, Gabriele Perugini, Carlo Lucibello, Enrico M. Malatesta
29 Mar 2023

Learning curves for deep structured Gaussian feature models
Neural Information Processing Systems (NeurIPS), 2023
Jacob A. Zavatone-Veth, Cengiz Pehlevan
01 Mar 2023
Universality laws for Gaussian mixtures in generalized linear models
Neural Information Processing Systems (NeurIPS), 2023
Yatin Dandi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro, Lenka Zdeborová
17 Feb 2023

Are Gaussian data all you need? Extents and limits of universality in high-dimensional generalized linear estimation
Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
17 Feb 2023

Spatially heterogeneous learning by a deep student machine
Physical Review Research (Phys. Rev. Res.), 2023
H. Yoshino
15 Feb 2023

Precise Asymptotic Analysis of Deep Random Feature Models
Annual Conference Computational Learning Theory (COLT), 2023
David Bosch, Ashkan Panahi, B. Hassibi
13 Feb 2023

Deterministic equivalent and error universality of deep random features learning
International Conference on Machine Learning (ICML), 2023
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
01 Feb 2023