A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks. International Conference on Machine Learning (ICML), 2023.

Spectral Evolution and Invariance in Linear-width Neural Networks. Neural Information Processing Systems (NeurIPS), 2022.

Overparameterized Random Feature Regression with Nearly Orthogonal Data. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. Neural Information Processing Systems (NeurIPS), 2022.