Random Feature Amplification: Feature Learning and Generalization in Neural Networks

Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · 15 February 2022 · MLT

Papers citing "Random Feature Amplification: Feature Learning and Generalization in Neural Networks" (26 papers shown)
• Online Federation For Mixtures of Proprietary Agents with Black-Box Encoders · Xuwei Yang, Fatemeh Tavakoli, D. B. Emerson, Anastasis Kratsios · 30 Apr 2025 [FedML]
• Simplicity bias and optimization threshold in two-layer ReLU networks · Etienne Boursier, Nicolas Flammarion · 03 Oct 2024
• Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning · D. Kunin, Allan Raventós, Clémentine Dominé, Feng Chen, David Klindt, Andrew M. Saxe, Surya Ganguli · 10 Jun 2024 [MLT]
• Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data · Nikita Tsoy, Nikola Konstantinov · 27 May 2024
• Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent · Yiwen Kou, Zixiang Chen, Quanquan Gu, Sham Kakade · 18 Apr 2024
• Feature emergence via margin maximization: case studies in algebraic tasks · Depen Morwani, Benjamin L. Edelman, Costin-Andrei Oncescu, Rosie Zhao, Sham Kakade · 13 Nov 2023
• Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data · Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu · 04 Oct 2023 [MLT]
• SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the XOR problem · Margalit Glasgow · 26 Sep 2023 [MLT]
• Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck · Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang · 07 Sep 2023
• The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning · Nikhil Ghosh, Spencer Frei, Wooseok Ha, Ting Yu · 06 Aug 2023 [MLT]
• Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks · Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai · 13 Jul 2023
• On the Role of Attention in Prompt-tuning · Samet Oymak, A. S. Rawat, Mahdi Soltanolkotabi, Christos Thrampoulidis · 06 Jun 2023 [MLT, LRM]
• Generalization and Stability of Interpolating Neural Networks with Minimal Width · Hossein Taheri, Christos Thrampoulidis · 18 Feb 2023
• Do Neural Networks Generalize from Self-Averaging Sub-classifiers in the Same Way As Adaptive Boosting? · Michael Sun, Peter Chatain · 14 Feb 2023 [AI4CE]
• Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning · François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang · 02 Feb 2023
• Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data · Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro, Wei Hu · 13 Oct 2022 [MLT]
• Neural Networks Efficiently Learn Low-Dimensional Representations with SGD · Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu · 29 Sep 2022 [MLT]
• Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit · Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang · 18 Jul 2022
• Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence · Margalit Glasgow, Colin Wei, Mary Wootters, Tengyu Ma · 16 Jun 2022
• Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias · Navid Ardeshir, Daniel J. Hsu, Clayton Sanford · 10 Jun 2022 [CML, AI4CE]
• High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation · Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang · 03 May 2022 [MLT]
• Optimization-Based Separations for Neural Networks · Itay Safran, Jason D. Lee · 04 Dec 2021
• Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization · Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu · 25 Aug 2021 [MLT, AI4CE]
• On the Power of Differentiable Learning versus PAC and SQ Learning · Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro · 09 Aug 2021 [MLT]
• Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise · Spencer Frei, Yuan Cao, Quanquan Gu · 04 Jan 2021 [FedML, MLT]
• Benefits of depth in neural networks · Matus Telgarsky · 14 Feb 2016