On the Power and Limitations of Random Features for Understanding Neural Networks

Gilad Yehudai, Ohad Shamir (1 April 2019) [MLT]
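
For context on the technique in the title, below is a minimal sketch of the random features regime the paper analyzes: draw a random first layer, freeze it, and train only the linear output layer. The kernel choice (RBF via random Fourier features in the style of Rahimi and Recht), feature count, and toy target here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X (n, d) to features whose inner products approximate
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density; phases uniform.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0])                          # toy target for illustration

Phi = random_fourier_features(X)             # fixed random "hidden layer"
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # train only the output layer
print("train MSE:", np.mean((Phi @ w - y) ** 2))
```

Only `w` is learned; the random layer `W, b` never moves. The paper asks what such models can and cannot capture about fully trained networks, where the first layer is trained as well.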

Papers citing "On the Power and Limitations of Random Features for Understanding Neural Networks"

46 papers shown

Tensor Sketch: Fast and Scalable Polynomial Kernel Approximation
Ninh Pham, Rasmus Pagh (13 May 2025)

Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
Ziang Chen, Rong Ge (10 Jan 2025) [MLT]

Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression
Aku Kammonen, Anamika Pandey, E. von Schwerin, Raúl Tempone (08 Oct 2024)

Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Fanghui Liu, L. Dadi, V. Cevher (29 Apr 2024)

Analysis of the expected $L_2$ error of an over-parametrized deep neural network estimate learned by gradient descent without regularization
Selina Drews, Michael Kohler (24 Nov 2023)

Polynomially Over-Parameterized Convolutional Neural Networks Contain Structured Strong Winning Lottery Tickets
A. D. Cunha, Francesco d’Amore, Emanuele Natale (16 Nov 2023) [MLT]

Gradient-Based Feature Learning under Structured Data
Alireza Mousavi-Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu (07 Sep 2023) [MLT]

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee (11 May 2023) [MLT]

Online Learning for the Random Feature Model in the Student-Teacher Framework
Roman Worschech, B. Rosenow (24 Mar 2023)

Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron
Weihang Xu, S. Du (20 Feb 2023)

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen (12 Oct 2022) [AAML]

Annihilation of Spurious Minima in Two-Layer ReLU Networks
Yossi Arjevani, M. Field (12 Oct 2022)

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu (29 Sep 2022) [MLT]

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang (18 Jul 2022)

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi (30 Jun 2022) [SSL, MLT]

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart (24 Jun 2022) [MLT]

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Navid Ardeshir, Daniel J. Hsu, Clayton Sanford (10 Jun 2022) [CML, AI4CE]

Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials
Eshaan Nichani, Yunzhi Bai, Jason D. Lee (08 Jun 2022)

Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Promit Ghosal, Srinath Mahankali, Yihang Sun (24 May 2022) [MLT]

Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity
Shiyun Xu, Zhiqi Bu, Pratik Chaudhari, Ian Barnett (25 Feb 2022)

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett (15 Feb 2022) [MLT]

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher (02 Nov 2021)

Provable Regret Bounds for Deep Online Learning and Control
Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan (15 Oct 2021)

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong (12 Oct 2021) [UQCV, MLT]

ReLU Regression with Massart Noise
Ilias Diakonikolas, Jongho Park, Christos Tzamos (10 Sep 2021)

On the Power of Differentiable Learning versus PAC and SQ Learning
Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro (09 Aug 2021) [MLT]

Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani, M. Field (21 Jul 2021)

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham (11 Jun 2021)

Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini, Alessandro Favero, Mario Geiger, M. Wyart (06 May 2021) [OOD]

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (31 Jan 2021)

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li (17 Dec 2020) [FedML]

Deep Learning is Singular, and That's Good
Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella (22 Oct 2020) [UQCV]

Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization?
Yaniv Blumenfeld, D. Gilboa, Daniel Soudry (02 Jul 2020) [ODL]

Approximation Schemes for ReLU Regression
Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi (26 May 2020)

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang (25 May 2020)

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (23 Apr 2020) [BDL]

Uncertainty Quantification for Sparse Deep Learning
Yuexi Wang, Veronika Rockova (26 Feb 2020) [BDL, UQCV]

An Optimization and Generalization Analysis for Max-Pooling Networks
Alon Brutzkus, Amir Globerson (22 Feb 2020) [MLT, AI4CE]

Learning Parities with Neural Networks
Amit Daniely, Eran Malach (18 Feb 2020)

A closer look at the approximation capabilities of neural networks
Kai Fong Ernest Chong (16 Feb 2020)

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (03 Feb 2020)

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, J. Lee (03 Oct 2019)

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari (27 Apr 2019) [MLT]

Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, J. Lee, Qiang Liu, Tengyu Ma (12 Oct 2018)

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen (14 Aug 2018) [MLT]

Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with $\ell^1$ and $\ell^0$ Controls
Jason M. Klusowski, Andrew R. Barron (26 Jul 2016)