SGD Learns the Conjugate Kernel Class of the Network

27 February 2017
Amit Daniely
ArXiv | PDF | HTML

Papers citing "SGD Learns the Conjugate Kernel Class of the Network"

50 / 130 papers shown
On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces
Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan (09 Nov 2020)

Which Minimizer Does My Neural Network Converge To?
Manuel Nonnenmacher, David Reeb, Ingo Steinwart (04 Nov 2020) [ODL]

Towards a Unified Quadrature Framework for Large-Scale Kernel Machines
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (03 Nov 2020)

On Convergence and Generalization of Dropout Training
Poorya Mianjy, R. Arora (23 Oct 2020)

Deep Learning is Singular, and That's Good
Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella (22 Oct 2020) [UQCV]

Precise Statistical Analysis of Classification Accuracies for Adversarial Training
Adel Javanmard, Mahdi Soltanolkotabi (21 Oct 2020) [AAML]

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
Jun-Kun Wang, Chi-Heng Lin, Jacob D. Abernethy (04 Oct 2020)

Computational Separation Between Convolutional and Fully-Connected Networks
Eran Malach, Shai Shalev-Shwartz (03 Oct 2020)

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka (28 Sep 2020)

Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't
E. Weinan, Chao Ma, Stephan Wojtowytsch, Lei Wu (22 Sep 2020) [AI4CE]

Generalized Leverage Score Sampling for Neural Networks
Jason D. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu (21 Sep 2020)

Understanding Boolean Function Learnability on Deep Neural Networks: PAC Learning Meets Neurosymbolic Models
Márcio Nicolau, Anderson R. Tavares, Zhiwei Zhang, Pedro H. C. Avelar, J. Flach, Luís C. Lamb, Moshe Y. Vardi (13 Sep 2020) [NAI]

Asymptotics of Wide Convolutional Neural Networks
Anders Andreassen, Ethan Dyer (19 Aug 2020)

How Powerful are Shallow Neural Networks with Bandlimited Random Weights?
Ming Li, Sho Sonoda, Feilong Cao, Yu Wang, Jiye Liang (19 Aug 2020)

Benign Overfitting and Noisy Features
Zhu Li, Weijie Su, Dino Sejdinovic (06 Aug 2020)

Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy
Zuyue Fu, Zhuoran Yang, Zhaoran Wang (02 Aug 2020)

Finite Versus Infinite Neural Networks: an Empirical Study
Jaehoon Lee, S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Narain Sohl-Dickstein (31 Jul 2020)

From deep to Shallow: Equivalent Forms of Deep Networks in Reproducing Kernel Krein Space and Indefinite Support Vector Machines
A. Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh (15 Jul 2020)

On the Global Optimality of Model-Agnostic Meta-Learning
Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang (23 Jun 2020)

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang (08 Jun 2020) [OOD, MLT]

Hardness of Learning Neural Networks with Natural Weights
Amit Daniely, Gal Vardi (05 Jun 2020)

The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks
Itay Safran, Gilad Yehudai, Ohad Shamir (01 Jun 2020)

Generalization Error of Generalized Linear Models in High Dimensions
M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher (01 May 2020) [AI4CE]

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (23 Apr 2020) [BDL]

Memorizing Gaussians with no over-parameterizaion via gradient decent on neural networks
Amit Daniely (28 Mar 2020) [VLM, MLT]

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Pritish Kamath, Omar Montasser, Nathan Srebro (09 Mar 2020)

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari (04 Mar 2020) [ODL]

On the Global Convergence of Training Deep Linear ResNets
Difan Zou, Philip M. Long, Quanquan Gu (02 Mar 2020)

Learning Parities with Neural Networks
Amit Daniely, Eran Malach (18 Feb 2020)

Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width
Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher (10 Feb 2020)

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale (04 Feb 2020) [AI4CE]

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (03 Feb 2020)

A Corrective View of Neural Networks: Representation, Memorization and Learning
Guy Bresler, Dheeraj M. Nagaraj (01 Feb 2020) [MLT]

Learning a Single Neuron with Gradient Methods
Gilad Yehudai, Ohad Shamir (15 Jan 2020) [MLT]

Disentangling Trainability and Generalization in Deep Neural Networks
Lechao Xiao, Jeffrey Pennington, S. Schoenholz (30 Dec 2019)

Neural Contextual Bandits with UCB-based Exploration
Dongruo Zhou, Lihong Li, Quanquan Gu (11 Nov 2019)

Learning Boolean Circuits with Neural Networks
Eran Malach, Shai Shalev-Shwartz (25 Oct 2019)

The Local Elasticity of Neural Networks
Hangfeng He, Weijie J. Su (15 Oct 2019)

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu (03 Oct 2019) [AAML]

Wider Networks Learn Better Features
D. Gilboa, Guy Gur-Ari (25 Sep 2019)

Asymptotics of Wide Networks from Feynman Diagrams
Ethan Dyer, Guy Gur-Ari (25 Sep 2019)

Neural Policy Gradient Methods: Global Optimality and Rates of Convergence
Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang (29 Aug 2019)

Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao (25 Aug 2019) [ODL]

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari (14 Aug 2019)

On Symmetry and Initialization for Neural Networks
Ido Nachum, Amir Yehudayoff (01 Jul 2019) [MLT]

ID3 Learns Juntas for Smoothed Product Distributions
Alon Brutzkus, Amit Daniely, Eran Malach (20 Jun 2019)

Convergence of Adversarial Training in Overparametrized Neural Networks
Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee (19 Jun 2019) [AAML]

Gradient Dynamics of Shallow Univariate ReLU Networks
Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, Joan Bruna (18 Jun 2019)

Approximation power of random neural networks
Bolton Bailey, Ziwei Ji, Matus Telgarsky, Ruicheng Xian (18 Jun 2019)

Kernel and Rich Regimes in Overparametrized Models
Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro (13 Jun 2019)