Learning Two-layer Neural Networks with Symmetric Inputs (arXiv: 1810.06793)
16 October 2018
Authors: Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang
Topics: OOD, MLT
Papers citing "Learning Two-layer Neural Networks with Symmetric Inputs" (10 of 10 papers shown):
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials (Ilias Diakonikolas, D. Kane; 24 Jul 2023)
- A faster and simpler algorithm for learning shallow networks (Sitan Chen, Shyam Narayanan; 24 Jul 2023)
- Global Convergence of SGD On Two Layer Neural Nets (Pulkit Gopalani, Anirbit Mukherjee; 20 Oct 2022)
- Efficiently Learning Any One Hidden Layer ReLU Network From Queries (Sitan Chen, Adam R. Klivans, Raghu Meka; MLAU, MLT; 08 Nov 2021)
- Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning (Zeyuan Allen-Zhu, Yuanzhi Li; FedML; 17 Dec 2020)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models (Ilias Diakonikolas, D. Kane; 14 Dec 2020)
- Learning Deep ReLU Networks Is Fixed-Parameter Tractable (Sitan Chen, Adam R. Klivans, Raghu Meka; 28 Sep 2020)
- From Boltzmann Machines to Neural Networks and Back Again (Surbhi Goel, Adam R. Klivans, Frederic Koehler; 25 Jul 2020)
- What Can ResNet Learn Efficiently, Going Beyond Kernels? (Zeyuan Allen-Zhu, Yuanzhi Li; 24 May 2019)
- Learning One-hidden-layer ReLU Networks via Gradient Descent (Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu; MLT; 20 Jun 2018)