Reliably Learning the ReLU in Polynomial Time (arXiv:1611.10258)
30 November 2016
Surbhi Goel, Varun Kanade, Adam R. Klivans, J. Thaler

Papers citing "Reliably Learning the ReLU in Polynomial Time"

34 papers shown, newest first.

  • Importance Sampling for Nonlinear Models. Prakash Palanivelu Rajmohan, Fred Roosta. 18 May 2025.
  • Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials. Ilias Diakonikolas, D. Kane. 24 Jul 2023.
  • A faster and simpler algorithm for learning shallow networks. Sitan Chen, Shyam Narayanan. 24 Jul 2023.
  • Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Amit Daniely, Nathan Srebro, Gal Vardi. 15 Feb 2023.
  • Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals. Ilias Diakonikolas, D. Kane, Lisheng Ren. 13 Feb 2023.
  • Transformers Learn Shortcuts to Automata. Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang. 19 Oct 2022.
  • Loss Minimization through the Lens of Outcome Indistinguishability. Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder. 16 Oct 2022.
  • Neural Networks can Learn Representations with Gradient Descent. Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi. 30 Jun 2022.
  • Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. 04 Apr 2022.
  • Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks. Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka. 10 Feb 2022.
  • Neural networks with linear threshold activations: structure and algorithms. Sammy Khalife, Hongyu Cheng, A. Basu. 15 Nov 2021.
  • Exponentially Many Local Minima in Quantum Neural Networks. Xuchen You, Xiaodi Wu. 06 Oct 2021.
  • ReLU Regression with Massart Noise. Ilias Diakonikolas, Jongho Park, Christos Tzamos. 10 Sep 2021.
  • Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. Ilias Diakonikolas, D. Kane. 14 Dec 2020.
  • Learning Graph Neural Networks with Approximate Gradient Descent. Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong. 07 Dec 2020.
  • Learning Deep ReLU Networks Is Fixed-Parameter Tractable. Sitan Chen, Adam R. Klivans, Raghu Meka. 28 Sep 2020.
  • From Boltzmann Machines to Neural Networks and Back Again. Surbhi Goel, Adam R. Klivans, Frederic Koehler. 25 Jul 2020.
  • Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals. Ilias Diakonikolas, D. Kane, Nikos Zarifis. 29 Jun 2020.
  • Approximation Schemes for ReLU Regression. Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi. 26 May 2020.
  • Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks. Mert Pilanci, Tolga Ergen. 24 Feb 2020.
  • Convex Relaxations of Convolutional Neural Nets. Burak Bartan, Mert Pilanci. 31 Dec 2018.
  • Deep Frank-Wolfe For Neural Network Optimization. Leonard Berrada, Andrew Zisserman, M. P. Kumar. 19 Nov 2018.
  • Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018.
  • Learning Two-layer Neural Networks with Symmetric Inputs. Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang. 16 Oct 2018.
  • Learning One-hidden-layer ReLU Networks via Gradient Descent. Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu. 20 Jun 2018.
  • How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network? S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh. 21 May 2018.
  • Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps. S. Du, Surbhi Goel. 20 May 2018.
  • Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds. Santosh Vempala, John Wilmes. 07 May 2018.
  • Learning One Convolutional Layer with Overlapping Patches. Surbhi Goel, Adam R. Klivans, Raghu Meka. 07 Feb 2018.
  • Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. Mahdi Soltanolkotabi, Adel Javanmard, J. Lee. 16 Jul 2017.
  • Recovery Guarantees for One-hidden-layer Neural Networks. Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon. 10 Jun 2017.
  • Learning ReLUs via Gradient Descent. Mahdi Soltanolkotabi. 10 May 2017.
  • Convergence Results for Neural Networks via Electrodynamics. Rina Panigrahy, Sushant Sachdeva, Qiuyi Zhang. 01 Feb 2017.
  • Understanding Deep Neural Networks with Rectified Linear Units. R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee. 04 Nov 2016.