When Hardness of Approximation Meets Hardness of Learning

18 August 2020
Eran Malach, Shai Shalev-Shwartz
arXiv: 2008.08059

Papers citing "When Hardness of Approximation Meets Hardness of Learning"

10 papers shown
From Reasoning to Super-Intelligence: A Search-Theoretic Perspective
Shai Shalev-Shwartz, Amnon Shashua
LRM
13 Jul 2025
From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency
International Conference on Learning Representations (ICLR), 2024
Kaiyue Wen, Huaqing Zhang, Hongzhou Lin, Jingzhao Zhang
MoE, LRM
07 Oct 2024
Why Do You Grok? A Theoretical Analysis of Grokking Modular Addition
Mohamad Amin Mohamadi, Zhiyuan Li, Lei Wu, Danica J. Sutherland
17 Jul 2024
Auto-Regressive Next-Token Predictors are Universal Learners
International Conference on Machine Learning (ICML), 2023
Eran Malach
LRM
13 Sep 2023
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
International Conference on Machine Learning (ICML), 2023
Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
13 Jul 2023
Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Neural Information Processing Systems (NeurIPS), 2022
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
18 Jul 2022
Learning a Single Neuron for Non-monotonic Activation Functions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Lei Wu
MLT
16 Feb 2022
A spectral-based analysis of the separation between two-layer neural networks and linear methods
Journal of Machine Learning Research (JMLR), 2021
Lei Wu, Jihao Long
10 Aug 2021
The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning
Aravind Gollakota, Sushrut Karmalkar, Adam R. Klivans
22 Oct 2020
Computational Separation Between Convolutional and Fully-Connected Networks
International Conference on Learning Representations (ICLR), 2020
Eran Malach, Shai Shalev-Shwartz
03 Oct 2020