
Pruning is Optimal for Learning Sparse Features in High-Dimensions

12 June 2024
Nuri Mert Vural, Murat A. Erdogdu
MLT

Papers citing "Pruning is Optimal for Learning Sparse Features in High-Dimensions"

4 papers
From Information to Generative Exponent: Learning Rate Induces Phase Transitions in SGD
Konstantinos Christopher Tsiolis, Alireza Mousavi-Hosseini, Murat A. Erdogdu
MLT · 23 Oct 2025
On the creation of narrow AI: hierarchy and nonlocality of neural network skills
Eric J. Michaud, Asher Parker-Sartori, Max Tegmark
21 May 2025
Robust Feature Learning for Multi-Index Models in High Dimensions
International Conference on Learning Representations (ICLR), 2024
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu
OOD · AAML · 21 Oct 2024
Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics
International Conference on Learning Representations (ICLR), 2024
Alireza Mousavi-Hosseini, Denny Wu, Murat A. Erdogdu
MLT · AI4CE · 14 Aug 2024