Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
International Conference on Machine Learning (ICML), 2023
3 February 2023
Simone Bombari, Shayan Kiyani, Marco Mondelli

Papers citing "Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels"

10 papers shown.
A Law of Data Reconstruction for Random Features (and Beyond)
Leonardo Iurada, Simone Bombari, Tatiana Tommasi, Marco Mondelli
26 Sep 2025

Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization
Simone Bombari, Marco Mondelli
03 Feb 2025

Infinite Width Limits of Self Supervised Neural Networks
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Maximilian Fleissner, Gautham Govind Anil, Debarghya Ghoshdastidar
17 Nov 2024

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
International Conference on Machine Learning (ICML), 2024
Simone Bombari, Marco Mondelli
05 Feb 2024

No Free Prune: Information-Theoretic Barriers to Pruning at Initialization
Tanishq Kumar, Kevin Luo, Mark Sellke
02 Feb 2024

1-Lipschitz Neural Networks are more expressive with N-Activations
Bernd Prach, Christoph H. Lampert
10 Nov 2023

Upper and lower bounds for the Lipschitz constant of random neural networks
Paul Geuchen, Dominik Stöger, Felix Voigtlaender
02 Nov 2023

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
International Conference on Machine Learning (ICML), 2023
Behrad Moniri, Donghwan Lee, Hamed Hassani, Guang Cheng
11 Oct 2023

Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2023
Shaopeng Fu, Haiyan Zhao
09 Oct 2023

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features
International Conference on Machine Learning (ICML), 2023
Simone Bombari, Marco Mondelli
20 May 2023