Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels

3 February 2023
Simone Bombari, Shayan Kiyani, Marco Mondelli
AAML
arXiv:2302.01629

Papers citing "Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels"

12 / 12 papers shown
 1. Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization
    Simone Bombari, Marco Mondelli · 03 Feb 2025

 2. Infinite Width Limits of Self Supervised Neural Networks
    Maximilian Fleissner, Gautham Govind Anil, D. Ghoshdastidar · SSL · 17 Nov 2024

 3. Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
    Simone Bombari, Marco Mondelli · 05 Feb 2024

 4. No Free Prune: Information-Theoretic Barriers to Pruning at Initialization
    Tanishq Kumar, Kevin Luo, Mark Sellke · 02 Feb 2024

 5. 1-Lipschitz Neural Networks are more expressive with N-Activations
    Bernd Prach, Christoph H. Lampert · AAML, FAtt · 10 Nov 2023

 6. Upper and lower bounds for the Lipschitz constant of random neural networks
    Paul Geuchen, Thomas Heindl, Dominik Stöger, Felix Voigtlaender · AAML · 02 Nov 2023

 7. A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
    Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban · MLT · 11 Oct 2023

 8. Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
    Shaopeng Fu, Di Wang · AAML · 09 Oct 2023

 9. How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features
    Simone Bombari, Marco Mondelli · AAML · 20 May 2023

10. Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
    Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher · 15 Sep 2022

11. Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
    Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma · AAML, TPM · 07 Oct 2021

12. Adversarial Machine Learning at Scale
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 04 Nov 2016