ResearchTrend.AI
Interpolation can hurt robust generalization even when there is no noise
arXiv:2108.02883 · 5 August 2021
Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang

Papers citing "Interpolation can hurt robust generalization even when there is no noise"

12 / 12 papers shown
  • Towards unlocking the mystery of adversarial fragility of neural networks
    Jingchao Gao, Raghu Mudumbai, Xiaodong Wu, Jirong Yi, Catherine Xu, Hui Xie, Weiyu Xu (23 Jun 2024)
  • The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness
    Yifan Hao, Tong Zhang (19 Jan 2024) [AAML]
  • Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
    Shaopeng Fu, Di Wang (09 Oct 2023) [AAML]
  • On Achieving Optimal Adversarial Test Error
    Justin D. Li, Matus Telgarsky (13 Jun 2023) [AAML]
  • Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
    Simone Bombari, Shayan Kiyani, Marco Mondelli (03 Feb 2023) [AAML]
  • Strong inductive biases provably prevent harmless interpolation
    Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang (18 Jan 2023)
  • Margin-based sampling in high dimensions: When being active is less efficient than staying passive
    A. Tifrea, Jacob Clarysse, Fanny Yang (01 Dec 2022)
  • Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation
    Qiyuan Chen, Raed Al Kontar, Maher Nouiehed, Xi Yang, Corey A. Lester (24 Aug 2022) [AAML]
  • Why adversarial training can hurt robust accuracy
    Jacob Clarysse, Julia Hörrmann, Fanny Yang (03 Mar 2022) [AAML]
  • Interpolation and Regularization for Causal Learning
    L. C. Vankadara, Luca Rendsburg, U. V. Luxburg, D. Ghoshdastidar (18 Feb 2022) [CML]
  • Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods
    Abhineet Agarwal, Yan Shuo Tan, Omer Ronen, Chandan Singh, Bin Yu (02 Feb 2022)
  • NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels
    Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Li-zhen Cui, Masashi Sugiyama (31 May 2021) [NoLa, AAML]