The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Yifei Min, Lin Chen, Amin Karbasi
arXiv:2002.11080 · AAML · 25 February 2020
Papers citing "The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization"

13 citing papers shown.

  • Efficient Optimization Algorithms for Linear Adversarial Training
    Antônio H. Ribeiro, Thomas B. Schön, Dave Zachariah, Francis Bach · AAML · 16 Oct 2024
  • Investigating the Impact of Model Complexity in Large Language Models
    Jing Luo, Huiyuan Wang, Weiran Huang · 01 Oct 2024
  • Recent Advances in Attack and Defense Approaches of Large Language Models
    Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang · PILM, AAML · 05 Sep 2024
  • CUDA: Convolution-based Unlearnable Datasets
    Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi · MU · 07 Mar 2023
  • Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
    Simone Bombari, Shayan Kiyani, Marco Mondelli · AAML · 03 Feb 2023
  • Data Augmentation Alone Can Improve Adversarial Training
    Lin Li, Michael W. Spratling · 24 Jan 2023
  • The Effects of Regularization and Data Augmentation are Class Dependent
    Randall Balestriero, Léon Bottou, Yann LeCun · 07 Apr 2022
  • On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes
    Elvis Dohmatob, A. Bietti · AAML · 22 Mar 2022
  • Adversarial robustness of sparse local Lipschitz predictors
    Ramchandran Muthukumar, Jeremias Sulam · AAML · 26 Feb 2022
  • Precise Statistical Analysis of Classification Accuracies for Adversarial Training
    Adel Javanmard, Mahdi Soltanolkotabi · AAML · 21 Oct 2020
  • Multiple Descent: Design Your Own Generalization Curve
    Lin Chen, Yifei Min, M. Belkin, Amin Karbasi · DRL · 03 Aug 2020
  • Disentangling Adversarial Robustness and Generalization
    David Stutz, Matthias Hein, Bernt Schiele · AAML, OOD · 03 Dec 2018
  • Adversarial examples from computational constraints
    Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn · AAML · 25 May 2018