Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency

arXiv:2405.16262 · 25 May 2024
Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu
AAML

Papers citing "Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency"

7 of 7 citing papers shown.
Fast Adversarial Training against Sparse Attacks Requires Loss Smoothing
Xuyang Zhong, Yixiao Huang, Chen Liu
AAML · 28 Feb 2025

Adversarial Training: A Survey
Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang
AAML · 19 Oct 2024

On Using Certified Training towards Empirical Robustness
Alessandro De Palma, Serge Durand, Zakaria Chihani, François Terrier, Caterina Urban
OOD, AAML · 02 Oct 2024

Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
Francesco Croce, Sven Gowal, T. Brunner, Evan Shelhamer, Matthias Hein, A. Cemgil
TTA, AAML · 28 Feb 2022

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, P. Dokania
AAML · 02 Feb 2022

Modeling Adversarial Noise for Adversarial Training
Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
AAML · 21 Sep 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL · 15 Sep 2016