arXiv: 1810.12042
Logit Pairing Methods Can Fool Gradient-Based Attacks
29 October 2018
Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow
AAML
Papers citing "Logit Pairing Methods Can Fool Gradient-Based Attacks" (21 of 21 papers shown)
Standard-Deviation-Inspired Regularization for Improving Adversarial Robustness
Olukorede Fakorede, Modeste Atsague, Jin Tian · AAML · 37 / 0 / 0 · 31 Dec 2024

Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu, Yuyao Sun, Qiguang Miao, Rong Jin, Gang Hua · AAML · 38 / 6 / 0 · 28 May 2024

Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang · OOD, AAML · 79 / 46 / 0 · 20 Feb 2022

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif · AAML · 29 / 5 / 0 · 28 Dec 2021

Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann · AAML · 17 / 269 / 0 · 09 Nov 2021

Improving Robustness using Generated Data
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann · 30 / 293 / 0 · 18 Oct 2021

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks
Landan Seguin, A. Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee · OOD · 25 / 2 / 0 · 26 Aug 2021

Fixing Data Augmentation to Improve Adversarial Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann · AAML · 30 / 268 / 0 · 02 Mar 2021

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli · AAML · 17 / 323 / 0 · 07 Oct 2020

Label Smoothing and Adversarial Robustness
Chaohao Fu, Hongbin Chen, Na Ruan, Weijia Jia · AAML · 8 / 12 / 0 · 17 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 19 / 215 / 0 · 04 Sep 2020

Adversarial Training against Location-Optimized Adversarial Patches
Sukrut Rao, David Stutz, Bernt Schiele · AAML · 11 / 91 / 0 · 05 May 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein · OOD · 75 / 98 / 0 · 20 Mar 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter · 30 / 785 / 0 · 26 Feb 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter · AAML, OOD · 52 / 1,158 / 0 · 12 Jan 2020

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq · AAML · 26 / 68 / 0 · 06 Nov 2019

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein · AAML · 34 / 474 / 0 · 03 Jul 2019

Intriguing properties of adversarial training at scale
Cihang Xie, Alan Yuille · AAML · 8 / 68 / 0 · 10 Jun 2019

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein · 20 / 61 / 0 · 08 Jun 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein · AAML · 20 / 30 / 0 · 27 Mar 2019

Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika · NoLa · 17 / 717 / 0 · 28 Jan 2019