Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators
27 February 2023
Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini
arXiv: 2302.13464 (AAML)
Papers citing "Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators" (9 papers)
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong
OOD, AAML
09 May 2024
Diffusion Denoising as a Certified Defense against Clean-label Poisoning
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
DiffM
18 Mar 2024
Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models
Weikang Yu, Yonghao Xu, Pedram Ghamisi
31 Jul 2023
Are aligned neural networks adversarially aligned?
Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, ..., Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt
AAML
26 Jun 2023
On the Role of Randomization in Adversarially Robust Classification
Lucas Gnecco-Heredia, Y. Chevaleyre, Benjamin Négrevergne, Laurent Meunier, Muni Sreenivas Pydi
AAML
14 Feb 2023
Diffusion Models for Adversarial Purification
Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar
WIGM
16 May 2022
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM
19 Oct 2020
Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
AAML
26 Dec 2019
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
Xiaojun Jia, Xingxing Wei, Xiaochun Cao, H. Foroosh
AAML
30 Nov 2018