Randomization matters. How to defend against strong adversarial attacks

26 February 2020
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif
AAML
arXiv:2002.11565 (abs · PDF · HTML)

Papers citing "Randomization matters. How to defend against strong adversarial attacks"

16 / 16 papers shown

Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers
Lucas Gnecco-Heredia, Benjamin Négrevergne, Y. Chevaleyre
AAML · 12 Jun 2025

Towards provable probabilistic safety for scalable embodied AI systems
Linxuan He, Qing-Shan Jia, Ang Li, Hongyan Sang, Ling Wang, ..., Yisen Wang, Peng Wei, Zhongyuan Wang, Henry X. Liu, Shuo Feng
05 Jun 2025

Adversarial attacks for mixtures of classifiers
Lucas Gnecco-Heredia, Benjamin Négrevergne, Y. Chevaleyre
AAML · 20 Jul 2023

The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
I. Frosio, Jan Kautz
AAML · 23 May 2023

On the Robustness of Randomized Ensembles to Adversarial Perturbations
Hassan Dbouk, Naresh R Shanbhag
AAML · 02 Feb 2023

Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game
Xiao-Shan Gao, Shuang Liu, Lijia Yu
AAML · 17 Jul 2022

Metric-Fair Classifier Derandomization
Jimmy Wu, Yatong Chen, Yang Liu
FaML · 15 Jun 2022

Towards Consistency in Adversarial Classification
Laurent Meunier, Raphael Ettedgui, Rafael Pinot, Y. Chevaleyre, Jamal Atif
AAML · 20 May 2022

Diffusion Models for Adversarial Purification
Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar
WIGM · 16 May 2022

The Many Faces of Adversarial Risk
Muni Sreenivas Pydi, Varun Jog
AAML · 22 Jan 2022

A Dynamical System Perspective for Lipschitz Neural Networks
Laurent Meunier, Blaise Delattre, Alexandre Araujo, A. Allauzen
25 Oct 2021

Adversarial purification with Score-based generative models
Jongmin Yoon, Sung Ju Hwang, Juho Lee
DiffM · 11 Jun 2021

Attacking Adversarial Attacks as A Defense
Boxi Wu, Heng Pan, Li Shen, Jindong Gu, Shuai Zhao, Zhifeng Li, Deng Cai, Xiaofei He, Wei Liu
AAML · 09 Jun 2021

Mixed Nash Equilibria in the Adversarial Examples Game
Laurent Meunier, M. Scetbon, Rafael Pinot, Jamal Atif, Y. Chevaleyre
AAML · 13 Feb 2021

A survey on practical adversarial examples for malware classifiers
Daniel Park, B. Yener
AAML · 06 Nov 2020

Adversarial Risk via Optimal Transport and Optimal Couplings
Muni Sreenivas Pydi, Varun Jog
05 Dec 2019