Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
arXiv: 2003.01690 · AAML · 3 March 2020
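
The paper above introduced the AutoAttack ensemble (APGD with cross-entropy and DLR losses, FAB, and Square Attack), which many of the citing papers below use as a standard robustness check. As a minimal sketch of how such an evaluation is typically run, assuming PyTorch and the authors' publicly released auto-attack package (the install path, class, and argument names reflect that public repository as commonly documented, not anything stated on this page):

    # Minimal sketch: robustness evaluation with AutoAttack.
    # Assumes `pip install git+https://github.com/fra31/auto-attack` or equivalent.
    import torch
    import torch.nn as nn
    from autoattack import AutoAttack  # assumed import path from the public repo

    # Toy stand-in classifier; in practice this is the defended model under test.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

    # Toy CIFAR-10-sized batch in [0, 1]; replace with the real test set.
    x_test = torch.rand(64, 3, 32, 32)
    y_test = torch.randint(0, 10, (64,))

    # 'standard' runs the full parameter-free ensemble with its fixed budget.
    adversary = AutoAttack(model, norm='Linf', eps=8 / 255,
                           version='standard', device='cpu')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)

    # Robust accuracy: fraction of points still classified correctly at x_adv.
    with torch.no_grad():
        robust_acc = (model(x_adv).argmax(1) == y_test).float().mean().item()
    print(f"robust accuracy: {robust_acc:.3f}")

Reported robust accuracy is then the fraction of test points on which the model's prediction survives all attacks in the ensemble.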

Papers citing "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"

Showing 50 of 353 citing papers.

White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense
Shahrukh Khan, Mahnoor Shahid, Navdeeppal Singh · AAML · 11 Feb 2022

D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles
Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, S. Jha, Atul Prakash · 11 Feb 2022

Deadwooding: Robust Global Pruning for Deep Neural Networks
Sawinder Kaur, Ferdinando Fioretto, Asif Salekin · 10 Feb 2022

Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework
Mohammad Khalooei, M. Homayounpour, M. Amirmazlaghani · AAML · 05 Feb 2022

Robust Binary Models by Pruning Randomly-initialized Networks
Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann · TPM, AAML, MQ · 03 Feb 2022

Few-Shot Backdoor Attacks on Visual Object Tracking
Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shutao Xia · AAML · 31 Jan 2022

MEGA: Model Stealing via Collaborative Generator-Substitute Networks
Chi Hong, Jiyue Huang, L. Chen · 31 Jan 2022

Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Mengting Xu, Tao Zhang, Zhongnian Li, Daoqiang Zhang · AAML · 29 Jan 2022

On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley Mao · AAML · 13 Jan 2022

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif · AAML · 28 Dec 2021

Learning Robust and Lightweight Model through Separable Structured Transformations
Xian Wei, Yanhui Huang, Yang Xu, Mingsong Chen, Hai Lan, Yuanxiang Li, Zhongfeng Wang, Xuan Tang · OOD · 27 Dec 2021

Understanding and Measuring Robustness of Multimodal Learning
Nishant Vishwamitra, Hongxin Hu, Ziming Zhao, Long Cheng, Feng Luo · AAML · 22 Dec 2021

A Theoretical View of Linear Backpropagation and Its Convergence
Ziang Li, Yiwen Guo, Haodi Liu, Changshui Zhang · AAML · 21 Dec 2021

On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
Chen Liu, Zhichao Huang, Mathieu Salzmann, Tong Zhang, Sabine Süsstrunk · AAML · 14 Dec 2021

Triangle Attack: A Query-efficient Decision-based Adversarial Attack
Xiaosen Wang, Zeliang Zhang, Kangheng Tong, Dihong Gong, Kun He, Zhifeng Li, Wei Liu · AAML · 13 Dec 2021

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Chun Pong Lau, Jiang-Long Liu, Hossein Souri, Wei-An Lin, S. Feizi, Ramalingam Chellappa · AAML · 12 Dec 2021

The Fundamental Limits of Interval Arithmetic for Neural Networks
M. Mirman, Maximilian Baader, Martin Vechev · 09 Dec 2021

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li · UQCV, AAML · 07 Dec 2021

Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness
Konstantinos P. Panousis, S. Chatzis, Sergios Theodoridis · BDL, AAML · 05 Dec 2021

Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
P. Lorenz, Dominik Strassel, M. Keuper, J. Keuper · AAML · 02 Dec 2021

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness
Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen · TTA, AAML · 01 Dec 2021

FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection
Nikolaos Dionelis, Mehrdad Yaghoobi, Sotirios A. Tsaftaris · OODD · 30 Nov 2021

Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang · AAML, OOD · 24 Nov 2021

Medical Aegis: Robust adversarial protectors for medical images
Qingsong Yao, Zecheng He, S. Kevin Zhou · AAML, MedIm · 22 Nov 2021

Detecting AutoAttack Perturbations in the Frequency Domain
P. Lorenz, P. Harder, Dominik Strassel, M. Keuper, J. Keuper · AAML · 16 Nov 2021

Robust and Accurate Object Detection via Self-Knowledge Distillation
Weipeng Xu, Pengzhi Chu, Renhao Xie, Xiongziyan Xiao, Hongcheng Huang · AAML, ObjD · 14 Nov 2021

Are Transformers More Robust Than CNNs?
Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie · ViT, AAML · 10 Nov 2021

Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann · AAML · 09 Nov 2021

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li · AAML · 09 Nov 2021

LTD: Low Temperature Distillation for Robust Adversarial Training
Erh-Chung Chen, Che-Rung Lee · AAML · 03 Nov 2021

Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
Maksym Yatsura, J. H. Metzen, Matthias Hein · OOD · 02 Nov 2021

Adversarial Robustness in Multi-Task Learning: Promises and Illusions
Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon · OOD, AAML · 26 Oct 2021

Improving Robustness using Generated Data
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann · 18 Oct 2021

Parameterizing Activation Functions for Adversarial Robustness
Sihui Dai, Saeed Mahloujifar, Prateek Mittal · AAML · 11 Oct 2021

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma · AAML, TPM · 07 Oct 2021

Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
Chengyu Dong, Liyuan Liu, Jingbo Shang · NoLa, AAML · 07 Oct 2021

Introducing the DOME Activation Functions
Mohamed E. Hussein, Wael AbdAlmageed · 30 Sep 2021

Modeling Adversarial Noise for Adversarial Training
Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu · AAML · 21 Sep 2021

Simple Post-Training Robustness Using Test Time Augmentations and Random Forest
Gilad Cohen, Raja Giryes · AAML · 16 Sep 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
Yonggan Fu, Yang Katie Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin · AAML · 11 Sep 2021

Training Meta-Surrogate Model for Transferable Adversarial Attack
Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh · AAML · 05 Sep 2021

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks
Landan Seguin, A. Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee · OOD · 26 Aug 2021

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang, Saeed Mahloujifar, Prateek Mittal · VLM, AAML · 20 Aug 2021

Towards Understanding the Generative Capability of Adversarially Robust Classifiers
Yao Zhu, Jiacheng Ma, Jiacheng Sun, Zewei Chen, Rongxin Jiang, Zhenguo Li · AAML · 20 Aug 2021

Neural Architecture Dilation for Adversarial Robustness
Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu · AAML · 16 Aug 2021

AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning
Hong Wang, Yuefan Deng, Shinjae Yoo, Haibin Ling, Yuewei Lin · AAML · 13 Aug 2021

Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
Florian Tramèr · AAML · 24 Jul 2021

AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense
Duhun Hwang, Eunjung Lee, Wonjong Rhee · AAML · 14 Jul 2021

Towards Robust General Medical Image Segmentation
Laura Alexandra Daza, Juan C. Pérez, Pablo Arbelaez · OOD · 09 Jul 2021

ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients
Alessandro Cappelli, Julien Launay, Laurent Meunier, Ruben Ohana, Iacopo Poli · AAML · 06 Jul 2021