arXiv: 2105.10304
Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks
21 May 2021
Leo Schwinn, René Raab, A. Nguyen, Dario Zanca, Bjoern M. Eskofier
Tags: AAML
Papers citing "Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks" (6 papers shown):

1. Unveiling AI's Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors
   Shuangpeng Han, Mengmi Zhang
   03 Oct 2024

2. Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space
   Leo Schwinn, David Dobre, Sophie Xhonneux, Gauthier Gidel, Stephan Gunnemann
   Tags: AAML
   14 Feb 2024

3. Raising the Bar for Certified Adversarial Robustness with Diffusion Models
   Thomas Altstidl, David Dobre, Björn Eskofier, Gauthier Gidel, Leo Schwinn
   Tags: DiffM
   17 May 2023

4. Robust Smart Home Face Recognition under Starving Federated Data
   Jaechul Roh, Yajun Fang
   Tags: FedML, CVBM, AAML
   10 Nov 2022

5. Measure and Improve Robustness in NLP Models: A Survey
   Xuezhi Wang, Haohan Wang, Diyi Yang
   15 Dec 2021

6. RobustBench: a standardized adversarial robustness benchmark
   Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
   Tags: VLM
   19 Oct 2020