arXiv:1312.6199
Intriguing properties of neural networks
21 December 2013
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
Dumitru Erhan
Ian Goodfellow
Rob Fergus
AAML
Papers citing "Intriguing properties of neural networks"
Showing 50 of 189 citing papers
CTBENCH: A Library and Benchmark for Certified Training
Yuhao Mao
Stefan Balauca
Martin Vechev
OOD
62
5
0
07 Jun 2024
ZeroPur: Succinct Training-Free Adversarial Purification
Xiuli Bi
Zonglin Yang
Bo Liu
Xiaodong Cun
Chi-Man Pun
69
0
0
05 Jun 2024
CONFINE: Conformal Prediction for Interpretable Neural Networks
Linhui Huang
S. Lala
N. Jha
149
2
0
01 Jun 2024
Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models
Fujiao Ji
Kiho Lee
Hyungjoon Koo
Wenhao You
Euijin Choo
Hyoungshick Kim
Doowon Kim
AAML
72
2
0
30 May 2024
Probabilistic Verification of Neural Networks using Branch and Bound
David Boetius
Stefan Leue
Tobias Sutter
57
1
0
27 May 2024
Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models
Fengfan Zhou
Qianyu Zhou
Hefei Ling
Xuequan Lu
AAML
80
3
0
27 May 2024
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective
Nils Philipp Walter
Linara Adilova
Jilles Vreeken
Michael Kamp
AAML
59
2
0
27 May 2024
Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization
Jianting Yang
Srecko Ðurasinovic
Jean B. Lasserre
Victor Magron
Jun Zhao
AAML
68
1
0
27 May 2024
Towards Certification of Uncertainty Calibration under Adversarial Attacks
Cornelius Emde
Francesco Pinto
Thomas Lukasiewicz
Philip Torr
Adel Bibi
AAML
64
0
0
22 May 2024
On Robust Reinforcement Learning with Lipschitz-Bounded Policy Networks
Nicholas H. Barbara
Ruigang Wang
I. Manchester
61
4
0
19 May 2024
Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation
Jiangnan Li
Yingyuan Yang
Jinyuan Stella Sun
75
4
0
10 May 2024
AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples
Antonio Emanuele Cinà
Jérôme Rony
Maura Pintor
Christian Scano
Ambra Demontis
Battista Biggio
Ismail Ben Ayed
Fabio Roli
ELM
AAML
SILM
69
9
0
30 Apr 2024
Global Counterfactual Directions
Bartlomiej Sobieski
P. Biecek
DiffM
67
5
0
18 Apr 2024
Factorized Diffusion: Perceptual Illusions by Noise Decomposition
Daniel Geng
Inbum Park
Andrew Owens
DiffM
71
16
0
17 Apr 2024
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Maksym Andriushchenko
Francesco Croce
Nicolas Flammarion
AAML
113
186
0
02 Apr 2024
Bidirectional Consistency Models
Liangchen Li
Jiajun He
DiffM
76
12
0
26 Mar 2024
ADAPT to Robustify Prompt Tuning Vision Transformers
Masih Eskandar
Tooba Imtiaz
Zifeng Wang
Jennifer Dy
VPVLM
VLM
AAML
56
0
0
19 Mar 2024
Adversarial Example Soups: Improving Transferability and Stealthiness for Free
Bo Yang
Hengwei Zhang
Jin-dong Wang
Yulong Yang
Chenhao Lin
Chao Shen
Zhengyu Zhao
SILM
AAML
95
2
0
27 Feb 2024
Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off
Futa Waseda
Ching-Chun Chang
Isao Echizen
AAML
64
0
0
22 Feb 2024
Accelerated Smoothing: A Scalable Approach to Randomized Smoothing
Devansh Bhardwaj
Kshitiz Kaushik
Sarthak Gupta
AAML
48
0
0
12 Feb 2024
An Inexact Halpern Iteration with Application to Distributionally Robust Optimization
Ling Liang
Zusen Xu
Kim-Chuan Toh
Jia Jie Zhu
80
4
0
08 Feb 2024
Set-Based Training for Neural Network Verification
Lukas Koller
Tobias Ladner
Matthias Althoff
AAML
62
2
0
26 Jan 2024
Machine unlearning through fine-grained model parameters perturbation
Zhiwei Zuo
Zhuo Tang
KenLi Li
Anwitaman Datta
AAML
MU
41
0
0
09 Jan 2024
MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness
Xiaoyun Xu
Shujian Yu
Jingzheng Wu
S. Picek
AAML
54
0
0
08 Dec 2023
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang
Ce Zhou
Yuanda Wang
Bocheng Chen
Hanqing Guo
Qiben Yan
AAML
SILM
88
3
0
20 Nov 2023
Generating Less Certain Adversarial Examples Improves Robust Generalization
Minxing Zhang
Michael Backes
Xiao Zhang
AAML
71
1
0
06 Oct 2023
On Continuity of Robust and Accurate Classifiers
Ramin Barati
Reza Safabakhsh
Mohammad Rahmati
AAML
34
1
0
29 Sep 2023
Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization
Mahyar Fazlyab
Taha Entesari
Aniket Roy
Ramalingam Chellappa
AAML
31
11
0
29 Sep 2023
Certifying LLM Safety against Adversarial Prompting
Aounon Kumar
Chirag Agarwal
Suraj Srinivas
Aaron Jiaxun Li
Soheil Feizi
Himabindu Lakkaraju
AAML
45
182
0
06 Sep 2023
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
Hejia Geng
Peng Li
AAML
62
3
0
20 Aug 2023
Variational Positive-incentive Noise: How Noise Benefits Models
Hongyuan Zhang
Si-Ying Huang
Yubin Guo
Xuelong Li
38
9
0
13 Jun 2023
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning
Mohamed el Shehaby
Ashraf Matrawy
AAML
53
7
0
08 Jun 2023
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
Xiao-Li Li
Hang Chen
Xiaolin Hu
AAML
61
4
0
27 May 2023
When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette
Gonzalo Muñoz
Thiago Serra
Calvin Tsay
AI4CE
117
36
0
29 Apr 2023
Interpretability is a Kind of Safety: An Interpreter-based Ensemble for Adversary Defense
Jingyuan Wang
Yufan Wu
Mingxuan Li
Xin Lin
Junjie Wu
Chao Li
AAML
31
13
0
14 Apr 2023
Decentralized Adversarial Training over Graphs
Ying Cao
Elsa Rizk
Stefan Vlaski
Ali H. Sayed
AAML
77
1
0
23 Mar 2023
The Representational Status of Deep Learning Models
Eamon Duede
44
0
0
21 Mar 2023
Nash Equilibria, Regularization and Computation in Optimal Transport-Based Distributionally Robust Optimization
Soroosh Shafieezadeh-Abadeh
Liviu Aolaritei
Florian Dorfler
Daniel Kuhn
86
21
0
07 Mar 2023
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid
Farhin Farhad Riya
Shahinul Hoque
Jinyuan Stella Sun
Jiangnan Li
Hairong Qi
AAML
AI4CE
54
0
0
29 Jan 2023
SAIF: Sparse Adversarial and Imperceptible Attack Framework
Tooba Imtiaz
Morgan Kohler
Jared Miller
Zifeng Wang
Octavia Camps
Mario Sznaier
Jennifer Dy
AAML
48
0
0
14 Dec 2022
Adversarial Detection by Approximation of Ensemble Boundary
T. Windeatt
AAML
79
0
0
18 Nov 2022
On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks
Hubert Leterme
K. Polisano
V. Perrier
Alahari Karteek
FAtt
73
2
0
19 Sep 2022
DTA: Physical Camouflage Attacks using Differentiable Transformation Network
Naufal Suryanto
Yongsu Kim
Hyoeun Kang
Harashta Tatimma Larasati
Youngyeo Yun
Thi-Thu-Huong Le
Hunmin Yang
Se-Yoon Oh
Howon Kim
AAML
49
58
0
18 Mar 2022
Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
SILM
84
29
0
14 Mar 2022
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Qilong Zhang
Chaoning Zhang
Chaoqun Li
Xuanhan Wang
Jingkuan Song
Lianli Gao
AAML
61
21
0
09 Mar 2022
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Maura Pintor
Daniele Angioni
Angelo Sotgiu
Christian Scano
Ambra Demontis
Battista Biggio
Fabio Roli
AAML
49
51
0
07 Mar 2022
The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks
Alexander Bastounis
A. Hansen
Verner Vlacic
AAML
OOD
51
28
0
13 Sep 2021
Benchmarking the Robustness of Instance Segmentation Models
Said Fahri Altindis
Yusuf Dalva
Hamza Pehlivan
Aysegül Dündar
VLM
OOD
84
12
0
02 Sep 2021
Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks
Daniel Schwalbe-Koda
Aik Rui Tan
Rafael Gómez-Bombarelli
AAML
44
60
0
27 Jan 2021
Towards Optimal Branching of Linear and Semidefinite Relaxations for Neural Network Robustness Certification
Brendon G. Anderson
Ziye Ma
Jingqi Li
Somayeh Sojoudi
73
1
0
22 Jan 2021