Synthesizing Robust Adversarial Examples
arXiv:1707.07397 · v3 (latest) · ArXiv (abs) · PDF · HTML

24 July 2017
Anish Athalye, Logan Engstrom, Ilya Sutskever, Kevin Kwok
AAML

Papers citing "Synthesizing Robust Adversarial Examples"

31 / 31 papers shown

Adversarial Confusion Attack: Disrupting Multimodal Large Language Models
Jakub Hościlowicz, Artur Janicki
AAML · 396 / 1 / 0 · 25 Nov 2025

Probably Approximately Global Robustness Certification
Peter Blohm, Patrick Indri, Thomas Gärtner, Sagar Malhotra
AAML · 164 / 0 / 0 · 09 Nov 2025

Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Haiyan Zhao
AAML · MIACV · SILM · 620 / 1 / 0 · 05 Aug 2024

PRIME: Protect Your Videos From Malicious Editing
Guanlin Li, Shuai Yang, Jie Zhang, Tianwei Zhang
173 / 4 / 0 · 02 Feb 2024

Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
András Horváth, C. M. Józsa
AAML · AI4CE · 229 / 6 / 0 · 05 Oct 2023

When Vision Fails: Text Attacks Against ViT and OCR
Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot
AAML · 226 / 5 / 0 · 12 Jun 2023

Raising the Cost of Malicious AI-Powered Image Editing
International Conference on Machine Learning (ICML), 2023
Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry
DiffM · 256 / 164 / 0 · 13 Feb 2023

Benchmarking Robustness to Adversarial Image Obfuscations
Neural Information Processing Systems (NeurIPS), 2023
Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, ..., Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal
AAML · 241 / 10 / 0 · 30 Jan 2023

Is Face Recognition Safe from Realizable Attacks?
Sanjay Saha, Terence Sim
CVBM · AAML · 124 / 3 / 0 · 15 Oct 2022

A Survey on Physical Adversarial Attack in Computer Vision
Donghua Wang, Wen Yao, Tingsong Jiang, Guijian Tang, Xiaoqian Chen
AAML · 484 / 49 / 0 · 28 Sep 2022

Do Perceptually Aligned Gradients Imply Adversarial Robustness?
International Conference on Machine Learning (ICML), 2022
Roy Ganz, Bahjat Kawar, Michael Elad
AAML · 310 / 16 / 0 · 22 Jul 2022

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
Conference on Computer and Communications Security (CCS), 2022
Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia
AAML · 281 / 226 / 0 · 11 Apr 2022

Detecting Audio Adversarial Examples with Logit Noising
N. Park, Sangwoo Ji, Jong Kim
AAML · 190 / 5 / 0 · 13 Dec 2021

DAFAR: Defending against Adversaries by Feedback-Autoencoder Reconstruction
Haowen Liu, Ping Yi, Hsiao-Ying Lin, Jie Shi, Weidong Qiu
AAML · 134 / 2 / 0 · 11 Mar 2021

WaveGuard: Understanding and Mitigating Audio Adversarial Examples
USENIX Security Symposium (USENIX Security), 2021
Shehzeen Samarah Hussain, Paarth Neekhara, Shlomo Dubnov, Julian McAuley, F. Koushanfar
AAML · 177 / 83 / 0 · 04 Mar 2021

Meta Adversarial Training against Universal Patches
J. H. Metzen, Nicole Finnie, Robin Hutmacher
OOD · AAML · 301 / 27 / 0 · 27 Jan 2021

Dynamic Adversarial Patch for Evading Object Detection Models
Shahar Hoory, T. Shapira, A. Shabtai, Yuval Elovici
AAML · 182 / 51 / 0 · 25 Oct 2020

An Epistemic Approach to the Formal Specification of Statistical Machine Learning
Journal of Software and Systems Modeling (SoSyM), 2020
Yusuke Kawamoto
CML · 190 / 5 / 0 · 27 Apr 2020

Robustness from Simple Classifiers
Sharon Qian, Dimitris Kalimeris, Gal Kaplun, Yaron Singer
AAML · 61 / 1 / 0 · 21 Feb 2020

Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Shehzeen Samarah Hussain, Paarth Neekhara, Malhar Jere, F. Koushanfar, Julian McAuley
AAML · 236 / 179 / 0 · 09 Feb 2020

Adversarial Attacks on GMM i-vector based Speaker Verification Systems
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019
Xu Li, Jinghua Zhong, Xixin Wu, Jianwei Yu, Xunying Liu, Helen Meng
AAML · 258 / 93 / 0 · 08 Nov 2019

Structure Matters: Towards Generating Transferable Adversarial Images
European Conference on Artificial Intelligence (ECAI), 2019
Dan Peng, Zizhan Zheng, Linhao Luo, Xiaofeng Zhang
AAML · 201 / 2 / 0 · 22 Oct 2019

Characterizing Attacks on Deep Reinforcement Learning
Adaptive Agents and Multi-Agent Systems (AAMAS), 2019
Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, ..., Jinfeng Yi, Zijiang Yang, Mingyan D. Liu, Yue Liu, Basel Alomair
AAML · 235 / 77 / 0 · 21 Jul 2019

LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
John Collomosse, Tu Bui, Hailin Jin
173 / 60 / 0 · 14 Apr 2019

Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
Yao Qin, Nicholas Carlini, Ian Goodfellow, G. Cottrell, Colin Raffel
AAML · 282 / 417 / 0 · 22 Mar 2019

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong, Frank R. Schmidt, J. Zico Kolter
AAML · 255 / 222 / 0 · 21 Feb 2019

Robustness Certificates Against Adversarial Examples for ReLU Networks
Sahil Singla, Soheil Feizi
AAML · 143 / 21 / 0 · 01 Feb 2019

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
T. Brunner, Frederik Diehl, Michael Truong-Le, Alois Knoll
MLAU · AAML · 186 / 125 / 0 · 24 Dec 2018

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li
AAML · 214 / 1 / 0 · 05 Dec 2018

Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects
Michael A. Alcorn, Melvin Johnson, Zhitao Gong, Chengfei Wang, Long Mai, Naveen Ari, Stella Laurenzo
406 / 316 / 0 · 28 Nov 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 2.6K / 3,386 / 0 · 01 Feb 2018