
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
arXiv:1802.00420

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 21 of 521 citing papers.

| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning | Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz | PICV, FedML | 37 | 15 May 2018 |
| Curriculum Adversarial Training | Qi-Zhi Cai, Min Du, Chang-rui Liu, D. Song | AAML | 159 | 13 May 2018 |
| ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector | Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau | ObjD | 424 | 16 Apr 2018 |
| Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations | Alex Lamb, Jonathan Binas, Anirudh Goyal, Dmitriy Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio | OOD | 43 | 07 Apr 2018 |
| On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples | Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu | AAML | 26 | 26 Mar 2018 |
| Adversarial Defense based on Structure-to-Signal Autoencoders | Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel | AAML | 31 | 21 Mar 2018 |
| Adversarial Logit Pairing | Harini Kannan, Alexey Kurakin, Ian Goodfellow | AAML | 624 | 16 Mar 2018 |
| Understanding and Enhancing the Transferability of Adversarial Examples | Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan | AAML, SILM | 96 | 27 Feb 2018 |
| Robust GANs against Dishonest Adversaries | Zhi Xu, Chengtao Li, Stefanie Jegelka | AAML | 3 | 27 Feb 2018 |
| Secure Detection of Image Manipulation by means of Random Feature Selection | Z. Chen, B. Tondi, Xiaolong Li, R. Ni, Yao-Min Zhao, Mauro Barni | AAML | 32 | 02 Feb 2018 |
| Audio Adversarial Examples: Targeted Attacks on Speech-to-Text | Nicholas Carlini, D. Wagner | AAML | 1,072 | 05 Jan 2018 |
| A General Framework for Adversarial Examples with Objectives | Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter | AAML, GAN | 191 | 31 Dec 2017 |
| The Robust Manifold Defense: Adversarial Training using Generative Models | A. Jalal, Andrew Ilyas, C. Daskalakis, A. Dimakis | AAML | 174 | 26 Dec 2017 |
| Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning | Battista Biggio, Fabio Roli | AAML | 1,388 | 08 Dec 2017 |
| Generative Adversarial Perturbations | Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie | AAML, GAN, WIGM | 350 | 06 Dec 2017 |
| Towards Robust Neural Networks via Random Self-ensemble | Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh | FedML, AAML | 418 | 02 Dec 2017 |
| Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training | Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, S. Jha | AAML | 21 | 21 Nov 2017 |
| Evaluating Robustness of Neural Networks with Mixed Integer Programming | Vincent Tjeng, Kai Y. Xiao, Russ Tedrake | AAML | 117 | 20 Nov 2017 |
| Adversarial Attacks Beyond the Image Space | Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille | AAML | 145 | 20 Nov 2017 |
| Adversarial Machine Learning at Scale | Alexey Kurakin, Ian Goodfellow, Samy Bengio | AAML | 3,109 | 04 Nov 2016 |
| Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 5,835 | 08 Jul 2016 |