Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,982 papers shown
Studying Various Activation Functions and Non-IID Data for Machine Learning Model Robustness
Long Dang
T. Hapuarachchi
Kaiqi Xiong
Jing Lin
OOD, AAML
152
0
0
03 Dec 2025
Dual Randomized Smoothing: Beyond Global Noise Variance
Chenhao Sun
Yuhao Mao
Martin Vechev
AAML
296
0
0
01 Dec 2025
Systems Security Foundations for Agentic Computing
Mihai Christodorescu
Earlence Fernandes
Ashish Hooda
S. Jha
Johann Rehberger
Khawaja Shams
62
0
0
01 Dec 2025
Breaking the Illusion: Consensus-Based Generative Mitigation of Adversarial Illusions in Multi-Modal Embeddings
Fatemeh Akbarian
Anahita Baninajjar
Yingyi Zhang
Ananth Balashankar
Amir Aminifar
AAML
194
0
0
26 Nov 2025
When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models
Hui Lu
Yi Yu
Yiming Yang
Chenyu Yi
Qixin Zhang
Bingquan Shen
Alex Chichung Kot
Xudong Jiang
AAML
471
0
0
26 Nov 2025
TopoReformer: Mitigating Adversarial Attacks Using Topological Purification in OCR Models
Bhagyesh Kumar
A S Aravinthakashan
Akshat Satyanarayan
Ishaan Gakhar
Ujjwal Verma
AAML
105
0
0
19 Nov 2025
DeepDefense: Layer-Wise Gradient-Feature Alignment for Building Robust Neural Networks
Ci Lin
T. Yeap
I. Kiringa
Biwei Zhang
AAML
113
0
0
13 Nov 2025
T-MLA: A Targeted Multiscale Log-Exponential Attack Framework for Neural Image Compression
Nikolay I. Kalmykov
Razan Dibo
Kaiyu Shen
Xu Zhonghan
Anh-Huy Phan
Yipeng Liu
Ivan Oseledets
AAML
113
0
0
02 Nov 2025
BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing
J. Kim
Yunhun Nam
Minseon Kim
Sangpil Kim
Jongheon Jeong
AAML, DiffM
215
0
0
31 Oct 2025
Agentic AI Security: Threats, Defenses, Evaluation, and Open Challenges
Shrestha Datta
Shahriar Kabir Nahin
Anshuman Chhabra
P. Mohapatra
LLMAG, LM&Ro
299
4
0
27 Oct 2025
Towards Strong Certified Defense with Universal Asymmetric Randomization
Hanbin Hong
Ashish Kundu
Ali Payani
Binghui Wang
Yuan Hong
AAML
157
0
0
22 Oct 2025
Black-Box Evasion Attacks on Data-Driven Open RAN Apps: Tailored Design and Experimental Evaluation
Pranshav Gajjar
Molham Khoja
Abiodun Ganiyu
Marc Juarez
Mahesh K. Marina
Andrew Lehane
Vijay K. Shah
133
0
0
20 Oct 2025
Bridging Symmetry and Robustness: On the Role of Equivariance in Enhancing Adversarial Robustness
Longwei Wang
Ifrat Ikhtear Uddin
KC Santosh
Chaowei Zhang
Xiao Qin
Yang Zhou
AAML
256
2
0
17 Oct 2025
Generalist++: A Meta-learning Framework for Mitigating Trade-off in Adversarial Training
Yisen Wang
Yichuan Mo
Hongjun Wang
Junyi Li
Zhouchen Lin
AAML
128
1
0
15 Oct 2025
High-Dimensional Learning Dynamics of Quantized Models with Straight-Through Estimator
Yuma Ichikawa
Shuhei Kashiwamura
Ayaka Sakata
MQ
215
2
0
12 Oct 2025
Tight Robustness Certificates and Wasserstein Distributional Attacks for Deep Neural Networks
Bach C. Le
Tung V. Dao
Binh T. Nguyen
Hong T.M. Chu
OOD
183
0
0
11 Oct 2025
A geometrical approach to solve the proximity of a point to an axisymmetric quadric in space
Bibekananda Patra
Aditya Mahesh Kolte
Sandipan Bandyopadhyay
119
11
0
10 Oct 2025
The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections
Milad Nasr
Nicholas Carlini
Chawin Sitawarin
Sander Schulhoff
Jamie Hayes
...
Ilia Shumailov
Abhradeep Thakurta
Kai Yuanqing Xiao
Seth Neel
F. Tramèr
AAML, ELM
179
14
0
10 Oct 2025
SVDefense: Effective Defense against Gradient Inversion Attacks via Singular Value Decomposition
Chenxiang Luo
David K.Y. Yau
Qun Song
AAML
173
0
0
01 Oct 2025
Reconcile Certified Robustness and Accuracy for DNN-based Smoothed Majority Vote Classifier
Gaojie Jin
Xinping Yi
Xiaowei Huang
AAML
137
1
0
30 Sep 2025
DRIFT: Divergent Response in Filtered Transformations for Robust Adversarial Defense
Amira Guesmi
Muhammad Shafique
AAML
114
0
0
29 Sep 2025
Merge Now, Regret Later: The Hidden Cost of Model Merging is Adversarial Transferability
Ankit Gangwal
Aaryan Ajay Sharma
AAML, MoMe
189
1
0
28 Sep 2025
Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
Nhan T. Luu
Luu Trung Duong
Pham Ngoc Nam
Truong Cong Thang
AAML
245
1
0
28 Sep 2025
Randomized Smoothing Meets Vision-Language Models
Emmanouil Seferis
Changshun Wu
Stefanos D. Kollias
Saddek Bensalem
Chih-Hong Cheng
AAML
123
0
0
19 Sep 2025
ParaAegis: Parallel Protection for Flexible Privacy-preserved Federated Learning
Zihou Wu
Yuecheng Li
Tianchi Liao
Jian Lou
Chuan Chen
FedML
98
0
0
17 Sep 2025
Towards Robust Defense against Customization via Protective Perturbation Resistant to Diffusion-based Purification
Wenkui Yang
Jie Cao
Junxian Duan
Ran He
DiffM, AAML, WIGM
287
0
0
17 Sep 2025
Who Taught the Lie? Responsibility Attribution for Poisoned Knowledge in Retrieval-Augmented Generation
Baolei Zhang
Haoran Xin
Yuxi Chen
Zhuqing Liu
Biao Yi
Tong Li
Lihai Nie
Zheli Liu
Minghong Fang
SILM
256
1
0
17 Sep 2025
CIARD: Cyclic Iterative Adversarial Robustness Distillation
Liming Lu
Shuchao Pang
Xu Zheng
Xiang Gu
Anan Du
Yunhuai Liu
Yongbin Zhou
AAML
179
0
0
16 Sep 2025
Evaluating the Impact of Adversarial Attacks on Traffic Sign Classification using the LISA Dataset
Nabeyou Tadessa
Balaji Iyangar
Mashrur Chowdhury
AAML
65
0
0
08 Sep 2025
Unifying Adversarial Perturbation for Graph Neural Networks
Jinluan Yang
Ruihao Zhang
Zhengyu Chen
Fei Wu
Kun Kuang
AAML
173
1
0
30 Aug 2025
Robustness Feature Adapter for Efficient Adversarial Training
Quanwei Wu
Jun Guo
Wei Wang
Yi Alice Wang
AAML
91
0
0
25 Aug 2025
An Investigation of Visual Foundation Models Robustness
Sandeep Gupta
Roberto Passerone
AAML
124
0
0
22 Aug 2025
On Evaluating the Adversarial Robustness of Foundation Models for Multimodal Entity Linking
Fang Wang
Yongjie Wang
Zonghao Yang
Minghao Hu
Xiaoying Bai
AAML
85
0
0
21 Aug 2025
DASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples
Abdullah Al Nomaan Nafi
Habibur Rahaman
Zafaryab Haider
Tanzim Mahfuz
Fnu Suya
Swarup Bhunia
Prabuddha Chakraborty
AAML
166
0
0
18 Aug 2025
MirGuard: Towards a Robust Provenance-based Intrusion Detection System Against Graph Manipulation Attacks
Anyuan Sang
Lu Zhou
Li Yang
Junbo Jia
Huipeng Yang
Pengbin Feng
Jianfeng Ma
AAML
138
0
0
14 Aug 2025
Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
Yuchu Jiang
Jian Zhao
Yuchen Yuan
Tianle Zhang
Yao Huang
...
Ya Zhang
Shuicheng Yan
Chi Zhang
Z. He
Xuelong Li
SILM
460
3
0
12 Aug 2025
Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification
Samuel Räber
Till Aczél
Andreas Plesner
Roger Wattenhofer
DiffM, AAML
256
0
0
07 Aug 2025
Failure Cases Are Better Learned But Boundary Says Sorry: Facilitating Smooth Perception Change for Accuracy-Robustness Trade-Off in Adversarial Training
Yanyun Wang
Li Liu
AAML
169
0
0
04 Aug 2025
Improving Adversarial Robustness Through Adaptive Learning-Driven Multi-Teacher Knowledge Distillation
Hayat Ullah
Syed Muhammad Talha Zaidi
Arslan Munir
AAML
216
0
0
28 Jul 2025
Reinforced Embodied Active Defense: Exploiting Adaptive Interaction for Robust Visual Perception in Adversarial 3D Environments
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025
Xiao Yang
Lingxuan Wu
L. Wang
Chengyang Ying
Hang Su
Jun Zhu
AAML
194
2
0
24 Jul 2025
ROBAD: Robust Adversary-aware Local-Global Attended Bad Actor Detection Sequential Model
Bing He
M. Ahamad
Srijan Kumar
AAML
107
0
0
20 Jul 2025
RL-Obfuscation: Can Language Models Learn to Evade Latent-Space Monitors?
Rohan Gupta
Erik Jenner
366
3
0
17 Jun 2025
Busting the Paper Ballot: Voting Meets Adversarial Machine Learning
Kaleel Mahmood
Caleb Manicke
Ethan Rathbun
Aayushi Verma
Sohaib Ahmad
Nicholas Stamatakis
L. Michel
Benjamin Fuller
AAML
226
0
0
17 Jun 2025
Position: Certified Robustness Does Not (Yet) Imply Model Security
Andrew C. Cullen
Paul Montague
S. Erfani
Benjamin I. P. Rubinstein
252
0
0
16 Jun 2025
Existence of Adversarial Examples for Random Convolutional Networks via Isoperimetric Inequalities on $\mathbb{so}(d)$
Annual Conference on Computational Learning Theory (COLT), 2025
Amit Daniely
158
0
0
14 Jun 2025
Attention-based Adversarial Robust Distillation in Radio Signal Classifications for Low-Power IoT Devices
IEEE Internet of Things Journal (IEEE IoT J.), 2023
Lu Zhang
S. Lambotharan
G. Zheng
G. Liao
Basil AsSadhan
Fabio Roli
AAML
177
15
0
13 Jun 2025
A Crack in the Bark: Leveraging Public Knowledge to Remove Tree-Ring Watermarks
Junhua Lin
Marc Juarez
289
1
0
12 Jun 2025
Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers
Lucas Gnecco-Heredia
Benjamin Négrevergne
Y. Chevaleyre
AAML
245
0
0
12 Jun 2025
SHIELD: Secure Hypernetworks for Incremental Expansion Learning Defense
Patryk Krukowski
Łukasz Gorczyca
Piotr Helm
Kamil Ksiazek
Przemysław Spurek
AAML, CLL
224
0
0
09 Jun 2025
PASS: Private Attributes Protection with Stochastic Data Substitution
Yizhuo Chen
Chun-Fu Chen
Hsiang Hsu
Shaohan Hu
Tarek Abdelzaher
311
0
0
08 Jun 2025