arXiv:2012.03765
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
7 December 2020
Jinyuan Jia
Yupei Liu
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
Papers citing "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks" (50 of 55 papers shown)
On Robustness of Linear Classifiers to Targeted Data Poisoning
Nakshatra Gupta
Sumanth Prabhu
Supratik Chakraborty
R Venkatesh
OOD
AAML
204
0
0
16 Nov 2025
Provably Robust Adaptation for Language-Empowered Foundation Models
Y. Lai
Xiaoyu Xue
Linghui Shen
Yulun Wu
Gaolei Li
Song Guo
Kai Zhou
Bin Xiao
AAML
198
1
0
09 Oct 2025
Defending Against Beta Poisoning Attacks in Machine Learning Models
Computer Science Symposium in Russia (CSR), 2025
Nilufer Gulciftci
M. Emre Gursoy
AAML
166
0
0
02 Aug 2025
Evading Data Provenance in Deep Neural Networks
Hongyu Zhu
Sichu Liang
Wenwen Wang
Zhuomeng Zhang
Fangqi Li
Shi-Lin Wang
AAML
306
2
0
01 Aug 2025
Cert-SSBD: Certified Backdoor Defense with Sample-Specific Smoothing Noises
Ting Qiao
Longji Xu
Xing Liu
Sixing Wu
Jianbing Li
Yiming Li
AAML
SILM
553
0
0
30 Apr 2025
AGNNCert: Defending Graph Neural Networks against Arbitrary Perturbations with Deterministic Certification
Jiate Li
Binghui Wang
AAML
389
3
0
02 Feb 2025
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
Ching-Chun Chang
Kai Gao
Shuying Xu
Anastasia Kordoni
Christopher Leckie
Isao Echizen
192
0
0
29 Sep 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch
Mahalakshmi Sabanayagam
Debarghya Ghoshdastidar
Stephan Günnemann
AAML
589
6
0
15 Jul 2024
Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
Yuxin Yang
Qiang Li
Jinyuan Jia
Yuan Hong
Binghui Wang
AAML
FedML
257
22
0
12 Jul 2024
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks
Tobias Lorenz
Marta Kwiatkowska
Mario Fritz
AAML
191
3
0
17 Jun 2024
NBA: Defensive Distillation for Backdoor Removal via Neural Behavior Alignment
Zonghao Ying
Bin Wu
AAML
204
14
0
16 Jun 2024
Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders
Tingxu Han
Weisong Sun
Ziqi Ding
Chunrong Fang
Hanwei Qian
Jiaxun Li
Zhenyu Chen
Xiangyu Zhang
AAML
517
15
0
05 Jun 2024
Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng
Ruoxi Sun
Minhui Xue
Sheng Wen
S. Çamtepe
Surya Nepal
Yang Xiang
270
12
0
24 May 2024
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Yanting Wang
Wei Zou
Jinyuan Jia
277
4
0
12 Apr 2024
Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs
Shu Yang
Jiayuan Su
Han Jiang
Mengdi Li
Keyuan Cheng
Muhammad Asif Ali
Lijie Hu
Haiyan Zhao
337
10
0
30 Mar 2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Guangyu Shen
Shengwei An
Shiwei Feng
Xiangzhe Xu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
AAML
261
16
0
25 Mar 2024
A general approach to enhance the survivability of backdoor attacks by decision path coupling
Yufei Zhao
Dingji Wang
Bihuan Chen
Ziqian Chen
Xin Peng
AAML
250
0
0
05 Mar 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu
Michael K. Reiter
Neil Zhenqiang Gong
AAML
360
3
0
22 Feb 2024
Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary
Aashish Kolluri
Prateek Saxena
AAML
249
4
0
22 Dec 2023
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning
Jing Cui
Yufei Han
Yuzhe Ma
Jianbin Jiao
Junge Zhang
AAML
271
28
0
19 Dec 2023
Defenses in Adversarial Machine Learning: A Survey
Baoyuan Wu
Shaokui Wei
Mingli Zhu
Meixi Zheng
Zihao Zhu
Ruotong Wang
Hongrui Chen
Danni Yuan
Li Liu
Qingshan Liu
AAML
348
28
0
13 Dec 2023
Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang
Neil Zhenqiang Gong
Michael K. Reiter
293
0
0
03 Dec 2023
Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift
AAAI Conference on Artificial Intelligence (AAAI), 2023
Shengwei An
Sheng-Yen Chou
Kaiyuan Zhang
Qiuling Xu
Guanhong Tao
...
Shuyang Cheng
Shiqing Ma
Pin-Yu Chen
Tsung-Yi Ho
Xiangyu Zhang
DiffM
AAML
464
47
0
27 Nov 2023
TextGuard: Provable Defense against Backdoor Attacks on Text Classification
Hengzhi Pei
Jinyuan Jia
Wenbo Guo
Yue Liu
Dawn Song
SILM
319
23
0
19 Nov 2023
CBD: A Certified Backdoor Detector Based on Local Dominant Probability
Neural Information Processing Systems (NeurIPS), 2023
Zhen Xiang
Zidi Xiong
Bo Li
AAML
387
25
0
26 Oct 2023
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
Neural Information Processing Systems (NeurIPS), 2023
Shaokui Wei
Ruotong Wang
H. Zha
Baoyuan Wu
TPM
241
55
0
20 Jul 2023
Systematic Testing of the Data-Poisoning Robustness of KNN
International Symposium on Software Testing and Analysis (ISSTA), 2023
Yannan Li
Jingbo Wang
Chao Wang
AAML
OOD
170
7
0
17 Jul 2023
Certifying the Fairness of KNN in the Presence of Dataset Bias
International Conference on Computer Aided Verification (CAV), 2023
Yannan Li
Jingbo Wang
Chao Wang
FaML
239
7
0
17 Jul 2023
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang
Soheil Feizi
AAML
240
1
0
28 Jun 2023
Geometric Algorithms for k-NN Poisoning
Canadian Conference on Computational Geometry (CCCG), 2023
Diego Ihara Centurion
Karine Chubarian
Bohan Fan
Francesco Sgherzi
Thiruvenkadam S Radhakrishnan
Anastasios Sidiropoulos
Angelo Straight
FedML
187
1
0
21 Jun 2023
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas
Florian Kerschbaum
326
1
0
07 May 2023
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
USENIX Security Symposium (USENIX Security), 2023
Jinyuan Jia
Yupei Liu
Yuepeng Hu
Neil Zhenqiang Gong
156
18
0
26 Mar 2023
Detecting Backdoors in Pre-trained Encoders
Computer Vision and Pattern Recognition (CVPR), 2023
Shiwei Feng
Guanhong Tao
Shuyang Cheng
Guangyu Shen
Xiangzhe Xu
Yingqi Liu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
380
85
0
23 Mar 2023
Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang
Soheil Feizi
AAML
OOD
441
16
0
07 Feb 2023
BackdoorBox: A Python Toolbox for Backdoor Learning
Yiming Li
Mengxi Ya
Yang Bai
Yong Jiang
Shutao Xia
AAML
311
52
0
01 Feb 2023
PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
368
4
0
27 Jan 2023
Training Data Influence Analysis and Estimation: A Survey
Machine Learning (ML), 2022
Zayd Hammoudeh
Daniel Lowd
TDI
590
162
0
09 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
203
6
0
06 Dec 2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models
Guanhong Tao
Zhenting Wang
Shuyang Cheng
Shiqing Ma
Shengwei An
Yingqi Liu
Guangyu Shen
Zhuo Zhang
Yunshu Mao
Xiangyu Zhang
SILM
266
18
0
29 Nov 2022
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Computer Vision and Pattern Recognition (CVPR), 2022
Jinghuai Zhang
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
AAML
468
32
0
15 Nov 2022
FLCert: Provably Secure Federated Learning against Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2022
Xiaoyu Cao
Zaixi Zhang
Jinyuan Jia
Neil Zhenqiang Gong
FedML
OOD
360
86
0
02 Oct 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Conference on Computer and Communications Security (CCS), 2022
Chulin Xie
Yunhui Long
Pin-Yu Chen
Qinbin Li
Arash Nourian
Sanmi Koyejo
Bo Li
FedML
443
23
0
08 Sep 2022
Reducing Certified Regression to Certified Classification for General Poisoning Attacks
Zayd Hammoudeh
Daniel Lowd
AAML
302
12
0
29 Aug 2022
Lethal Dose Conjecture on Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
FedML
216
17
0
05 Aug 2022
DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao
Yingqi Liu
Shuyang Cheng
Shengwei An
Zhuo Zhang
Qiuling Xu
Guangyu Shen
Xiangyu Zhang
AAML
352
7
0
18 Jun 2022
BagFlip: A Certified Defense against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
273
27
0
26 May 2022
On Collective Robustness of Bagging Against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Ruoxin Chen
Zenan Li
Jie Li
Chentao Wu
Junchi Yan
237
25
0
26 May 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
USENIX Security Symposium (USENIX Security), 2022
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
328
45
0
13 May 2022
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
ACM Computing Surveys (ACM CSUR), 2022
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Sebastiano Vascon
Werner Zellinger
Bernhard A. Moser
Alina Oprea
Battista Biggio
Marcello Pelillo
Fabio Roli
AAML
462
184
0
04 May 2022
Certifying Robustness to Programmable Data Bias in Decision Trees
Neural Information Processing Systems (NeurIPS), 2021
Anna P. Meyer
Aws Albarghouthi
Loris D'Antoni
175
28
0
08 Oct 2021
Page 1 of 2