arXiv: 2008.04495
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2021
11 August 2020
Jinyuan Jia
Xiaoyu Cao
Neil Zhenqiang Gong
SILM
Papers citing "Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks" (50 of 68 papers shown)
EmoRAG: Evaluating RAG Robustness to Symbolic Perturbations
Xinyun Zhou
Xinfeng Li
Yinan Peng
Ming Xu
X. Zhang
...
X. Jia
Kun Wang
Qingsong Wen
Xiaofeng Wang
Wei Dong
AAML
188
2
0
01 Dec 2025
Uncover and Unlearn Nuisances: Agnostic Fully Test-Time Adaptation
Machine-mediated learning (ML), 2025
Ponhvoan Srey
Yaxin Shi
Hangwei Qian
Jing Li
Ivor Tsang
TTA
250
0
0
16 Nov 2025
Provably Robust Adaptation for Language-Empowered Foundation Models
Y. Lai
Xiaoyu Xue
Linghui Shen
Yulun Wu
Gaolei Li
Song Guo
Kai Zhou
Bin Xiao
AAML
204
1
0
09 Oct 2025
Multi-level Certified Defense Against Poisoning Attacks in Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2025
Shijie Liu
Andrew C. Cullen
Paul Montague
S. Erfani
Benjamin I. P. Rubinstein
OffRL
AAML
293
4
0
27 May 2025
Practical Poisoning Attacks against Retrieval-Augmented Generation
Baolei Zhang
Yuxiao Chen
Minghong Fang
Zhuqing Liu
Lihai Nie
Tong Li
Zheli Liu
SILM
AAML
516
22
0
04 Apr 2025
Prototype Guided Backdoor Defense
Venkat Adithya Amula
Sunayana Samavedam
Saurabh Saini
Avani Gupta
Narayanan P J
AAML
353
1
0
26 Mar 2025
Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers
Gaojie Jin
Tianjin Huang
Ronghui Mu
Xiaowei Huang
AAML
394
0
0
21 Mar 2025
Trust Under Siege: Label Spoofing Attacks against Machine Learning for Android Malware Detection
Tianwei Lan
Luca Demetrio
Farid Nait-Abdesselam
Yufei Han
Simone Aonzo
AAML
369
4
0
14 Mar 2025
Data Poisoning Attacks to Local Differential Privacy Protocols for Graphs
IEEE International Conference on Data Engineering (ICDE), 2024
Xi He
Kai Huang
Qingqing Ye
Haibo Hu
AAML
272
3
0
31 Dec 2024
Timber! Poisoning Decision Trees
Stefano Calzavara
Lorenzo Cazzaro
Massimo Vettori
AAML
378
1
0
01 Oct 2024
UTrace: Poisoning Forensics for Private Collaborative Learning
Evan Rose
Hidde Lycklama
Harsh Chaudhari
Niklas Britz
Anwar Hithnawi
Alina Oprea
541
2
0
23 Sep 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch
Mahalakshmi Sabanayagam
Debarghya Ghoshdastidar
Stephan Günnemann
AAML
640
6
0
15 Jul 2024
MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
Yanting Wang
Hongye Fu
Wei Zou
Jinyuan Jia
AAML
483
7
0
28 Mar 2024
Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang
Neil Zhenqiang Gong
Michael K. Reiter
338
0
0
03 Dec 2023
ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
Conference on Computer and Communications Security (CCS), 2023
Yuke Hu
Jian Lou
Jiaqi Liu
Wangze Ni
Feng Lin
Zhan Qin
Kui Ren
MU
442
27
0
03 Nov 2023
Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks
Xinglong Chang
Katharina Dost
Gill Dobbie
Jörg Simon Wicker
AAML
257
1
0
24 Oct 2023
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan
Wenxin Ding
Josephine Passananti
Stanley Wu
Haitao Zheng
Ben Y. Zhao
SILM
DiffM
473
100
0
20 Oct 2023
Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2023
Shijie Liu
Andrew C. Cullen
Paul Montague
S. Erfani
Benjamin I. P. Rubinstein
AAML
329
7
0
15 Aug 2023
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas
Florian Kerschbaum
352
1
0
07 May 2023
The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions
Conference on Fairness, Accountability and Transparency (FAccT), 2023
Anna P. Meyer
Aws Albarghouthi
Loris D'Antoni
355
22
0
20 Apr 2023
Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang
Soheil Feizi
AAML
OOD
484
16
0
07 Feb 2023
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2023
Keivan Rezaei
Kiarash Banihashem
Atoosa Malemir Chegini
Soheil Feizi
AAML
529
22
0
05 Feb 2023
PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
387
4
0
27 Jan 2023
Stealthy Backdoor Attack for Code Models
IEEE Transactions on Software Engineering (TSE), 2023
Zhou Yang
Bowen Xu
Jie M. Zhang
Hong Jin Kang
Jieke Shi
Junda He
David Lo
AAML
359
96
0
06 Jan 2023
A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
International Journal of Computer Vision (IJCV), 2022
Yifan Zhang
Xianqiang Lyu
Yixuan Yuan
AAML
3DPC
409
47
0
20 Dec 2022
XRand: Differentially Private Defense against Explanation-Guided Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2022
Truc D. T. Nguyen
Phung Lai
Nhathai Phan
My T. Thai
AAML
SILM
348
22
0
08 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
218
7
0
06 Dec 2022
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Computer Vision and Pattern Recognition (CVPR), 2022
Jinghuai Zhang
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
AAML
485
37
0
15 Nov 2022
COLLIDER: A Robust Training Framework for Backdoor Data
Asian Conference on Computer Vision (ACCV), 2022
H. M. Dolatabadi
S. Erfani
C. Leckie
AAML
254
8
0
13 Oct 2022
On Optimal Learning Under Targeted Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Steve Hanneke
Amin Karbasi
Mohammad Mahmoody
Idan Mehalel
Shay Moran
AAML
FedML
225
11
0
06 Oct 2022
FLCert: Provably Secure Federated Learning against Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2022
Xiaoyu Cao
Zaixi Zhang
Jinyuan Jia
Neil Zhenqiang Gong
FedML
OOD
396
86
0
02 Oct 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Conference on Computer and Communications Security (CCS), 2022
Chulin Xie
Yunhui Long
Pin-Yu Chen
Qinbin Li
Arash Nourian
Sanmi Koyejo
Bo Li
FedML
456
25
0
08 Sep 2022
Reducing Certified Regression to Certified Classification for General Poisoning Attacks
Zayd Hammoudeh
Daniel Lowd
AAML
337
12
0
29 Aug 2022
Lethal Dose Conjecture on Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
FedML
240
17
0
05 Aug 2022
Certifying Data-Bias Robustness in Linear Regression
Anna P. Meyer
Aws Albarghouthi
Loris D'Antoni
233
3
0
07 Jun 2022
Dropbear: Machine Learning Marketplaces made Trustworthy with Byzantine Model Agreement
A. Shamis
Peter R. Pietzuch
Antoine Delignat-Lavaud
Andrew Paverd
Manuel Costa
OOD
140
0
0
31 May 2022
BagFlip: A Certified Defense against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
291
28
0
26 May 2022
On Collective Robustness of Bagging Against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Ruoxin Chen
Zenan Li
Jie Li
Chentao Wu
Junchi Yan
258
25
0
26 May 2022
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari
Matthew Jagielski
Alina Oprea
303
7
0
20 May 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
USENIX Security Symposium (USENIX Security), 2022
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
346
45
0
13 May 2022
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
International Conference on Machine Learning (ICML), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
AAML
297
68
0
05 Feb 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Conference on Computer and Communications Security (CCS), 2022
Zayd Hammoudeh
Daniel Lowd
TDI
352
41
0
25 Jan 2022
How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai
Nhathai Phan
Issa M. Khalil
Abdallah Khreishah
Xintao Wu
AAML
FedML
329
0
0
18 Jan 2022
EIFFeL: Ensuring Integrity for Federated Learning
Conference on Computer and Communications Security (CCS), 2021
A. Chowdhury
Chuan Guo
S. Jha
Laurens van der Maaten
FedML
478
112
0
23 Dec 2021
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda
Saeed Mahloujifar
A. Bhagoji
Supriyo Chakraborty
Prateek Mittal
FedML
AAML
298
121
0
12 Dec 2021
10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia
Hongbin Liu
Neil Zhenqiang Gong
466
11
0
28 Oct 2021
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan
A. Bhagoji
Haitao Zheng
Ben Y. Zhao
AAML
371
64
0
13 Oct 2021
Certifying Robustness to Programmable Data Bias in Decision Trees
Neural Information Processing Systems (NeurIPS), 2021
Anna P. Meyer
Aws Albarghouthi
Loris D'Antoni
195
29
0
08 Oct 2021
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng
Si-An Chen
Won Park
Z. Morley Mao
Ming Jin
R. Jia
AAML
498
227
0
07 Oct 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM
SSL
366
197
0
01 Aug 2021
Page 1 of 2