
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2020
11 August 2020
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Papers citing "Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks"

Showing 50 of 68 papers.
EmoRAG: Evaluating RAG Robustness to Symbolic Perturbations
Xinyun Zhou, Xinfeng Li, Yinan Peng, Ming Xu, X. Zhang, ..., X. Jia, Kun Wang, Qingsong Wen, Xiaofeng Wang, Wei Dong
01 Dec 2025

Uncover and Unlearn Nuisances: Agnostic Fully Test-Time Adaptation
Machine-mediated learning (ML), 2025
Ponhvoan Srey, Yaxin Shi, Hangwei Qian, Jing Li, Ivor Tsang
16 Nov 2025

Provably Robust Adaptation for Language-Empowered Foundation Models
Y. Lai, Xiaoyu Xue, Linghui Shen, Yulun Wu, Gaolei Li, Song Guo, Kai Zhou, Bin Xiao
09 Oct 2025

Multi-level Certified Defense Against Poisoning Attacks in Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2025
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
27 May 2025

Practical Poisoning Attacks against Retrieval-Augmented Generation
Baolei Zhang, Yuxiao Chen, Minghong Fang, Zhuqing Liu, Lihai Nie, Tong Li, Zheli Liu
04 Apr 2025

Prototype Guided Backdoor Defense
Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J
26 Mar 2025
Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers
Gaojie Jin, Tianjin Huang, Ronghui Mu, Xiaowei Huang
21 Mar 2025

Trust Under Siege: Label Spoofing Attacks against Machine Learning for Android Malware Detection
Tianwei Lan, Luca Demetrio, Farid Nait-Abdesselam, Yufei Han, Simone Aonzo
14 Mar 2025

Data Poisoning Attacks to Local Differential Privacy Protocols for Graphs
IEEE International Conference on Data Engineering (ICDE), 2024
Xi He, Kai Huang, Qingqing Ye, Haibo Hu
31 Dec 2024

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
01 Oct 2024

UTrace: Poisoning Forensics for Private Collaborative Learning
Evan Rose, Hidde Lycklama, Harsh Chaudhari, Niklas Britz, Anwar Hithnawi, Alina Oprea
23 Sep 2024

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
15 Jul 2024

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia
28 Mar 2024

Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
03 Dec 2023

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
Conference on Computer and Communications Security (CCS), 2023
Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren
03 Nov 2023

Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks
Xinglong Chang, Katharina Dost, Gill Dobbie, Jörg Simon Wicker
24 Oct 2023
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
20 Oct 2023

Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2023
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
15 Aug 2023

Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas, Florian Kerschbaum
07 May 2023

The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions
Conference on Fairness, Accountability and Transparency (FAccT), 2023
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
20 Apr 2023

Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang, Soheil Feizi
07 Feb 2023

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2023
Keivan Rezaei, Kiarash Banihashem, Atoosa Malemir Chegini, Soheil Feizi
05 Feb 2023

PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
27 Jan 2023

Stealthy Backdoor Attack for Code Models
IEEE Transactions on Software Engineering (TSE), 2023
Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo
06 Jan 2023

A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
International Journal of Computer Vision (IJCV), 2022
Yifan Zhang, Xianqiang Lyu, Yixuan Yuan
20 Dec 2022
XRand: Differentially Private Defense against Explanation-Guided Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2022
Truc D. T. Nguyen, Phung Lai, Nhathai Phan, My T. Thai
08 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
06 Dec 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Computer Vision and Pattern Recognition (CVPR), 2022
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
15 Nov 2022

COLLIDER: A Robust Training Framework for Backdoor Data
Asian Conference on Computer Vision (ACCV), 2022
H. M. Dolatabadi, S. Erfani, C. Leckie
13 Oct 2022

On Optimal Learning Under Targeted Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
06 Oct 2022

FLCert: Provably Secure Federated Learning against Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2022
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
02 Oct 2022

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Conference on Computer and Communications Security (CCS), 2022
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
08 Sep 2022

Reducing Certified Regression to Certified Classification for General Poisoning Attacks
Zayd Hammoudeh, Daniel Lowd
29 Aug 2022

Lethal Dose Conjecture on Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Wenxiao Wang, Alexander Levine, Soheil Feizi
05 Aug 2022

Certifying Data-Bias Robustness in Linear Regression
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
07 Jun 2022

Dropbear: Machine Learning Marketplaces made Trustworthy with Byzantine Model Agreement
A. Shamis, Peter R. Pietzuch, Antoine Delignat-Lavaud, Andrew Paverd, Manuel Costa
31 May 2022
BagFlip: A Certified Defense against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
26 May 2022

On Collective Robustness of Bagging Against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan
26 May 2022

SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari, Matthew Jagielski, Alina Oprea
20 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
USENIX Security Symposium (USENIX Security), 2022
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
International Conference on Machine Learning (ICML), 2022
Wenxiao Wang, Alexander Levine, Soheil Feizi
05 Feb 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Conference on Computer and Communications Security (CCS), 2022
Zayd Hammoudeh, Daniel Lowd
25 Jan 2022

How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
18 Jan 2022

EIFFeL: Ensuring Integrity for Federated Learning
Conference on Computer and Communications Security (CCS), 2021
A. Chowdhury, Chuan Guo, S. Jha, Laurens van der Maaten
23 Dec 2021

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
12 Dec 2021

10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
28 Oct 2021

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
13 Oct 2021

Certifying Robustness to Programmable Data Bias in Decision Trees
Neural Information Processing Systems (NeurIPS), 2021
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
08 Oct 2021

Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng, Si-An Chen, Won Park, Z. Morley Mao, Ming Jin, R. Jia
07 Oct 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
01 Aug 2021