ResearchTrend.AI
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
11 August 2020
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
SILM

Papers citing "Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks"

24 / 24 papers shown
Cert-SSB: Toward Certified Sample-Specific Backdoor Defense
Ting Qiao, Y. Wang, Xing Liu, Sixing Wu, Jianbing Li, Yiming Li
AAML, SILM · 30 Apr 2025
Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024
A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks
Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu
10 Jun 2024
Learning from Uncertain Data: From Possible Worlds to Possible Models
Jiongli Zhu, Su Feng, Boris Glavic, Babak Salimi
28 May 2024
Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
AAML · 22 Feb 2024
Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
AAML · 15 Aug 2023
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong
26 Mar 2023
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 15 Nov 2022
On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
AAML, FedML · 06 Oct 2022
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun
AAML, OOD · 28 Sep 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML · 08 Sep 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022
How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
AAML, FedML · 18 Jan 2022
Certifying Robustness to Programmable Data Bias in Decision Trees
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
08 Oct 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM, SSL · 01 Aug 2021
Data Poisoning Attacks to Deep Learning Based Recommender Systems
Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu
AAML · 07 Jan 2021
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
AAML · 26 Oct 2020
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
Linhai Ma, Liang Liang
AAML · 19 May 2020
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 09 Feb 2020
Data Poisoning Attacks to Local Differential Privacy Protocols
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 05 Nov 2019
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018