ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2012.03765
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
7 December 2020
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
AAML
Papers citing "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks"

12 of 12 papers shown
Cert-SSB: Toward Certified Sample-Specific Backdoor Defense
Ting Qiao, Y. Wang, Xing Liu, Sixing Wu, Jianbing Li, Yiming Li
AAML, SILM · 30 Apr 2025
Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024
Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs
Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang
30 Mar 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
AAML · 22 Feb 2024
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong
26 Mar 2023
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 15 Nov 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML · 08 Sep 2022
DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang
AAML · 18 Jun 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022
Certifying Robustness to Programmable Data Bias in Decision Trees
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
08 Oct 2021
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018