Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Yujie Zhang, Neil Zhenqiang Gong, Michael K. Reiter
10 May 2024 · arXiv:2405.06206 · FedML
Papers citing "Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning" (3 of 3 shown):

1. FLCert: Provably Secure Federated Learning against Poisoning Attacks. Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong. 02 Oct 2022. (FedML, OOD)
2. FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong. 27 Dec 2020. (FedML)
3. Analyzing Federated Learning through an Adversarial Lens. A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo. 29 Nov 2018. (FedML)