BayBFed: Bayesian Backdoor Defense for Federated Learning
arXiv: 2301.09508
23 January 2023
Kavita Kumari, Phillip Rieger, Hossein Fereidooni, Murtuza Jadliwala, A. Sadeghi
AAML, FedML
Papers citing "BayBFed: Bayesian Backdoor Defense for Federated Learning" (20 of 20 papers shown)
TrojanDam: Detection-Free Backdoor Defense in Federated Learning through Proactive Model Robustification utilizing OOD Data
Yanbo Dai, Songze Li, Zihan Gan, Xueluan Gong
AAML, FedML · 22 Apr 2025
SMTFL: Secure Model Training to Untrusted Participants in Federated Learning
Zhihui Zhao, Xiaorong Dong, Yimo Ren, Jianhua Wang, Dan Yu, Hongsong Zhu, Yongle Chen
24 Feb 2025
Do We Really Need to Design New Byzantine-robust Aggregation Rules?
Minghong Fang, Seyedsina Nabavirazavi, Zhuqing Liu, Wei Sun, S. Iyengar, Haibo Yang
AAML, OOD · 29 Jan 2025
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning (Full Version)
Phillip Rieger, Alessandro Pegoraro, Kavita Kumari, Tigist Abera, Jonathan Knauer, A. Sadeghi
AAML · 11 Jan 2025
Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
Bin Li, Xiaoye Miao, Yongheng Shang, Xinkui Zhao
AAML · 08 Jan 2025
FedBlock: A Blockchain Approach to Federated Learning against Backdoor Attacks
D. Nguyen, Phi Le Nguyen, T. Nguyen, Hieu H. Pham, D. Tran
FedML · 05 Nov 2024
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning
Jiale Zhang, Chengcheng Zhu, Bosen Rao, Hao Sui, Xiaobing Sun, Bing Chen, Chunyi Zhou, Shouling Ji
AAML · 02 Oct 2024
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran
FedML · 31 May 2024
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning
Songze Li, Yanbo Dai
AAML, FedML · 31 May 2024
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Yujie Zhang, Neil Zhenqiang Gong, Michael K. Reiter
FedML · 10 May 2024
The last Dance : Robust backdoor attack via diffusion models and bayesian approach
Orson Mengara
DiffM · 05 Feb 2024
FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning
Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger, Alexandra Dmitrienko, Ahmad-Reza Sadeghi
AAML · 07 Dec 2023
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng Lin, Liwang Lu, Kui Ren
AAML · 20 Oct 2023
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen, Ahmad Sadeghi
FedML, AAML · 03 Oct 2023
FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, ..., Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
SILM · 08 Jun 2023
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko
AAML · 06 Jun 2023
Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning
Manaar Alam, Hithem Lamri, Michail Maniatakos
FedML, AAML, MU · 20 Apr 2023
CrowdGuard: Federated Backdoor Detection in Federated Learning
Phillip Rieger, T. Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi (Technical University of Darmstadt)
AAML, FedML · 14 Oct 2022
MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang, Xingkai Wang, H. Chen, Jérémie Decouchant, S. Picek, Z. Liu, K. Liang
22 Aug 2022
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020