Secure Federated Learning against Model Poisoning Attacks via Client Filtering

arXiv:2304.00160 · 31 March 2023
D. Yaldiz, Tuo Zhang, Salman Avestimehr
Topics: AAML, FedML
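
This listing gives no method details; for orientation only, below is a minimal, assumption-based sketch of similarity-based client filtering before aggregation, a common defense pattern against model poisoning. The function names, threshold, and median reference are illustrative choices and not necessarily the algorithm proposed in the paper.

# Illustrative sketch only; not the algorithm from arXiv:2304.00160.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two flattened update vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def filter_and_aggregate(client_updates, sim_threshold=0.0):
    # Drop client updates whose direction disagrees with a robust (median)
    # reference, then average the survivors (plain FedAvg on the kept set).
    updates = [np.asarray(u, dtype=np.float64).ravel() for u in client_updates]
    reference = np.median(np.stack(updates), axis=0)
    kept = [u for u in updates if cosine(u, reference) > sim_threshold]
    if not kept:
        # If everything was filtered, fall back to the robust reference.
        return reference
    return np.mean(np.stack(kept), axis=0)

# Toy example: two benign clients and one sign-flipped (poisoned) update.
benign = [np.array([0.9, 1.1, 1.0]), np.array([1.0, 0.9, 1.1])]
poisoned = [-10.0 * benign[0]]
print(filter_and_aggregate(benign + poisoned))  # stays close to the benign mean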

Papers citing "Secure Federated Learning against Model Poisoning Attacks via Client Filtering" (9 of 9 papers shown)

Secure Cluster-Based Hierarchical Federated Learning in Vehicular Networks
M. S. HaghighiFard, Sinem Coleri
Topics: AAML
02 May 2025

Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning
Runhua Xu, Shiqi Gao, Chao Li, J. Joshi, Jianxin Li
08 Feb 2025

PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning
Sizai Hou, Songze Li, Tayyebeh Jahani-Nezhad, Giuseppe Caire
Topics: FedML
12 Jul 2024

No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning
Zhibo Xing, Zijian Zhang, Zi'ang Zhang, Jiamou Liu, Liehuang Zhu, Giovanni Russello
Topics: FedML
03 Jun 2024

Secure Hierarchical Federated Learning in Vehicular Networks Using Dynamic Client Selection and Anomaly Detection
M. S. HaghighiFard, Sinem Coleri
Topics: AAML
25 May 2024

RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation
Peihua Mai, Ran Yan, Yan Pang
Topics: FedML
24 May 2024

Ensembler: Combating model inversion attacks using model ensemble during collaborative inference
Dancheng Liu, Jinjun Xiong
Topics: MIACV, FedML, AAML
19 Jan 2024

FedMultimodal: A Benchmark For Multimodal Federated Learning
Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan
15 Jun 2023

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
Topics: FedML
27 Dec 2020