Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning

arXiv:2405.06206 · 10 May 2024
Yujie Zhang, Neil Zhenqiang Gong, Michael K. Reiter
Topic: FedML

Papers citing "Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning" (3 papers)
FLCert: Provably Secure Federated Learning against Poisoning Attacks
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
FedML, OOD · 02 Oct 2022

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018