ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
22 May 2025
Huazi Pan, Yanjun Zhang, Leo Yu Zhang, Scott Adams, Abbas Kouzani, Suiyang Khoo
FedML

Papers citing "Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach"

19 / 19 papers shown
AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang, Guangdong Bai, Yong Xiang
AAML · 13 Nov 2023
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning
Hangtao Zhang, Zeming Yao, L. Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li
21 Apr 2023
FL-Defender: Combating Targeted Attacks in Federated Learning
N. Jebreel, J. Domingo-Ferrer
AAML, FedML · 02 Jul 2022
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020
Learning from History for Byzantine Robust Optimization
Sai Praneeth Karimireddy, Lie He, Martin Jaggi
FedML, AAML · 18 Dec 2020
PrivColl: Practical Privacy-Preserving Collaborative Machine Learning
Yanjun Zhang, Guangdong Bai, Xue Li, Caitlin I. Curtis, Chong Chen, R. Ko
FedML · 14 Jul 2020
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML · 26 Nov 2019
Can You Really Backdoor Federated Learning?
Ziteng Sun, Peter Kairouz, A. Suresh, H. B. McMahan
FedML · 18 Nov 2019
Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
FedML, AAML · 10 Mar 2019
A Little Is Enough: Circumventing Defenses For Distributed Learning
Moran Baruch, Gilad Baruch, Yoav Goldberg
FedML · 16 Feb 2019
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018
Universal Multi-Party Poisoning Attacks
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed
AAML · 10 Sep 2018
How To Backdoor Federated Learning
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov
SILM, FedML · 02 Jul 2018
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu
AAML · 01 Apr 2018
Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Dong Yin, Yudong Chen, Kannan Ramchandran, Peter L. Bartlett
OOD, FedML · 05 Mar 2018
The Hidden Vulnerability of Distributed Learning in Byzantium
El-Mahdi El-Mhamdi, R. Guerraoui, Sébastien Rouault
AAML, FedML · 22 Feb 2018
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 29 Aug 2017
Learning Feature Pyramids for Human Pose Estimation
Wei Yang, Shuang Li, Wanli Ouyang, Hongsheng Li, Xiaogang Wang
3DH · 03 Aug 2017
Communication-Efficient Learning of Deep Networks from Decentralized Data
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas
FedML · 17 Feb 2016