Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

26 November 2019
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML

Papers citing "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning"

Showing 50 of 151 citing papers.
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
15 Oct 2023

Enabling Quartile-based Estimated-Mean Gradient Aggregation As Baseline for Federated Image Classifications
Yusen Wu, Jamie Deng, Hao Chen, Phuong Nguyen, Yelena Yesha
FedML
21 Sep 2023

Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy
Zikai Zhang, Rui Hu
07 Sep 2023

A Survey for Federated Learning Evaluations: Goals and Measures
Di Chai, Leye Wang, Liu Yang, Junxue Zhang, Kai Chen, Qian Yang
ELM, FedML
23 Aug 2023

A Four-Pronged Defense Against Byzantine Attacks in Federated Learning
Wei Wan, Shengshan Hu, Minghui Li, Jianrong Lu, Longling Zhang, Leo Yu Zhang, Hai Jin
AAML, FedML
07 Aug 2023

Compressed Private Aggregation for Scalable and Robust Federated Learning over Massive Networks
Natalie Lang, Nir Shlezinger, Rafael G. L. D'Oliveira, S. E. Rouayheb
FedML
01 Aug 2023
High Dimensional Distributed Gradient Descent with Arbitrary Number of Byzantine Attackers
Puning Zhao, Zhiguo Wan
OOD, FedML
25 Jul 2023

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
FedML
20 Jul 2023

FedDefender: Client-Side Attack-Tolerant Federated Learning
Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha
FedML, AAML
18 Jul 2023

Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning
Md Tamjid Hossain, Hung M. La
AAML
01 Jul 2023

When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, C. L. P. Chen, Yaochu Jin
AIFin, AI4CE
27 Jun 2023
A First Order Meta Stackelberg Method for Robust Federated Learning
Yunian Pan, Tao Li, Henger Li, Tianyi Xu, Zizhan Zheng, Quanyan Zhu
FedML
23 Jun 2023

Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses
M. Ferrag, Othmane Friha, B. Kantarci, Norbert Tihanyi, Lucas C. Cordeiro, Merouane Debbah, Djallel Hamouda, Muna Al-Hawawreh, K. Choo
17 Jun 2023

Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko
AAML
06 Jun 2023

A Framework for Incentivized Collaborative Learning
Xinran Wang, Qi Le, Ahmad Faraz Khan, Jie Ding, A. Anwar
FedML
26 May 2023

PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy
Achintha Wijesinghe, Songyang Zhang, Zhi Ding
FedML
19 May 2023

Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization
Christian A. Schroth, Stefan Vlaski, A. Zoubir
FedML
27 Apr 2023
Blockchain-based Federated Learning with SMPC Model Verification Against Poisoning Attack for Healthcare Systems
Aditya Pribadi Kalapaaking, Ibrahim Khalil, X. Yi
26 Apr 2023

Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection
Edoardo Gabrielli, Dimitri Belli, Vittorio Miori, Gabriele Tolomei
AAML
29 Mar 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
FedML
21 Feb 2023

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng
FedML
03 Feb 2023

BayBFed: Bayesian Backdoor Defense for Federated Learning
Kavita Kumari, Phillip Rieger, Hossein Fereidooni, Murtuza Jadliwala, A. Sadeghi
AAML, FedML
23 Jan 2023

Poisoning Attacks and Defenses in Federated Learning: A Survey
S. Sagar, Chang-Sun Li, S. W. Loke, Jinho D. Choi
OOD, FedML
14 Jan 2023
XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022

Skefl: Single-Key Homomorphic Encryption for Secure Federated Learning
Dongfang Zhao
FedML
21 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun
13 Dec 2022

Security Analysis of SplitFed Learning
M. A. Khan, Virat Shejwalkar, Amir Houmansadr, Fatima M. Anwar
FedML
04 Dec 2022

Castell: Scalable Joint Probability Estimation of Multi-dimensional Data Randomized with Local Differential Privacy
H. Kikuchi
03 Dec 2022

Federated Learning Attacks and Defenses: A Survey
Yao Chen, Yijie Gui, Hong Lin, Wensheng Gan, Yongdong Wu
FedML
27 Nov 2022

FedCut: A Spectral Analysis Framework for Reliable Detection of Byzantine Colluders
Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
AAML, FedML
24 Nov 2022
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks
Naoya Tezuka, H. Ochiai, Yuwei Sun, Hiroshi Esaki
AAML
07 Nov 2022

Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks
Guanqiang Zhou, Ping Xu, Yue Wang, Zhi Tian
OOD, FedML
29 Oct 2022

Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance
Youngjoon Lee, Sangwoo Park, Joonhyuk Kang
FedML
29 Oct 2022

Robustness of Locally Differentially Private Graph Analysis Against Poisoning
Jacob Imola, A. Chowdhury, Kamalika Chaudhuri
AAML
25 Oct 2022

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Shuyang Cheng, Shengwei An, ..., Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
FedML
23 Oct 2022
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong
FedML, MU, AAML
20 Oct 2022

Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu
AAML
14 Sep 2022

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML
08 Sep 2022

Network-Level Adversaries in Federated Learning
Giorgio Severi, Matthew Jagielski, Gokberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru
FedML
27 Aug 2022

MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang, Xingkai Wang, H. Chen, Jérémie Decouchant, S. Picek, Z. Liu, K. Liang
22 Aug 2022

Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz
AAML, FedML
21 Aug 2022
Federated Learning for Medical Applications: A Taxonomy, Current Trends, Challenges, and Future Research Directions
A. Rauniyar, D. Hagos, Debesh Jha, J. E. Haakegaard, Ulas Bagci, D. Rawat, Vladimir Vlassov
OOD
05 Aug 2022

FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, FedML
19 Jul 2022

MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning
Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yan Gao, S. Al-Sarawi, Derek Abbott
FedML
19 Jul 2022

Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza, Shujun Li, K. Tran, L. Koehl, Kim Duc Tran
AAML
18 Jul 2022

Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar
OOD
15 Jul 2022
Enhanced Security and Privacy via Fragmented Federated Learning
N. Jebreel, J. Domingo-Ferrer, Alberto Blanco-Justicia, David Sánchez
FedML
13 Jul 2022

Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms
Ehsan Hallaji, R. Razavi-Far, M. Saif
AAML, FedML
05 Jul 2022

Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
Ruinan Jin, Xiaoxiao Li
AAML, FedML, MedIm
02 Jul 2022

DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang
AAML
18 Jun 2022