Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein
arXiv:2110.13057 · 25 October 2021 · FedML

Papers citing "Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models"

37 / 87 papers shown
Privacy and Fairness in Federated Learning: on the Perspective of Trade-off
Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu
FedML · 25 Jun 2023

Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings
Disha Makhija, Joydeep Ghosh, Nhat Ho
FedML · 13 Jun 2023

SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning
Tanguy Marchand, Regis Loeb, Ulysse Marteau-Ferey, Jean Ogier du Terrail, Arthur Pignet
FedML · 13 Jun 2023

FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, ..., Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
SILM · 08 Jun 2023

Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev
AAML, FedML · 05 Jun 2023

The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning
Joshua C. Zhao, A. Elkordy, Atul Sharma, Yahya H. Ezzeldin, A. Avestimehr, S. Bagchi
FedML · 27 Mar 2023

LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation
Joshua C. Zhao, Atul Sharma, A. Elkordy, Yahya H. Ezzeldin, Salman Avestimehr, S. Bagchi
AAML, FedML · 21 Mar 2023

Manipulating Transfer Learning for Property Inference
Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David E. Evans
AAML · 21 Mar 2023

Client-specific Property Inference against Secure Aggregation in Federated Learning
Raouf Kerkouche, G. Ács, Mario Fritz
FedML · 07 Mar 2023

Active Membership Inference Attack under Local Differential Privacy in Federated Learning
Truc D. T. Nguyen, Phung Lai, K. Tran, Nhathai Phan, My T. Thai
FedML · 24 Feb 2023

WW-FL: Secure and Private Large-Scale Federated Learning
F. Marx, T. Schneider, Ajith Suresh, Tobias Wehrle, Christian Weinert, Hossein Yalame
FedML · 20 Feb 2023

Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus R. Makowski, S. Nebelung, R. Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis
MedIm · 03 Feb 2023

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML · 09 Jan 2023

DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics
Renjie Pi, Weizhong Zhang, Yueqi Xie, Jiahui Gao, Xiaoyu Wang, Sunghun Kim, Qifeng Chen
DD · 20 Nov 2022

M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models
Linshan Hou, Zhongyun Hua, Yuhong Li, Yifeng Zheng, Leo Yu Zhang
AAML · 03 Nov 2022

Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Ruihan Wu, Xiangyu Chen, Chuan Guo, Kilian Q. Weinberger
FedML · 19 Oct 2022

ScionFL: Efficient and Robust Secure Quantized Aggregation
Y. Ben-Itzhak, Helen Mollering, Benny Pinkas, T. Schneider, Ajith Suresh, Oleksandr Tkachenko, S. Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai
13 Oct 2022

CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
Samuel Maddock, Alexandre Sablayrolles, Pierre Stock
FedML · 06 Oct 2022

TabLeak: Tabular Data Leakage in Federated Learning
Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev
FedML · 04 Oct 2022

Concealing Sensitive Samples against Gradient Leakage in Federated Learning
Jing Wu, Munawar Hayat, Min Zhou, Mehrtash Harandi
FedML · 13 Sep 2022

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis
Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. E. Suh, Moinuddin K. Qureshi, Hsien-Hsin S. Lee
FedML · 12 Sep 2022

Accelerated Federated Learning with Decoupled Adaptive Optimization
Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou
AI4CE, FedML · 14 Jul 2022

Data Leakage in Federated Averaging
Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev
FedML · 24 Jun 2022

Decoupled Federated Learning for ASR with Non-IID Data
Hanjing Zhu, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan
18 Jun 2022

Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Kai Yue, Richeng Jin, Chau-Wai Wong, D. Baron, H. Dai
FedML · 08 Jun 2022

Secure Federated Clustering
Songze Li, Sizai Hou, Baturalp Buyukates, A. Avestimehr
FedML · 31 May 2022

On the (In)security of Peer-to-Peer Decentralized Machine Learning
Dario Pasquini, Mathilde Raynal, Carmela Troncoso
OOD, FedML · 17 May 2022

AdaBest: Minimizing Client Drift in Federated Learning via Adaptive Bias Estimation
Farshid Varno, Marzie Saghayi, Laya Rafiee, Sharut Gupta, Stan Matwin, Mohammad Havaei
FedML · 27 Apr 2022

Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning
Georg Pichler, Marco Romanelli, L. Rey Vega, Pablo Piantanida
FedML · 30 Mar 2022

Preserving Privacy and Security in Federated Learning
Truc D. T. Nguyen, My T. Thai
FedML · 07 Feb 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML · 29 Jan 2022

TOFU: Towards Obfuscated Federated Updates by Encoding Weight Updates into Gradients from Proxy Data
Isha Garg, M. Nagaraj, Kaushik Roy
FedML · 21 Jan 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML · 06 Dec 2021

Eluding Secure Aggregation in Federated Learning via Model Inconsistency
Dario Pasquini, Danilo Francati, G. Ateniese
FedML · 14 Nov 2021

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
FedML · 14 Jul 2021

RoFL: Robustness of Secure Federated Learning
Hidde Lycklama, Lukas Burkhalter, Alexander Viand, Nicolas Küchler, Anwar Hithnawi
FedML · 07 Jul 2021

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL · 09 Mar 2020