Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

25 October 2021
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein
FedML

Papers citing "Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models"

50 of 87 citing papers shown

Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning
Francesco Diana, André Nusser, Chuan Xu, Giovanni Neglia
15 May 2025

Empirical Calibration and Metric Differential Privacy in Language Models
Pedro Faustini, Natasha Fernandes, Annabelle McIver, Mark Dras
18 Mar 2025

FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning
Mingcong Xu, Xiaojin Zhang, Wei Chen, Hai Jin
FedML
08 Mar 2025

GRAIN: Exact Graph Reconstruction from Gradients
Maria Drencheva, Ivo Petrov, Maximilian Baader, Dimitar I. Dimitrov, Martin Vechev
FedML
03 Mar 2025

A Sample-Level Evaluation and Generative Framework for Model Inversion Attacks
Haoyang Li, Li Bai, Qingqing Ye, Haibo Hu, Yaxin Xiao, Huadi Zheng, Jianliang Xu
26 Feb 2025

CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Bruno Ribeiro, Shengwei An, Pin-Yu Chen, X. Zhang, Ninghui Li
28 Jan 2025

Gradient Inversion Attack on Graph Neural Networks
Divya Anand Sinha, Yezi Liu, Ruijie Du, Yanning Shen
FedML
29 Nov 2024

Attribute Inference Attacks for Federated Regression Tasks
Francesco Diana, Othmane Marfoq, Chuan Xu, Giovanni Neglia, F. Giroire, Eoin Thomas
AAML
19 Nov 2024

FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses
Isaac Baglin, Xiatian Zhu, Simon Hadfield
FedML
05 Nov 2024

Federated Black-Box Adaptation for Semantic Segmentation
Jay N. Paranjape, S. Sikder, S. Vedula, Vishal M. Patel
FedML
31 Oct 2024

Federated Learning Nodes Can Reconstruct Peers' Image Data
Ethan Wilson, Kai Yue, Chau-Wai Wong, H. Dai
FedML
07 Oct 2024

Privacy Attack in Federated Learning is Not Easy: An Experimental Study
Hangyu Zhu, Liyuan Huang, Zhenping Xie
FedML
28 Sep 2024

In-depth Analysis of Privacy Threats in Federated Learning for Medical Data
B. Das, M. H. Amini, Yanzhao Wu
27 Sep 2024

Perfect Gradient Inversion in Federated Learning: A New Paradigm from the Hidden Subset Sum Problem
Qiongxiu Li, Lixia Luo, Agnese Gini, Changlong Ji, Zhanhao Hu, Xiao-Li Li, Chengfang Fang, Jie Shi, Xiaolin Hu
FedML
21 Sep 2024

Understanding Data Reconstruction Leakage in Federated Learning from a Theoretical Perspective
Zifan Wang, Binghui Zhang, Meng Pang, Yuan Hong, Binghui Wang
FedML
22 Aug 2024

Efficient Byzantine-Robust and Provably Privacy-Preserving Federated Learning
Chenfei Nie, Qiang Li, Yuxin Yang, Yuede Ji, Binghui Wang
29 Jul 2024

Harvesting Private Medical Images in Federated Learning Systems with Crafted Models
Shanghao Shi, Md Shahedul Haque, Abhijeet Parida, M. Linguraru, Y. T. Hou, Syed Muhammad Anwar, W. Lou
FedML
13 Jul 2024

QBI: Quantile-based Bias Initialization for Efficient Private Data Reconstruction in Federated Learning
Micha V. Nowak, Tim P. Bott, David Khachaturov, Frank Puppe, Adrian Krenzer, Amar Hekalo
FedML
26 Jun 2024

Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning
Zhibo Wang, Zhiwei Chang, Jiahui Hu, Xiaoyi Pang, Jiacheng Du, Yongle Chen, Kui Ren
FedML
22 Jun 2024

Byzantine-Robust Decentralized Federated Learning
Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Zhenqiang Gong
AAML, FedML, OOD
14 Jun 2024

Privacy Challenges in Meta-Learning: An Investigation on Model-Agnostic Meta-Learning
Mina Rafiei, Mohammadmahdi Maheri, Hamid R. Rabiee
01 Jun 2024

DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Muller, Martin Vechev
FedML
24 May 2024

Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy
Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, Ramesh Raskar
AAML
16 May 2024

GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge
Jin Qian, Kaimin Wei, Yongdong Wu, Jilian Zhang, Jipeng Chen, Huan Bao
06 May 2024

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
SILM, AAML
01 Apr 2024

Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
Shanglun Feng, Florian Tramèr
SILM
30 Mar 2024

Leak and Learn: An Attacker's Cookbook to Train Using Leaked Data from Federated Learning
Joshua C. Zhao, Ahaan Dabholkar, Atul Sharma, Saurabh Bagchi
FedML
26 Mar 2024

Secure Aggregation is Not Private Against Membership Inference Attacks
K. Ngo, Johan Ostman, Giuseppe Durisi, Alexandre Graell i Amat
FedML
26 Mar 2024

Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack
Yinggui Wang, Yuanqing Huang, Jianshu Li, Le Yang, Kai Song, Lei Wang
AAML, PICV
14 Mar 2024

Visual Privacy Auditing with Diffusion Models
Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Alexander Ziller
DiffM, AAML
12 Mar 2024

SPEAR: Exact Gradient Inversion of Batches in Federated Learning
Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Muller, Martin Vechev
FedML
06 Mar 2024

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
AAML
04 Mar 2024

Analysis of Privacy Leakage in Federated Large Language Models
Minh Nhat Vu, Truc D. T. Nguyen, Tre' R. Jeter, My T. Thai
02 Mar 2024

Bounding Reconstruction Attack Success of Adversaries Without Data Priors
Alexander Ziller, Anneliese Riess, Kristian Schwethelm, Tamara T. Mueller, Daniel Rueckert, Georgios Kaissis
MIACV, AAML
20 Feb 2024

Data Reconstruction Attacks and Defenses: A Systematic Evaluation
Sheng Liu, Zihan Wang, Yuxiao Chen, Qi Lei
AAML, MIACV
13 Feb 2024

Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off
Yuecheng Li, Lele Fu, Tong Wang, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng, Chuan Chen
FedML
10 Feb 2024

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Lulu Xue, Shengshan Hu, Rui-Qing Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, Dezhong Yao
AAML
30 Jan 2024

Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
Jianwei Li, Sheng Liu, Qi Lei
PILM, SILM, AAML
10 Dec 2023

Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging
Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard F. Feiner, Johannes Brandt, R. Braren, Daniel Rueckert, Georgios Kaissis
05 Dec 2023

OASIS: Offsetting Active Reconstruction Attacks in Federated Learning
Tre' R. Jeter, Truc D. T. Nguyen, Raed Alharbi, My T. Thai
AAML
23 Nov 2023

Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction
Shanghao Shi, Ning Wang, Yang Xiao, Chaoyu Zhang, Yi Shi, Y. T. Hou, W. Lou
10 Nov 2023

Maximum Knowledge Orthogonality Reconstruction with Gradients in Federated Learning
Feng Wang, Senem Velipasalar, M. C. Gursoy
30 Oct 2023

RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation
Dzung Pham, Shreyas Kulkarni, Amir Houmansadr
29 Oct 2023

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering
Md. Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
AAML, FedML
24 Oct 2023

Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
Hongsheng Hu, Xuyun Zhang, Z. Salcic, Lichao Sun, K. Choo, Gillian Dobbie
30 Sep 2023

Understanding Deep Gradient Leakage via Inversion Influence Functions
Haobo Zhang, Junyuan Hong, Yuyang Deng, M. Mahdavi, Jiayu Zhou
FedML
22 Sep 2023

Client-side Gradient Inversion Against Federated Learning from Poisoning
Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jinchao Zhang, Yang Xiang
AAML
14 Sep 2023

Samplable Anonymous Aggregation for Private Federated Data Analysis
Kunal Talwar, Shan Wang, Audra McMillan, Vojta Jina, Vitaly Feldman, ..., Congzheng Song, Karl Tarbe, Sebastian Vogt, L. Winstrom, Shundong Zhou
FedML
27 Jul 2023

Private Federated Learning with Autotuned Compression
Enayat Ullah, Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh
FedML
20 Jul 2023

Heterogeneous Federated Learning: State-of-the-art and Research Challenges
Mang Ye, Xiuwen Fang, Bo Du, PongChi Yuen, Dacheng Tao
FedML, AAML
20 Jul 2023