Bayesian Framework for Gradient Leakage
Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev
8 November 2021 [FedML]

Papers citing "Bayesian Framework for Gradient Leakage"

27 papers shown
  • A Sample-Level Evaluation and Generative Framework for Model Inversion Attacks. Haoyang Li, Li Bai, Qingqing Ye, Haibo Hu, Yaxin Xiao, Huadi Zheng, Jianliang Xu. 26 Feb 2025.
  • Exploring User-level Gradient Inversion with a Diffusion Prior. Zhuohang Li, Andrew Lowy, Jing Liu, T. Koike-Akino, Bradley Malin, K. Parsons, Ye Wang. 11 Sep 2024. [DiffM]
  • Understanding Data Reconstruction Leakage in Federated Learning from a Theoretical Perspective. Zifan Wang, Binghui Zhang, Meng Pang, Yuan Hong, Binghui Wang. 22 Aug 2024. [FedML]
  • Efficient Byzantine-Robust and Provably Privacy-Preserving Federated Learning. Chenfei Nie, Qiang Li, Yuxin Yang, Yuede Ji, Binghui Wang. 29 Jul 2024.
  • BACON: Bayesian Optimal Condensation Framework for Dataset Distillation. Zheng Zhou, Hong Zhao, Guangliang Cheng, Xiangtai Li, Shuchang Lyu, Wenquan Feng, Qi Zhao. 03 Jun 2024. [DD]
  • Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients. Weijun Li, Qiongkai Xu, Mark Dras. 03 Jun 2024. [PILM]
  • SPEAR: Exact Gradient Inversion of Batches in Federated Learning. Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Muller, Martin Vechev. 06 Mar 2024. [FedML]
  • Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks. Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang. 04 Mar 2024. [AAML]
  • Data Reconstruction Attacks and Defenses: A Systematic Evaluation. Sheng Liu, Zihan Wang, Yuxiao Chen, Qi Lei. 13 Feb 2024. [AAML, MIACV]
  • Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks. Yanbo Wang, Jian Liang, R. He. 05 Feb 2024. [AAML]
  • GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks? Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui. 22 Jan 2024.
  • From Principle to Practice: Vertical Data Minimization for Machine Learning. Robin Staab, Nikola Jovanović, Mislav Balunović, Martin Vechev. 17 Nov 2023.
  • Understanding Deep Gradient Leakage via Inversion Influence Functions. Haobo Zhang, Junyuan Hong, Yuyang Deng, M. Mahdavi, Jiayu Zhou. 22 Sep 2023. [FedML]
  • Expressive variational quantum circuits provide inherent privacy in federated learning. Niraj Kumar, Jamie Heredge, Changhao Li, Shaltiel Eloul, Shree Hari Sureshbabu, Marco Pistoia. 22 Sep 2023. [FedML]
  • Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks. Daniel Scheliga, Patrick Mäder, M. Seeland. 08 Sep 2023. [FedML, AAML]
  • Privacy and Fairness in Federated Learning: on the Perspective of Trade-off. Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu. 25 Jun 2023. [FedML]
  • Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning. Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev. 05 Jun 2023. [AAML, FedML]
  • Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning. Junyi Zhu, Ruicong Yao, Matthew B. Blaschko. 31 May 2023. [FedML]
  • Reconstructing Training Data from Model Gradient, Provably. Zihan Wang, Jason D. Lee, Qi Lei. 07 Dec 2022. [FedML]
  • Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning. Mingyuan Fan, Cen Chen, Chengyu Wang, Ximeng Liu, Wenmeng Zhou, Jun Huang. 05 Dec 2022. [AAML, FedML]
  • TabLeak: Tabular Data Leakage in Federated Learning. Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev. 04 Oct 2022. [FedML]
  • Accelerated Federated Learning with Decoupled Adaptive Optimization. Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou. 14 Jul 2022. [AI4CE, FedML]
  • Data Leakage in Federated Averaging. Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev. 24 Jun 2022. [FedML]
  • A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao. 15 Jun 2022.
  • Gradient Obfuscation Gives a False Sense of Security in Federated Learning. Kai Yue, Richeng Jin, Chau-Wai Wong, D. Baron, H. Dai. 08 Jun 2022. [FedML]
  • Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning. Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor. 18 Feb 2022.
  • LAMP: Extracting Text from Gradients with Language Model Priors. Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev. 17 Feb 2022.