arXiv:1911.07135
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
17 November 2019
Yuheng Zhang, R. Jia, Hengzhi Pei, Wenxiao Wang, Bo Li, D. Song
AAML

Papers citing "The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks" (21 of 71 papers shown)

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
PILM, MIACV · 04 Jul 2021

GraphMI: Extracting Private Graph Data from Graph Neural Networks
Zaixi Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chengqiang Lu, Chuanren Liu, Enhong Chen
05 Jun 2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot, Balázs Pejó, Dayana Spagnuelo
MIACV · 27 Apr 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
MIACV · 26 Apr 2021

See through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, J. Álvarez, Jan Kautz, Pavlo Molchanov
FedML · 15 Apr 2021

Privacy and Trust Redefined in Federated Machine Learning
Pavlos Papadopoulos, Will Abramson, A. Hall, Nikolaos Pitropakis, William J. Buchanan
29 Mar 2021

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo Li
FedML · 20 Mar 2021

Unleashing the Tiger: Inference Attacks on Split Learning
Dario Pasquini, G. Ateniese, M. Bernaschi
FedML · 04 Dec 2020

Feature Inference Attack on Model Predictions in Vertical Federated Learning
Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
FedML, AAML · 20 Oct 2020

R-GAP: Recursive Gradient Attack on Privacy
Junyi Zhu, Matthew Blaschko
FedML · 15 Oct 2020

Knowledge-Enriched Distributional Model Inversion Attacks
Si-An Chen, Mostafa Kahla, R. Jia, Guo-Jun Qi
08 Oct 2020

Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin
AAML · 16 Sep 2020

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
Tianhao Wang, Yuheng Zhang, R. Jia
11 Sep 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020

Privacy-preserving Artificial Intelligence Techniques in Biomedicine
Reihaneh Torkzadehmahani, Reza Nasirigerdeh, David B. Blumenthal, T. Kacprowski, M. List, ..., Harald H. H. W. Schmidt, A. Schwalber, Christof Tschohl, Andrea Wohner, Jan Baumbach
22 Jul 2020

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia
PILM, AAML · 15 Jul 2020

ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing
T. Ryffel, Pierre Tholoniat, D. Pointcheval, Francis R. Bach
FedML · 08 Jun 2020

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi
AAML · 06 May 2020

Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
Xinjian Luo, Xiangqi Zhu
FedML · 27 Apr 2020

Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer
MU · 07 Feb 2020

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators
Yunhui Long, Boxin Wang, Zhuolin Yang, B. Kailkhura, Aston Zhang, C.A. Gunter, Bo Li
21 Jun 2019