Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
arXiv:1906.06449 · 15 June 2019
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
Papers citing "Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks" (5 papers shown):
- Reconstructing Training Data from Trained Neural Networks — Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani (15 Jun 2022)
- Privacy Leakage of Adversarial Training Models in Federated Learning Systems — Jingyang Zhang, Yiran Chen, Hai Helen Li (21 Feb 2022) [FedML, PICV]
- Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations — Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein (31 Jan 2022)
- On the human-recognizability phenomenon of adversarially trained deep image classifiers — Jonathan W. Helland, Nathan M. VanHoudnos (18 Dec 2020) [AAML]
- Conditional Image Synthesis With Auxiliary Classifier GANs — Augustus Odena, C. Olah, Jonathon Shlens (30 Oct 2016) [GAN]