Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks

15 June 2019
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
Topics: SILM

Papers citing "Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks"

5 papers shown:

  • Reconstructing Training Data from Trained Neural Networks — Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani (15 Jun 2022)
  • Privacy Leakage of Adversarial Training Models in Federated Learning Systems — Jingyang Zhang, Yiran Chen, Hai Helen Li (21 Feb 2022) [FedML, PICV]
  • Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations — Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein (31 Jan 2022)
  • On the human-recognizability phenomenon of adversarially trained deep image classifiers — Jonathan W. Helland, Nathan M. VanHoudnos (18 Dec 2020) [AAML]
  • Conditional Image Synthesis With Auxiliary Classifier GANs — Augustus Odena, C. Olah, Jonathon Shlens (30 Oct 2016) [GAN]