
Defending Model Inversion and Membership Inference Attacks via Prediction Purification
arXiv:2005.03915

8 May 2020
Ziqi Yang, Bin Shao, Bohan Xuan, E. Chang, Fan Zhang
    AAML

Papers citing "Defending Model Inversion and Membership Inference Attacks via Prediction Purification"

11 / 11 papers shown
Introducing Model Inversion Attacks on Automatic Speaker Recognition
Karla Pizzi, Franziska Boenisch, U. Sahin, Konstantin Böttinger
09 Jan 2023
On the utility and protection of optimization with differential privacy and classic regularization techniques
Eugenio Lomurno, Matteo Matteucci
07 Sep 2022
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy
Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han
18 Aug 2022
One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy
Dayong Ye, Sheng Shen, Tianqing Zhu, B. Liu, Wanlei Zhou
MIACV
13 Mar 2022
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members
Ismat Jarin, Birhanu Eshete
02 Mar 2022
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino
MIACV
23 Jan 2022
Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
PILM
MIACV
04 Jul 2021
Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
MIACV
14 Mar 2021
TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Seira Hidano, Takao Murakami, Yusuke Kawamoto
MIACV
30 Nov 2020
Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
Tianhao Wang, Yuheng Zhang, R. Jia
11 Sep 2020
Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020