MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
arXiv:1909.10594
23 September 2019
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

Papers citing "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples"

15 / 65 papers shown
  1. A Review of Confidentiality Threats Against Embedded Neural Network Models
     Raphael Joud, Pierre-Alain Moëllic, Rémi Bernhard, J. Rigaud
     28 · 6 · 0 · 04 May 2021
  2. Exploiting Explanations for Model Inversion Attacks [MIACV]
     Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
     21 · 82 · 0 · 26 Apr 2021
  3. Membership Inference Attacks on Machine Learning: A Survey [MIACV]
     Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
     35 · 412 · 0 · 14 Mar 2021
  4. Quantifying and Mitigating Privacy Risks of Contrastive Learning
     Xinlei He, Yang Zhang
     13 · 51 · 0 · 08 Feb 2021
  5. Practical Blind Membership Inference Attack via Differential Comparisons [MIACV]
     Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
     30 · 119 · 0 · 05 Jan 2021
  6. TransMIA: Membership Inference Attacks Using Transfer Shadow Training [MIACV]
     Seira Hidano, Takao Murakami, Yusuke Kawamoto
     23 · 13 · 0 · 30 Nov 2020
  7. A Distributed Privacy-Preserving Learning Dynamics in General Social Networks [FedML]
     Youming Tao, Shuzhen Chen, Feng Li, Dongxiao Yu, Jiguo Yu, Hao Sheng
     19 · 3 · 0 · 15 Nov 2020
  8. Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes [AAML]
     Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
     27 · 5 · 0 · 26 Oct 2020
  9. Membership Leakage in Label-Only Exposures
     Zheng Li, Yang Zhang
     23 · 237 · 0 · 30 Jul 2020
  10. A Survey of Privacy Attacks in Machine Learning [PILM, AAML]
      M. Rigaki, Sebastian Garcia
      33 · 213 · 0 · 15 Jul 2020
  11. Revisiting Membership Inference Under Realistic Assumptions
      Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David E. Evans
      16 · 147 · 0 · 21 May 2020
  12. Learn to Forget: Machine Unlearning via Neuron Masking [MU]
      Yang Liu, Zhuo Ma, Ximeng Liu, Jian-wei Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, K. Ren
      20 · 61 · 0 · 24 Mar 2020
  13. Systematic Evaluation of Privacy Risks of Machine Learning Models [MIACV]
      Liwei Song, Prateek Mittal
      196 · 358 · 0 · 24 Mar 2020
  14. Dynamic Backdoor Attacks Against Machine Learning Models [AAML]
      A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
      21 · 269 · 0 · 07 Mar 2020
  15. Adversarial examples in the physical world [SILM, AAML]
      Alexey Kurakin, Ian Goodfellow, Samy Bengio
      287 · 5,837 · 0 · 08 Jul 2016