ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 June 2018
A. Salem
Yang Zhang
Mathias Humbert
Pascal Berrang
Mario Fritz
Michael Backes
MIACV, MIALM

Papers citing "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"

50 / 519 papers shown
Sharing Models or Coresets: A Study based on Membership Inference Attack
Hanlin Lu
Wei-Han Lee
T. He
Maroun Touma
Kevin S. Chan
MIACV, FedML
159
18
0
06 Jul 2020
Reducing Risk of Model Inversion Using Privacy-Guided Training
Abigail Goldsteen
Gilad Ezov
Ariel Farkash
148
5
0
29 Jun 2020
On the Effectiveness of Regularization Against Membership Inference Attacks
Yigitcan Kaya
Sanghyun Hong
Tudor Dumitras
179
31
0
09 Jun 2020
Sponge Examples: Energy-Latency Attacks on Neural Networks
Ilia Shumailov
Yiren Zhao
Daniel Bates
Nicolas Papernot
Robert D. Mullins
Ross J. Anderson
SILM
241
157
0
05 Jun 2020
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. Asia-Pacific Computer Systems Architecture Conference (ACSA), 2020
Xiaoyi Chen
A. Salem
Dingfan Chen
Michael Backes
Shiqing Ma
Qingni Shen
Zhonghai Wu
Yang Zhang
SILM
236
300
0
01 Jun 2020
On the Difficulty of Membership Inference Attacks
Shahbaz Rezaei
Xin Liu
MIACV
168
15
0
27 May 2020
Revisiting Membership Inference Under Realistic Assumptions
Bargav Jayaraman
Lingxiao Wang
Katherine Knipmeyer
Quanquan Gu
David Evans
238
160
0
21 May 2020
An Overview of Privacy in Machine Learning
Emiliano De Cristofaro
SILM
163
98
0
18 May 2020
DAMIA: Leveraging Domain Adaptation as a Defense against Membership Inference Attacks
Hongwei Huang
Weiqi Luo
Guoqiang Zeng
J. Weng
Yue Zhang
Anjia Yang
AAML
163
26
0
16 May 2020
Defending Model Inversion and Membership Inference Attacks via Prediction Purification
Ziqi Yang
Bin Shao
Bohan Xuan
E. Chang
Fan Zhang
AAML
146
79
0
08 May 2020
When Machine Unlearning Jeopardizes Privacy. Conference on Computer and Communications Security (CCS), 2020
Min Chen
Zhikun Zhang
Tianhao Wang
Michael Backes
Mathias Humbert
Yang Zhang
MIACV
291
288
0
05 May 2020
Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 2020
Xinjian Luo
Xiangqi Zhu
FedML
659
30
0
27 Apr 2020
Privacy in Deep Learning: A Survey
Fatemehsadat Mirshghallah
Mohammadkazem Taram
Praneeth Vepakomma
Abhishek Singh
Ramesh Raskar
H. Esmaeilzadeh
FedML
441
148
0
25 Apr 2020
DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments. ACM SIGMOBILE International Conference on Mobile Systems, Applications, and Services (MobiSys), 2020
Fan Mo
Ali Shahin Shamsabadi
Kleomenis Katevas
Soteris Demetriou
Ilias Leontiadis
Andrea Cavallaro
Hamed Haddadi
FedML
164
214
0
12 Apr 2020
Information Leakage in Embedding Models. Conference on Computer and Communications Security (CCS), 2020
Congzheng Song
A. Raghunathan
MIACV
437
321
0
31 Mar 2020
Learn to Forget: Machine Unlearning via Neuron Masking. IEEE Transactions on Dependable and Secure Computing (TDSC), 2020
Yang Liu
Zhuo Ma
Ximeng Liu
Jian Liu
Zhongyuan Jiang
Jianfeng Ma
Philip Yu
K. Ren
MU
220
80
0
24 Mar 2020
Systematic Evaluation of Privacy Risks of Machine Learning Models. USENIX Security Symposium (USENIX Security), 2020
Liwei Song
Prateek Mittal
MIACV
684
456
0
24 Mar 2020
Dynamic Backdoor Attacks Against Machine Learning Models. European Symposium on Security and Privacy (EuroS&P), 2020
A. Salem
Rui Wen
Michael Backes
Shiqing Ma
Yang Zhang
AAML
329
306
0
07 Mar 2020
Membership Inference Attacks and Defenses in Classification Models
Jiacheng Li
Ninghui Li
Bruno Ribeiro
171
39
0
27 Feb 2020
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong
Varun Chandrasekaran
Yigitcan Kaya
Tudor Dumitras
Nicolas Papernot
AAML
207
150
0
26 Feb 2020
Approximate Data Deletion from Machine Learning Models. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Zachary Izzo
Mary Anne Smart
Kamalika Chaudhuri
James Zou
MU
258
316
0
24 Feb 2020
Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions. IEEE Transactions on Dependable and Secure Computing (TDSC), 2020
Minghui Li
Sherman S. M. Chow
Shengshan Hu
Yuejing Yan
Minxin Du
Peng Kuang
288
53
0
22 Feb 2020
Data and Model Dependencies of Membership Inference Attack
Shakila Mahjabin Tonni
Dinusha Vatsalan
F. Farokhi
Dali Kaafar
Zhigang Lu
Gioacchino Tangari
324
23
0
17 Feb 2020
Modelling and Quantifying Membership Information Leakage in Machine Learning
F. Farokhi
M. Kâafar
AAML, FedML, MIACV
223
26
0
29 Jan 2020
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack
Bo Zhang
Ruotong Yu
Haipei Sun
Yanying Li
Jun Xu
Wendy Hui Wang
AAML
129
14
0
24 Jan 2020
On the Resilience of Biometric Authentication Systems against Random Inputs. Network and Distributed System Security Symposium (NDSS), 2020
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
M. Kâafar
AAML
267
26
0
13 Jan 2020
Membership Inference Attacks Against Object Detection Models
Yeachan Park
Myung-joo Kang
MIACV
96
6
0
12 Jan 2020
Privacy Attacks on Network Embeddings
Michael Ellers
Michael Cochez
Tobias Schumacher
M. Strohmaier
Florian Lemmerich
AAML
169
14
0
23 Dec 2019
Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation. European Conference on Computer Vision (ECCV), 2019
Yang He
Shadi Rahimian
Bernt Schiele
Mario Fritz
MIACV
171
57
0
20 Dec 2019
Analyzing Information Leakage of Updates to Natural Language Models. Conference on Computer and Communications Security (CCS), 2019
Santiago Zanella Béguelin
Lukas Wutschitz
Shruti Tople
Victor Rühle
Andrew Paverd
O. Ohrimenko
Boris Köpf
Marc Brockschmidt
ELM, MIACV, FedML, PILM, KELM
377
135
0
17 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He
Guozhu Meng
Kai Chen
Xingbo Hu
Jinwen He
AAML, ELM
247
15
0
28 Nov 2019
Survey of Attacks and Defenses on Edge-Deployed Neural Networks. IEEE Conference on High Performance Extreme Computing (HPEC), 2019
Mihailo Isakov
V. Gadepally
K. Gettings
Michel A. Kinsy
AAML
133
32
0
27 Nov 2019
Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability. International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (ICPSISA), 2019
Stacey Truex
Ling Liu
Mehmet Emre Gursoy
Wenqi Wei
Lei Yu
MIACV
154
56
0
21 Nov 2019
Privacy Leakage Avoidance with Switching Ensembles. IEEE Military Communications Conference (MILCOM), 2019
R. Izmailov
Peter Lin
Chris Mesterharm
S. Basu
139
2
0
18 Nov 2019
Revocable Federated Learning: A Benchmark of Federated Forest
Yang Liu
Zhuo Ma
Ximeng Liu
Zhuzhu Wang
Siqi Ma
Ken Ren
FedML, MU
159
11
0
08 Nov 2019
Reducing audio membership inference attack accuracy to chance: 4 defenses
M. Lomnitz
Nina Lopatina
Paul Gamble
Z. Hampel-Arias
Lucas Tindall
Felipe A. Mejia
M. Barrios
AAML
105
0
0
31 Oct 2019
Quantifying (Hyper) Parameter Leakage in Machine Learning. IEEE International Conference on Multimedia Big Data (ICMBD), 2019
Vasisht Duddu
D. V. Rao
AAML, MIACV, FedML
137
5
0
31 Oct 2019
Fault Tolerance of Neural Networks in Adversarial Settings. Journal of Intelligent & Fuzzy Systems (JIFS), 2019
Vasisht Duddu
N. Pillai
D. V. Rao
V. Balas
SILM, AAML
186
12
0
30 Oct 2019
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
Congzheng Song
Reza Shokri
MIACV
94
5
0
27 Sep 2019
Alleviating Privacy Attacks via Causal Learning. International Conference on Machine Learning (ICML), 2019
Shruti Tople
Amit Sharma
A. Nori
MIACV, OOD
221
32
0
27 Sep 2019
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. Conference on Computer and Communications Security (CCS), 2019
Jinyuan Jia
Ahmed Salem
Michael Backes
Yang Zhang
Neil Zhenqiang Gong
359
438
0
23 Sep 2019
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia
Neil Zhenqiang Gong
AAML, SILM
167
20
0
17 Sep 2019
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
Dingfan Chen
Ning Yu
Yang Zhang
Mario Fritz
198
52
0
09 Sep 2019
High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium (USENIX Security), 2019
Matthew Jagielski
Nicholas Carlini
David Berthelot
Alexey Kurakin
Nicolas Papernot
MLAU, MIACV
338
425
0
03 Sep 2019
White-box vs Black-box: Bayes Optimal Strategies for Membership Inference. International Conference on Machine Learning (ICML), 2019
Alexandre Sablayrolles
Matthijs Douze
Yann Ollivier
Cordelia Schmid
Edouard Grave
MIACV
197
420
0
29 Aug 2019
On Inferring Training Data Attributes in Machine Learning Models
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
Raghav Bhaskar
M. Kâafar
TDI, MIACV
129
12
0
28 Aug 2019
Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference. USENIX Security Symposium (USENIX Security), 2019
Klas Leino
Matt Fredrikson
MIACV
304
307
0
27 Jun 2019
Adversarial training approach for local data debiasing
Ulrich Aïvodji
F. Bidet
Sébastien Gambs
Rosin Claude Ngueveu
Alain Tapp
189
10
0
19 Jun 2019
Membership Privacy for Machine Learning Models Through Knowledge Transfer
Virat Shejwalkar
Amir Houmansadr
168
12
0
15 Jun 2019
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia
Paul Gamble
Z. Hampel-Arias
M. Lomnitz
Nina Lopatina
Lucas Tindall
M. Barrios
SILM
144
21
0
15 Jun 2019