ResearchTrend.AI
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 June 2018
A. Salem
Yang Zhang
Mathias Humbert
Pascal Berrang
Mario Fritz
Michael Backes

Papers citing "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"

Showing 50 of 518 citing papers:
Graph Unlearning
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
Conference on Computer and Communications Security (CCS), 2021. 27 Mar 2021

The Influence of Dropout on Membership Inference in Differentially Private Models
Erick Galinkin
16 Mar 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
ACM Computing Surveys (CSUR), 2021. 14 Mar 2021

On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models
Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, M. Kâafar, Darren Webb, Peter Dickinson
European Symposium on Security and Privacy (EuroS&P), 2021. 12 Mar 2021

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
William Paul, Yinzhi Cao, Miaomiao Zhang, Philippe Burlina
04 Mar 2021

DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing
Wenxiao Wang, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, Basel Alomair, R. Jia
Proceedings on Privacy Enhancing Technologies (PoPETs), 2021. 02 Mar 2021

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?
R. Guerraoui, Nirupam Gupta, Rafael Pinot, Sébastien Rouault, John Stephan
ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC), 2021. 16 Feb 2021

Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey
Yuantian Miao, Chao Chen, Lei Pan, Qing-Long Han, Jun Zhang, Yang Xiang
ACM Computing Surveys (CSUR), 2021. 16 Feb 2021

Membership Inference Attacks are Easier on Difficult Problems
Avital Shafran, Shmuel Peleg, Yedid Hoshen
IEEE International Conference on Computer Vision (ICCV), 2021. 15 Feb 2021

Node-Level Membership Inference Attacks Against Graph Neural Networks
Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang
10 Feb 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang
Conference on Computer and Communications Security (CCS), 2021. 08 Feb 2021

On Utility and Privacy in Synthetic Genomic Data
Bristena Oprisanu, Georgi Ganev, Emiliano De Cristofaro
Network and Distributed System Security Symposium (NDSS), 2021. 05 Feb 2021

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
USENIX Security Symposium (USENIX Security), 2021. 04 Feb 2021

Membership Inference Attack on Graph Neural Networks
Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla
International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (ICPSISA), 2021. 17 Jan 2021

Training Data Leakage Analysis in Language Models
Huseyin A. Inan, Osman Ramadan, Lukas Wutschitz, Daniel Jones, Victor Rühle, James Withers, Robert Sim
14 Jan 2021

Model Extraction and Defenses on Generative Adversarial Networks
Hailong Hu, Jun Pang
06 Jan 2021

Practical Blind Membership Inference Attack via Differential Comparisons
Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
Network and Distributed System Security Symposium (NDSS), 2021. 05 Jan 2021

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi
IEEE Design & Test (DT), 2020. 04 Jan 2021

Federated Unlearning
Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, Jiangchuan Liu
27 Dec 2020

FedServing: A Federated Prediction Serving Framework Based on Incentive Mechanism
Jiasi Weng, Jian Weng, Hongwei Huang, Chengjun Cai, Cong Wang
IEEE Conference on Computer Communications (INFOCOM), 2020. 19 Dec 2020

TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Seira Hidano, Takao Murakami, Yusuke Kawamoto
IEEE International Joint Conference on Neural Networks (IJCNN), 2020. 30 Nov 2020

Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Mingfu Xue, Chengxiang Yuan, Can He, Zhiyu Wu, Yushu Zhang, Yanfeng Guo, Weiqiang Liu
IEEE Transactions on Emerging Topics in Computing (IEEE TETC), 2020. 27 Nov 2020

When Machine Learning Meets Privacy: A Survey and Outlook
B. Liu, Ming Ding, Sina Shaham, W. Rahayu, F. Farokhi, Zihuai Lin
ACM Computing Surveys (CSUR), 2020. 24 Nov 2020

Synthetic Data – Anonymisation Groundhog Day
Theresa Stadler, Bristena Oprisanu, Carmela Troncoso
USENIX Security Symposium (USENIX Security), 2020. 13 Nov 2020

On the Privacy Risks of Algorithmic Fairness
Hong Chang, Reza Shokri
07 Nov 2020

FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries
Seng Pei Liew, Tsubasa Takahashi
27 Oct 2020

Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis
Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Zhemin Yang
USENIX Security Symposium (USENIX Security), 2020. 26 Oct 2020

A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric
Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, Nathanael Teissier
22 Oct 2020

Feature Inference Attack on Model Predictions in Vertical Federated Learning
Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
IEEE International Conference on Data Engineering (ICDE), 2020. 20 Oct 2020

Image Obfuscation for Privacy-Preserving Machine Learning
Mathilde Raynal, R. Achanta, Mathias Humbert
20 Oct 2020

Security and Privacy Considerations for Machine Learning Models Deployed in the Government and Public Sector (white paper)
Nader Sehatbakhsh, E. Daw, O. Savas, Amin Hassanzadeh, I. Mcculloh
12 Oct 2020

Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
A. Salem, Michael Backes, Yang Zhang
07 Oct 2020

GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
Vasisht Duddu, A. Boutet, Virat Shejwalkar
ACM Symposium on Applied Computing (SAC), 2020. 02 Oct 2020

Quantifying Privacy Leakage in Graph Embedding
Vasisht Duddu, A. Boutet, Virat Shejwalkar
International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous), 2020. 02 Oct 2020

On Primes, Log-Loss Scores and (No) Privacy
Abhinav Aggarwal, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
17 Sep 2020

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning
Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang
10 Sep 2020

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
Network and Distributed System Security Symposium (NDSS), 2020. 08 Sep 2020

A Comprehensive Analysis of Information Leakage in Deep Transfer Learning
Cen Chen, Bingzhe Wu, Minghui Qiu, Li Wang, Jun Zhou
04 Sep 2020

Enclave-Aware Compartmentalization and Secure Sharing with Sirius
Zahra Tarkhani, Anil Madhavapeddy
03 Sep 2020

Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz
01 Sep 2020

Against Membership Inference Attack: Pruning is All You Need
Yijue Wang, Chenghong Wang, Zigeng Wang, Shangli Zhou, Hang Liu, J. Bi, Caiwen Ding, Sanguthevar Rajasekaran
International Joint Conference on Artificial Intelligence (IJCAI), 2020. 28 Aug 2020

Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning
Benjamin Zi Hao Zhao, M. Kâafar, N. Kourtellis
20 Aug 2020

Data Minimization for GDPR Compliance in Machine Learning Models
Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
AI and Ethics (AE), 2020. 06 Aug 2020

The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Evgenios M. Kornaropoulos, Silei Ren, R. Tamassia
01 Aug 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
Conference on Computer and Communications Security (CCS), 2020. 30 Jul 2020

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
International Conference on Machine Learning (ICML), 2020. 28 Jul 2020

Anonymizing Machine Learning Models
Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
26 Jul 2020

How Does Data Augmentation Affect Privacy in Machine Learning?
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu
21 Jul 2020

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia
ACM Computing Surveys (CSUR), 2020. 15 Jul 2020

Sharing Models or Coresets: A Study based on Membership Inference Attack
Hanlin Lu, Wei-Han Lee, T. He, Maroun Touma, Kevin S. Chan
06 Jul 2020