
Membership Inference Attack Susceptibility of Clinical Language Models
arXiv:2104.08305, 16 April 2021
Abhyuday N. Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu

Papers citing "Membership Inference Attack Susceptibility of Clinical Language Models"

28 papers shown
Leverage Unlearning to Sanitize LLMs
Antoine Boutet, Lucas Magnana
24 Oct 2025

Exploring Membership Inference Vulnerabilities in Clinical Large Language Models
Alexander Nemecek, Zebin Yun, Zahra Rahmani, Yaniv Harel, Vipin Chaudhary, Mahmood Sharif, Erman Ayday
21 Oct 2025

The Model's Language Matters: A Comparative Privacy Analysis of LLMs
Abhishek K. Mishra, Antoine Boutet, Lucas Magnana
09 Oct 2025

Membership Inference Attack against Large Language Model-based Recommendation Systems: A New Distillation-based Paradigm
Li Cuihong, Huang Xiaowen, Yin Chuanhuan, Sang Jitao
16 Sep 2025

LoRA-Leak: Membership Inference Attacks Against LoRA Fine-tuned Language Models
Delong Ran, Xinlei He, Tianshuo Cong, Anyu Wang, Cunliang Kong, Xiaoyun Wang
24 Jul 2025

Entropy-Memorization Law: Evaluating Memorization Difficulty of Data in LLMs
Yizhan Huang, Zhe Yang, Meifang Chen, Huang Nianchen, Jianping Zhang, Michael R. Lyu
08 Jul 2025
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2025
Yashothara Shanmugarasa, Ming Ding, M. Chamikara, Thierry Rakotoarivelo
15 Jun 2025

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks
Kaiyuan Zhang, Siyuan Cheng, Hanxi Guo, Yuetian Chen, Zian Su, ..., Yuntao Du, Charles Fleming, Jayanth Srinivasa, Xiangyu Zhang, Ninghui Li
12 Jun 2025

Fragments to Facts: Partial-Information Fragment Inference from LLMs
Lucas Rosenblatt, Bin Han, Robert Wolfe, Bill Howe
20 May 2025

Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Database Security (DBSec), 2025
Hao Du, Shang Liu, Yang Cao
28 Apr 2025

Membership Inference Attacks on Large-Scale Models: A Survey
Hengyu Wu, Yang Cao
25 Mar 2025

Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter
08 Jan 2025
Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions
Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2024
Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen
21 Dec 2024

Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Yujuan Fu, Özlem Uzuner, Meliha Yetisgen, Fei Xia
24 Oct 2024

Detecting Training Data of Large Language Models via Expectation Maximization
Gyuwan Kim, Yang Li, Evangelia Spiliopoulou, Jie Ma, Miguel Ballesteros, William Yang Wang
10 Oct 2024

Undesirable Memorization in Large Language Models: A Survey
Ali Satvaty, Suzan Verberne, Fatih Turkmen
03 Oct 2024

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
23 Sep 2024

Detecting Pretraining Data from Large Language Models
International Conference on Learning Representations (ICLR), 2023
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer
25 Oct 2023
Gradient-Free Privacy Leakage in Federated Language Models through Selective Weight Tampering
Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
24 Oct 2023

Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models
USENIX Security Symposium (USENIX Security), 2023
Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye
23 Oct 2023

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A. Inan, Janardhan Kulkarni, Helen Zhou
20 Oct 2023

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller
27 Sep 2023

Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara
25 May 2023

Analyzing Leakage of Personally Identifiable Information in Language Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
01 Feb 2023

Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework
Junzhuo Li, Xinwei Wu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong
16 Dec 2022
Differentially Private Decoding in Large Language Models
Jimit Majmudar, Christophe Dupuy, Charith Peris, S. Smaili, Rahul Gupta, R. Zemel
26 May 2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
08 Mar 2022

Membership Inference Attacks on Machine Learning: A Survey
ACM Computing Surveys (CSUR), 2021
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
14 Mar 2021