Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks

8 March 2022
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri. [MIALM]
arXiv:2203.03929

Papers citing "Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks"

22 of 122 citing papers shown

"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak
  Prompts on Large Language Models
"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models
Xinyue Shen
Zhenpeng Chen
Michael Backes
Yun Shen
Yang Zhang
SILM
237
367
0
07 Aug 2023
What can we learn from Data Leakage and Unlearning for Law?
Jaydeep Borkar. [PILM, MU] 19 Jul 2023

Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation
Zhexin Zhang, Jiaxin Wen, Minlie Huang. 10 Jul 2023

Membership Inference Attacks against Language Models via Neighbourhood Comparison
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick. [MIALM] 29 May 2023

Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara. 25 May 2023

Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization
Aman Priyanshu, Supriti Vijay, Ayush Kumar, Rakshit Naidu, Fatemehsadat Mireshghallah. [SILM] 24 May 2023

Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken, Steffen Eger, Ivan Habernal. [SILM] 24 May 2023

Watermarking Text Data on Large Language Models for Dataset Copyright
Yixin Liu, Hongsheng Hu, Xun Chen, Xuyun Zhang, Lichao Sun. [WaLM] 22 May 2023

The "code'' of Ethics:A Holistic Audit of AI Code Generators
The "code'' of Ethics:A Holistic Audit of AI Code Generators
Wanlun Ma
Yiliao Song
Minhui Xue
Sheng Wen
Yang Xiang
81
11
0
22 May 2023
Smaller Language Models are Better Black-box Machine-Generated Text Detectors
Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick. [DeLMO] 17 May 2023

Dual Use Concerns of Generative AI and Large Language Models
A. Grinbaum, Laurynas Adomaitis. [MedIm, AI4CE] 13 May 2023

Does Prompt-Tuning Language Model Ensure Privacy?
Shangyu Xie, Wei Dai, Esha Ghosh, Sambuddha Roy, Dan Schwartz, Kim Laine. [SILM] 07 Apr 2023

Complex QA and language models hybrid architectures, Survey
Xavier Daull, P. Bellot, Emmanuel Bruno, Vincent Martin, Elisabeth Murisasco. [ELM] 17 Feb 2023

Bounding Training Data Reconstruction in DP-SGD
Jamie Hayes, Saeed Mahloujifar, Borja Balle. [AAML, FedML] 14 Feb 2023

Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge
Ali Al-Kaswan, Maliheh Izadi, Arie van Deursen. [SILM] 13 Feb 2023

Analyzing Leakage of Personally Identifiable Information in Language Models
Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin. [PILM] 01 Feb 2023

Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena. [OOD, MIACV] 18 Sep 2022

A Blessing of Dimensionality in Membership Inference through Regularization
Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk. 27 May 2022

Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick. [AAML] 25 May 2022

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan. [TDI] 22 May 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini. [MIACV] 31 Mar 2022

Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference
Jasper Tan, Blake Mason, Hamid Javadi, Richard G. Baraniuk. [FedML] 02 Feb 2022