On the Importance of Difficulty Calibration in Membership Inference Attacks

15 November 2021
Lauren Watson, Chuan Guo, Graham Cormode, Alexandre Sablayrolles

Papers citing "On the Importance of Difficulty Calibration in Membership Inference Attacks"

24 papers shown

  • Measuring Déjà vu Memorization Efficiently. Narine Kokhlikyan, Bargav Jayaraman, Florian Bordes, Chuan Guo, Kamalika Chaudhuri. 08 Apr 2025.
  • Is My Text in Your AI Model? Gradient-based Membership Inference Test applied to LLMs. Gonzalo Mancera, Daniel DeAlcala, Julian Fierrez, Ruben Tolosana, Aythami Morales. 10 Mar 2025.
  • The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text. Matthieu Meeus, Lukas Wutschitz, Santiago Zanella-Béguelin, Shruti Tople, Reza Shokri. 24 Feb 2025.
  • Understanding and Mitigating Membership Inference Risks of Neural Ordinary Differential Equations. Sanghyun Hong, Fan Wu, A. Gruber, Kookjin Lee. 12 Jan 2025.
  • Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method. Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng. 23 Sep 2024.
  • Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding. Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang. 05 Sep 2024.
  • Recent Advances in Attack and Defense Approaches of Large Language Models. Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang. 05 Sep 2024. Tags: PILM, AAML.
  • Noisy Neighbors: Efficient membership inference attacks against LLMs. Filippo Galli, Luca Melis, Tommaso Cucinotta. 24 Jun 2024.
  • Data Reconstruction: When You See It and When You Don't. Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, Eliad Tsfadia. 24 May 2024. Tags: AAML.
  • Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models. Jingyang Zhang, Jingwei Sun, Eric C. Yeats, Ouyang Yang, Martin Kuo, Jianyi Zhang, Hao Frank Yang, Hai Li. 03 Apr 2024.
  • Watermarking Makes Language Models Radioactive. Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon. 22 Feb 2024. Tags: WaLM.
  • White-box Membership Inference Attacks against Diffusion Models. Yan Pang, Tianhao Wang, Xu Kang, Mengdi Huai, Yang Zhang. 11 Aug 2023. Tags: AAML, DiffM.
  • Membership inference attack with relative decision boundary distance. Jiacheng Xu, Chengxiang Tan. 07 Jun 2023.
  • A Note On Interpreting Canary Exposure. Matthew Jagielski. 31 May 2023.
  • Measuring Forgetting of Memorized Training Examples. Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang. 30 Jun 2022. Tags: TDI.
  • Membership Inference Attack Using Self Influence Functions. Gilad Cohen, Raja Giryes. 26 May 2022. Tags: TDI.
  • Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini. 31 Mar 2022. Tags: MIACV.
  • An Efficient Subpopulation-based Membership Inference Attack. Shahbaz Rezaei, Xin Liu. 04 Mar 2022. Tags: MIACV.
  • Deduplicating Training Data Mitigates Privacy Risks in Language Models. Nikhil Kandpal, Eric Wallace, Colin Raffel. 14 Feb 2022. Tags: PILM, MU.
  • Membership Inference Attacks From First Principles. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr. 07 Dec 2021. Tags: MIACV, MIALM.
  • Opacus: User-Friendly Differential Privacy Library in PyTorch. Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, ..., Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov. 25 Sep 2021. Tags: VLM.
  • Federated Learning with Buffered Asynchronous Aggregation. John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba. 11 Jun 2021. Tags: FedML.
  • Extracting Training Data from Large Language Models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel. 14 Dec 2020. Tags: MLAU, SILM.
  • Systematic Evaluation of Privacy Risks of Machine Learning Models. Liwei Song, Prateek Mittal. 24 Mar 2020. Tags: MIACV.