Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
International Conference on Computational Linguistics (COLING), 2024
Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang
5 September 2024

Papers citing "Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding"

29 papers

False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize
Cheng Wang, Zeming Wei, Qin Liu, Muhao Chen
AAML · 04 Sep 2025

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks
Kaiyuan Zhang, Siyuan Cheng, Hanxi Guo, Yuetian Chen, Zian Su, ..., Yuntao Du, Charles Fleming, Jayanth Srinivasa, Xiangyu Zhang, Ninghui Li
AAML · 12 Jun 2025

Exploring the limits of strong membership inference attacks on large language models
Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo, Matthew Jagielski, G. Kaissis, ..., Matthieu Meeus, Yves-Alexandre de Montjoye, Franziska Boenisch, Adam Dziedzic, A. Feder Cooper
24 May 2025

On Membership Inference Attacks in Knowledge Distillation
Ziyao Cui, Minxing Zhang, Jian Pei
17 May 2025

Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Haritz Puerto, Martin Gubri, Sangdoo Yun, Seong Joon Oh
MIALM · 31 Oct 2024

ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods
Roy Xie, Junlin Wang, Ruomin Huang, Minxing Zhang, Rong Ge, Jian Pei, Neil Zhenqiang Gong, Bhuwan Dhingra
MIALM · 23 Jun 2024

Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Zheng Zhao, Emilio Monti, Jens Lehmann, H. Assem
04 May 2024

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models
Jingyang Zhang, Jingwei Sun, Eric C. Yeats, Ouyang Yang, Martin Kuo, Jianyi Zhang, Hao Frank Yang, Hai "Helen" Li
03 Apr 2024

DE-COP: Detecting Copyrighted Content in Language Models Training Data
André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li
15 Feb 2024

Low-Cost High-Power Membership Inference Attacks
International Conference on Machine Learning (ICML), 2023
Sajjad Zarifzadeh, Philippe Liu, Reza Shokri
06 Dec 2023

Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Albert Gu, Tri Dao
Mamba · 01 Dec 2023

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier López de Lacalle, Eneko Agirre
27 Oct 2023

Proving Test Set Contamination in Black Box Language Models
International Conference on Learning Representations (ICLR), 2023
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B. Hashimoto
HILM · 26 Oct 2023

Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
International Conference on Learning Representations (ICLR), 2023
Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim
21 Sep 2023

Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities
Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin
24 Aug 2023

Scalable Membership Inference Attacks via Quantile Regression
Neural Information Processing Systems (NeurIPS), 2023
Martín Bertrán, Shuai Tang, Michael Kearns, Jamie Morgenstern, Aaron Roth, Zhiwei Steven Wu
MIACV · 07 Jul 2023

Membership Inference Attacks against Language Models via Neighbourhood Comparison
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
MIALM · 29 May 2023

Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Weijia Shi, Xiaochuang Han, M. Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Yih
HILM · 24 May 2023

Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Kent K. Chang, Mackenzie Cramer, Sandeep Soni, David Bamman
RALM · 28 Apr 2023

Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling
International Conference on Machine Learning (ICML), 2023
Stella Biderman, Hailey Schoelkopf, Quentin G. Anthony, Herbie Bradley, Kyle O'Brien, ..., USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, Oskar van der Wal
03 Apr 2023

LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample
ALMPILM · 27 Feb 2023

Extracting Training Data from Diffusion Models
USENIX Security Symposium (USENIX Security), 2023
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace
DiffM · 30 Jan 2023

GPT-NeoX-20B: An Open-Source Autoregressive Language Model
Sid Black, Stella Biderman, Eric Hallahan, Quentin G. Anthony, Leo Gao, ..., Shivanshu Purohit, Laria Reynolds, J. Tow, Benqi Wang, Samuel Weinbach
14 Apr 2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
MIALM · 08 Mar 2022

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Seth Neel, Florian Tramèr
MIACV, MIALM · 07 Dec 2021

On the Importance of Difficulty Calibration in Membership Inference Attacks
International Conference on Learning Representations (ICLR), 2021
Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles
15 Nov 2021

DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi
MU · 07 May 2021

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
AIMat · 31 Dec 2020

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR, MIALM, MIACV · 18 Oct 2016