arXiv: 1802.08232 (v3, latest)
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
22 February 2018
Nicholas Carlini
Chang Liu
Úlfar Erlingsson
Jernej Kos
Dawn Song
Papers citing "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks" (showing 50 of 790)
SA-ADP: Sensitivity-Aware Adaptive Differential Privacy for Large Language Models. Stella Etuk, Ashraf Matrawy. 01 Dec 2025.
How do we measure privacy in text? A survey of text anonymization metrics. Yaxuan Ren, Krithika Ramesh, Yaxing Yao, Anjalie Field. Tags: AILaw. 30 Nov 2025.
Membership Inference Attacks Beyond Overfitting. Mona Khalil, Alberto Blanco-Justicia, N. Jebreel, Josep Domingo-Ferrer. Tags: MIALM. 20 Nov 2025.
Differentially Private In-Context Learning with Nearest Neighbor Search. A. Koskela, Tejas D. Kulkarni, Laith Zumot. 06 Nov 2025.
Black-Box Membership Inference Attack for LVLMs via Prior Knowledge-Calibrated Memory Probing. Jinhua Yin, Peiru Yang, Chen Yang, Huili Wang, Zhiyang Hu, Shangguang Wang, Yongfeng Huang, Tao Qi. 03 Nov 2025.
EL-MIA: Quantifying Membership Inference Risks of Sensitive Entities in LLMs. Ali Satvaty, Suzan Verberne, Fatih Turkmen. Tags: MIALM. 31 Oct 2025.
Hallucinations in Bibliographic Recommendation: Citation Frequency as a Proxy for Training Data Redundancy. Junichiro Niimi. Tags: HILM, RALM. 29 Oct 2025.
A Survey on Unlearning in Large Language Models. Ruichen Qiu, Jiajun Tan, Jiayue Pu, Honglin Wang, Xiao-Shan Gao, Fei Sun. Tags: MU, AILaw, PILM. 29 Oct 2025.
From Memorization to Reasoning in the Spectrum of Loss Curvature. Jack Merullo, Srihita Vatsavaya, Lucius Bushnaq, Owen Lewis. 28 Oct 2025.
Leverage Unlearning to Sanitize LLMs. Antoine Boutet, Lucas Magnana. Tags: MU, MedIm. 24 Oct 2025.
Blackbox Model Provenance via Palimpsestic Membership Inference. Rohith Kuditipudi, Jing-ling Huang, Sally Zhu, Diyi Yang, Christopher Potts, Abigail Z. Jacobs. 22 Oct 2025.
The Tail Tells All: Estimating Model-Level Membership Inference Vulnerability Without Reference Models. Euodia Dodd, Nataša Krčo, Igor Shilov, Yves-Alexandre de Montjoye. 22 Oct 2025.
Memorizing Long-tail Data Can Help Generalization Through Composition. Mo Zhou, Haoyang Ma, Rong Ge. Tags: TDI. 18 Oct 2025.
The Hidden Cost of Modeling P(X): Vulnerability to Membership Inference Attacks in Generative Text Classifiers. Owais Makroo, Siva Rajesh Kasa, Sumegh Roychowdhury, Karan Gupta, Nikhil Pattisapu, Santhosh Kumar Kasa, Sumit Negi. Tags: SILM. 17 Oct 2025.
An Investigation of Memorization Risk in Healthcare Foundation Models. S. Tonekaboni, Lena Stempfle, Adibvafa Fallahpour, Walter Gerych, Elisa Kreiss. 14 Oct 2025.
Early Detection and Reduction of Memorisation for Domain Adaptation and Instruction Tuning. Dean L. Slack, Noura Al Moubayed. 13 Oct 2025.
CoSPED: Consistent Soft Prompt Targeted Data Extraction and Defense. Yang Zhuochen, Fok Kar Wai, Thing Vrizlynn. Tags: AAML, SILM. 13 Oct 2025.
Secret-Protected Evolution for Differentially Private Synthetic Text Generation. Tianze Wang, Zhaoyu Chen, Jian Du, Yingtai Xiao, Linjun Zhang, Qiang Yan. Tags: SyDa. 13 Oct 2025.
The Model's Language Matters: A Comparative Privacy Analysis of LLMs. Abhishek K. Mishra, Antoine Boutet, Lucas Magnana. Tags: PILM. 09 Oct 2025.
Exploring Cross-Client Memorization of Training Data in Large Language Models for Federated Learning. Tinnakit Udsa, Can Udomcharoenchaikit, Patomporn Payoungkhamdee, Sarana Nutanong, Norrathep Rattanavipanon. Tags: FedML. 09 Oct 2025.
LLM-Assisted Modeling of Semantic Web-Enabled Multi-Agents Systems with AJAN. Hacane Hechehouche, Andre Antakli, Matthias Klusch. Tags: LLMAG, 3DV. 08 Oct 2025.
Data Provenance Auditing of Fine-Tuned Large Language Models with a Text-Preserving Technique. Yanming Li, Seifeddine Ghozzi, Cédric Eichler, Nicolas Anciaux, Alexandra Bensamoun, Lorena Gonzalez-Manzano. Tags: WaLM. 07 Oct 2025.
Private and Fair Machine Learning: Revisiting the Disparate Impact of Differentially Private SGD. Lea Demelius, Dominik Kowald, Simone Kopeinik, Roman Kern, A. Trugler. Tags: FedML. 02 Oct 2025.
Towards Verifiable Federated Unlearning: Framework, Challenges, and The Road Ahead. Thanh Linh Nguyen, Marcela Tuler de Oliveira, An Braeken, Aaron Yi Ding, Quoc-Viet Pham. Tags: MU. 01 Oct 2025.
"We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe
Symposium On Usable Privacy and Security (SOUPS), 2025
Alexandra Klymenko
Stephen Meisenbacher
Patrick Gage Kelley
Sai Teja Peddinti
Kurt Thomas
Florian Matthes
107
0
0
01 Oct 2025
Adaptive Token-Weighted Differential Privacy for LLMs: Not All Tokens Require Equal Protection. Manjiang Yu, Priyanka Singh, Xue Li, Yang Cao. Tags: AAML. 27 Sep 2025.
Non-Linear Trajectory Modeling for Multi-Step Gradient Inversion Attacks in Federated Learning. Li Xia, Zheng Liu, Sili Huang, Wei Tang, Xuan Liu. Tags: AAML. 26 Sep 2025.
Functional Encryption in Secure Neural Network Training: Data Leakage and Practical Mitigations. Alexandru Ioniţă, Andreea Ioniţă. Tags: FedML. 25 Sep 2025.
No Prior, No Leakage: Revisiting Reconstruction Attacks in Trained Neural Networks. Yehonatan Refael, Guy Smorodinsky, Ofir Lindenbaum, Itay Safran. Tags: MIACV, AAML. 25 Sep 2025.
GEP: A GCG-Based method for extracting personally identifiable information from chatbots built on small language models. Jieli Zhu, Vi Ngoc-Nha Tran. 25 Sep 2025.
Efficiently Attacking Memorization Scores. Tue Do, Varun Chandrasekaran, Daniel Alabi. Tags: TDI, AAML. 24 Sep 2025.
Memory in Large Language Models: Mechanisms, Evaluation and Evolution. D. Zhang, Wendong Li, Kani Song, Jiaye Lu, Gang Li, Liuchun Yang, Sheng Li. Tags: KELM. 23 Sep 2025.
SynBench: A Benchmark for Differentially Private Text Generation. Yidan Sun, Viktor Schlegel, Srinivasan Nandakumar, Iqra Zahid, Yuping Wu, ..., Jie Zhang, Warren Del-Pinto, Goran Nenadic, S. Lam, Anil A Bharath. Tags: SyDa. 18 Sep 2025.
Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning. Zhaoyang Chu, Yao Wan, Z. Zhang, Di Wang, Zhou Yang, H. Zhang, Pan Zhou, Xuanhua Shi, Hai Jin, David Lo. Tags: MU, AAML. 17 Sep 2025.
Why Data Anonymization Has Not Taken Off. Customer Needs and Solutions (CNS), 2025. Matthew J. Schneider, James Bailie, Dawn Iacobucci. 12 Sep 2025.
Differentially Private Decentralized Dataset Synthesis Through Randomized Mixing with Correlated Noise. Utsab Saha, Tanvir Muntakim Tonoy, Hafiz Imtiaz. 12 Sep 2025.
Generative Data Refinement: Just Ask for Better Data. Minqi Jiang, João G. M. Araújo, Will Ellsworth, Sian Gooding, Edward Grefenstette. 10 Sep 2025.
ArtifactGen: Benchmarking WGAN-GP vs Diffusion for Label-Aware EEG Artifact Synthesis. Hritik Arasu, Faisal R Jahangiri. Tags: DiffM. 09 Sep 2025.
PLRV-O: Advancing Differentially Private Deep Learning via Privacy Loss Random Variable Optimization. Qin Yang, Nicholas Stout, Meisam Mohammady, Zheng Chen, Ayesha Samreen, Christopher J Quinn, Yan Yan, A. Kundu, Yuan Hong. 08 Sep 2025.
Beyond ATE: Multi-Criteria Design for A/B Testing. Jiachun Li, Kaining Shi, David Simchi-Levi. 06 Sep 2025.
AntiDote: Bi-level Adversarial Training for Tamper-Resistant LLMs. Debdeep Sanyal, Manodeep Ray, Murari Mandal. Tags: AAML. 06 Sep 2025.
Privacy Risks in Time Series Forecasting: User- and Record-Level Membership Inference. Nicolas Johansson, Tobias Olsson, Daniel Nilsson, Johan Östman, Fazeleh Hoseini. Tags: AI4TS. 04 Sep 2025.
Discrete Functional Geometry of ReLU Networks via ReLU Transition Graphs. Sahil Rajesh Dhayalkar. 03 Sep 2025.
Safe-LLaVA: A Privacy-Preserving Vision-Language Dataset and Benchmark for Biometric Safety. Younggun Kim, S. Swetha, Fazil Kagdi, Mubarak Shah. Tags: PILM. 29 Aug 2025.
Embodied AI: Emerging Risks and Opportunities for Policy Action. Jared Perlo, Alexander Robey, Fazl Barez, Luciano Floridi, Jakob Mokander. 28 Aug 2025.
Tackling Federated Unlearning as a Parameter Estimation Problem. Antonio Balordi, Lorenzo Manini, Fabio Stella, Alessio Merlo. Tags: FedML, MU. 26 Aug 2025.
Membership Inference Attacks on LLM-based Recommender Systems. Jiajie He, Yuechun Gu, Min-Chun Chen, Keke Chen. Tags: AAML. 26 Aug 2025.
Attacking LLMs and AI Agents: Advertisement Embedding Attacks Against Large Language Models. Qiming Guo, Jinwen Tang, Xingran Huang. 25 Aug 2025.
Towards a Real-World Aligned Benchmark for Unlearning in Recommender Systems. Pierre Lubitzsch, Olga Ovcharenko, Hao Chen, Maarten de Rijke, Sebastian Schelter. Tags: MU, CML. 23 Aug 2025.
Demystifying Foreground-Background Memorization in Diffusion Models. Jimmy Z. Di, Yiwei Lu, Yaoliang Yu, Gautam Kamath, Adam Dziedzic, Franziska Boenisch. Tags: DiffM. 16 Aug 2025.