The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

arXiv:1802.08232 · 22 February 2018
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song

Papers citing "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks"

50 / 710 papers shown

DP-TRAE: A Dual-Phase Merging Transferable Reversible Adversarial Example for Image Privacy Protection
Xia Du, Jiajie Zhu, Jizhe Zhou, Chi-Man Pun, Zheng Lin, Cong Wu, Z. Chen, Jun-Jie Luo
Tags: AAML
11 May 2025

Standing Firm in 5G: A Single-Round, Dropout-Resilient Secure Aggregation for Federated Learning
Yiwei Zhang, R. Behnia, Imtiaz Karim, A. Yavuz, Elisa Bertino
11 May 2025

Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security
Yiwei Zhang, R. Behnia, A. Yavuz, Reza Ebrahimi, E. Bertino
Tags: FedML
09 May 2025

Privacy-Preserving Transformers: SwiftKey's Differential Privacy Implementation
Abdelrahman Abouelenin, M. Abdelrehim, Raffy Fahim, Amr Hendy, Mohamed Afify
08 May 2025

Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation
Vaidehi Patil, Yi-Lin Sung, Peter Hase, Jie Peng, Tianlong Chen, Mohit Bansal
Tags: AAML, MU
01 May 2025

Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Hao Du, Shang Liu, Yang Cao
Tags: AAML
28 Apr 2025

A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
Rui Xin, Niloofar Mireshghallah, Shuyue Stella Li, Michael Duan, Hyunwoo Kim, Yejin Choi, Yulia Tsvetkov, Sewoong Oh, Pang Wei Koh
28 Apr 2025

NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation
Rob Romijnders, Stefanos Laskaridis, Ali Shahin Shamsabadi, Hamed Haddadi
25 Apr 2025

Anti-adversarial Learning: Desensitizing Prompts for Large Language Models
Xuan Li, Zhe Yin, Xiaodong Gu, Beijun Shen
Tags: AAML, MU
25 Apr 2025

Information Leakage of Sentence Embeddings via Generative Embedding Inversion Attacks
Antonios Tragoudaras, Theofanis Aslanidis, Emmanouil Georgios Lionis, Marina Orozco González, Panagiotis Eustratiadis
Tags: MIACV, SILM
23 Apr 2025

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data
Tong Chen, Faeze Brahman, Jiacheng Liu, Niloofar Mireshghallah, Weijia Shi, Pang Wei Koh, Luke Zettlemoyer, Hannaneh Hajishirzi
20 Apr 2025

STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings
Saksham Rastogi, Pratyush Maini, Danish Pruthi
18 Apr 2025

SHA256 at SemEval-2025 Task 4: Selective Amnesia -- Constrained Unlearning for Large Language Models via Knowledge Isolation
Saransh Agrawal, Kuan-Hao Huang
Tags: MU, KELM
17 Apr 2025

Memorization: A Close Look at Books
Iris Ma, Ian Domingo, A. Krone-Martins, Pierre Baldi, Cristina V. Lopes
17 Apr 2025

Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis
Ruijun Deng, Zhihui Lu, Qiang Duan
Tags: FedML
14 Apr 2025

Large language models could be rote learners
Yuyang Xu, Renjun Hu, Haochao Ying, J. Wu, Xing Shi, Wei Lin
Tags: ELM
11 Apr 2025

Sharpness-Aware Parameter Selection for Machine Unlearning
Saber Malekmohammadi, Hong kyu Lee, Li Xiong
Tags: MU
08 Apr 2025

Measuring Déjà vu Memorization Efficiently
Narine Kokhlikyan, Bargav Jayaraman, Florian Bordes, Chuan Guo, Kamalika Chaudhuri
08 Apr 2025

Hide and Seek in Noise Labels: Noise-Robust Collaborative Active Learning with LLM-Powered Assistance
Bo Yuan, Yulin Chen, Yin Zhang, Wei Jiang
Tags: NoLa
03 Apr 2025

SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models
Anil Ramakrishna, Yixin Wan, Xiaomeng Jin, Kai-Wei Chang, Zhiqi Bu, Bhanukiran Vinzamuri, V. Cevher, Mingyi Hong, Rahul Gupta
Tags: AILaw, MU
02 Apr 2025

Forward Learning with Differential Privacy
Mingqian Feng, Zeliang Zhang, Jinyang Jiang, Yijie Peng, Chenliang Xu
01 Apr 2025

Leaking LoRa: An Evaluation of Password Leaks and Knowledge Storage in Large Language Models
Ryan Marinelli, Magnus Eckhoff
Tags: PILM
29 Mar 2025

Efficient Verified Machine Unlearning For Distillation
Yijun Quan, Zushu Li, Giovanni Montana
Tags: MU
28 Mar 2025

Instance-Level Data-Use Auditing of Visual ML Models
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
Tags: MLAU
28 Mar 2025

Malicious and Unintentional Disclosure Risks in Large Language Models for Code Generation
Rafiqul Rabin, Sean McGregor, Nick Judd
Tags: AAML, PILM
27 Mar 2025

Language Models May Verbatim Complete Text They Were Not Explicitly Trained On
Ken Ziyu Liu, Christopher A. Choquette-Choo, Matthew Jagielski, Peter Kairouz, Sanmi Koyejo, Percy Liang, Nicolas Papernot
21 Mar 2025

Learning on LLM Output Signatures for gray-box LLM Behavior Analysis
Guy Bar-Shalom, Fabrizio Frasca, Derek Lim, Yoav Gelberg, Yftah Ziser, Ran El-Yaniv, Gal Chechik, Haggai Maron
18 Mar 2025

Privacy Auditing of Large Language Models
Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal
Tags: PILM
09 Mar 2025

Energy-Latency Attacks: A New Adversarial Threat to Deep Learning
H. B. Meftah, W. Hamidouche, Sid Ahmed Fezza, Olivier Déforges
Tags: AAML
06 Mar 2025

Memorize or Generalize? Evaluating LLM Code Generation with Evolved Questions
Wentao Chen, Lizhe Zhang, Li Zhong, Letian Peng, Zilong Wang, Jingbo Shang
Tags: ELM
04 Mar 2025

Machine Learners Should Acknowledge the Legal Implications of Large Language Models as Personal Data
Henrik Nolte, Michèle Finck, Kristof Meding
Tags: AILaw, PILM
03 Mar 2025

Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training
Jaydeep Borkar, Matthew Jagielski, Katherine Lee, Niloofar Mireshghallah, David A. Smith, Christopher A. Choquette-Choo
Tags: PILM
24 Feb 2025

Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility
Martin Kuo, Jingyang Zhang, Jianyi Zhang, Minxue Tang, Louis DiValentin, ..., William Chen, Amin Hass, Tianlong Chen, Y. Chen, H. Li
Tags: MU, KELM
24 Feb 2025

A General Pseudonymization Framework for Cloud-Based LLMs: Replacing Privacy Information in Controlled Text Generation
Shilong Hou, Ruilin Shang, Zi Long, Xianghua Fu, Yin Chen
24 Feb 2025

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri
24 Feb 2025

Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents
Ivoline Ngong, Swanand Kadhe, Hao Wang, K. Murugesan, Justin D. Weisz, Amit Dhurandhar, K. Ramamurthy
22 Feb 2025

Interrogating LLM design under a fair learning doctrine
Johnny Tian-Zheng Wei, Maggie Wang, Ameya Godbole, Jonathan H. Choi, Robin Jia
22 Feb 2025

UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning
Vaidehi Patil, Elias Stengel-Eskin, Mohit Bansal
Tags: MU, CLL
20 Feb 2025

Captured by Captions: On Memorization and its Mitigation in CLIP Models
Wenhao Wang, Adam Dziedzic, Grace C. Kim, Michael Backes, Franziska Boenisch
11 Feb 2025

Episodic Memories Generation and Evaluation Benchmark for Large Language Models
Alexis Huet, Zied Ben-Houidi, Dario Rossi
Tags: LLMAG
21 Jan 2025

Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy
Khaoula Chehbouni, Martine De Cock, Gilles Caporossi, Afaf Taik, Reihaneh Rabbany, G. Farnadi
21 Jan 2025

Understanding and Mitigating Membership Inference Risks of Neural Ordinary Differential Equations
Sanghyun Hong, Fan Wu, A. Gruber, Kookjin Lee
12 Jan 2025

TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning
Runhua Xu, Bo Li, Chao Li, J. Joshi, Shuai Ma, Jianxin Li
Tags: FedML
10 Jan 2025

Unleashing the Power of Data Tsunami: A Comprehensive Survey on Data Assessment and Selection for Instruction Tuning of Language Models
Yulei Qin, Yuncheng Yang, Pengcheng Guo, Gang Li, Hang Shao, Yuchen Shi, Zihan Xu, Yun Gu, Ke Li, Xing Sun
Tags: ALM
31 Dec 2024

Where Did Your Model Learn That? Label-free Influence for Self-supervised Learning
Nidhin Harilal, Amit Rege, Reza Akbarian Bafghi, M. Raissi, C. Monteleoni
Tags: TDI
22 Dec 2024

The Vulnerability of Language Model Benchmarks: Do They Accurately Reflect True LLM Performance?
Sourav Banerjee, Ayushi Agarwal, Eishkaran Singh
Tags: ELM
02 Dec 2024

Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon, Wonje Jeung, Albert No
02 Dec 2024

Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models
Olivia Ma, Jonathan Passerat-Palmbach, Dmitrii Usynin
24 Nov 2024

No-regret Exploration in Shuffle Private Reinforcement Learning
Shaojie Bai, Mohammad Sadegh Talebi, Chengcheng Zhao, Peng Cheng, Jiming Chen
Tags: OffRL
18 Nov 2024

Preempting Text Sanitization Utility in Resource-Constrained Privacy-Preserving LLM Interactions
Robin Carpentier, B. Zhao, H. Asghar, Dali Kaafar
18 Nov 2024