On Protecting the Data Privacy of Large Language Models (LLMs): A Survey
arXiv: 2403.05156
8 March 2024
Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, Xiuzhen Cheng
Topics: AILaw, PILM

Papers citing "On Protecting the Data Privacy of Large Language Models (LLMs): A Survey"

15 / 15 papers shown
A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
Rui Xin, Niloofar Mireshghallah, Shuyue Stella Li, Michael Duan, Hyunwoo Kim, Yejin Choi, Yulia Tsvetkov, Sewoong Oh, Pang Wei Koh
28 Apr 2025
Small Models, Big Tasks: An Exploratory Empirical Study on Small Language Models for Function Calling
Ishan Kavathekar, Raghav Donakanti, Ponnurangam Kumaraguru, Karthik Vaidhyanathan
27 Apr 2025
Rethinking Machine Unlearning for Large Language Models
Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, ..., Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu
Topics: AILaw, MU
13 Feb 2024
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Jonathan Evertz, Merlin Chlosta, Lea Schonherr, Thorsten Eisenhofer
10 Feb 2024
Taiyi: A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks
Ling Luo, Jinzhong Ning, Yingwen Zhao, Zhijun Wang, Zeyuan Ding, ..., Yuqi Liu, Zhihao Yang, Jian Wang, Yuanyuan Sun, Hongfei Lin
Topics: LM&MA
20 Nov 2023
PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models
Hongwei Yao, Jian Lou, Zhan Qin
Topics: SILM, AAML
19 Oct 2023
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
Topics: AAML
16 Oct 2023
Who's Harry Potter? Approximate Unlearning in LLMs
Ronen Eldan, M. Russinovich
Topics: MU, MoMe
03 Oct 2023
FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models
Jingwei Sun, Ziyue Xu, Hongxu Yin, Dong Yang, Daguang Xu, Yiran Chen, Holger R. Roth
Topics: VLM
02 Oct 2023
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
Topics: ELM, SILM
19 Sep 2023
Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
Topics: SILM
01 May 2023
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
Topics: KELM, PILM, MU
04 Oct 2022
Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers
Ruisi Zhang, Seira Hidano, F. Koushanfar
Topics: SILM
21 Sep 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
Topics: MLAU, SILM
14 Dec 2020