What Does it Mean for a Language Model to Preserve Privacy?

11 February 2022
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr
PILM
ArXiv · PDF · HTML

Papers citing "What Does it Mean for a Language Model to Preserve Privacy?"

45 / 45 papers shown
PatientDx: Merging Large Language Models for Protecting Data-Privacy in Healthcare
José G. Moreno, Jesus Lovon, M'Rick Robin-Charlet, Christine Damase-Michel, L. Tamine
MoMe, LM&MA · 53 · 0 · 0 · 24 Apr 2025

A General Pseudonymization Framework for Cloud-Based LLMs: Replacing Privacy Information in Controlled Text Generation
Shilong Hou, Ruilin Shang, Zi Long, Xianghua Fu, Yin Chen
62 · 0 · 0 · 24 Feb 2025

Data-Constrained Synthesis of Training Data for De-Identification
Thomas Vakili, Aron Henriksson, Hercules Dalianis
SyDa · 44 · 0 · 0 · 24 Feb 2025

Mitigating the Privacy Issues in Retrieval-Augmented Generation (RAG) via Pure Synthetic Data
Shenglai Zeng, Jiankun Zhang, Pengfei He, J. Ren, Tianqi Zheng, Hanqing Lu, Han Xu, Hui Liu, Yue Xing, Jiliang Tang
135 · 9 · 0 · 21 Feb 2025

MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities against Hard Perturbations
Kaixuan Huang, Jiacheng Guo, Zihao Li, X. Ji, Jiawei Ge, ..., Yangsibo Huang, Chi Jin, Xinyun Chen, Chiyuan Zhang, Mengdi Wang
AAML, LRM · 93 · 7 · 0 · 10 Feb 2025

Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
Berk Atil, Vipul Gupta, Sarkar Snigdha Sarathi Das, R. Passonneau
153 · 0 · 0 · 07 Feb 2025

Privacy-Preserving Edge Speech Understanding with Tiny Foundation Models
A. Benazir, Felix Xiaozhu Lin
41 · 0 · 0 · 29 Jan 2025

Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy
Khaoula Chehbouni, Martine De Cock, Gilles Caporossi, Afaf Taik, Reihaneh Rabbany, G. Farnadi
73 · 0 · 0 · 21 Jan 2025

Human-inspired Perspectives: A Survey on AI Long-term Memory
Zihong He, Weizhe Lin, Hao Zheng, Fan Zhang, Matt Jones, Laurence Aitchison, X. Xu, Miao Liu, Per Ola Kristensson, Junxiao Shen
77 · 2 · 0 · 01 Nov 2024

PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles
Li Siyan, Vethavikashini Chithrra Raghuram, Omar Khattab, Julia Hirschberg, Zhou Yu
21 · 7 · 0 · 22 Oct 2024

Position: LLM Unlearning Benchmarks are Weak Measures of Progress
Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith
MU · 47 · 10 · 0 · 03 Oct 2024

Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data
Jie Zhang, Debeshee Das, Gautam Kamath, Florian Tramèr
MIALM, MIACV · 223 · 16 · 1 · 29 Sep 2024

MEOW: MEMOry Supervised LLM Unlearning Via Inverted Facts
Tianle Gu, Kexin Huang, Ruilin Luo, Yuanqi Yao, Yujiu Yang, Yan Teng, Yingchun Wang
MU · 36 · 4 · 0 · 18 Sep 2024

Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models
Haoyu Tang, Ye Liu, Xukai Liu, Yanghai Zhang, Kai Zhang, Xiaofang Zhou, Enhong Chen
MU · 67 · 3 · 0 · 25 Jul 2024

Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon
USVSN Sai Prashanth, Alvin Deng, Kyle O'Brien, Jyothir S V, Mohammad Aflah Khan, ..., Jacob Ray Fuehne, Stella Biderman, Tracy Ke, Katherine Lee, Naomi Saphra
55 · 12 · 0 · 25 Jun 2024

PlagBench: Exploring the Duality of Large Language Models in Plagiarism Generation and Detection
Jooyoung Lee, Toshini Agrawal, Adaku Uchendu, Thai V. Le, Jinghui Chen, Dongwon Lee
31 · 1 · 0 · 24 Jun 2024

REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space
Tomer Ashuach, Martin Tutek, Yonatan Belinkov
KELM, MU · 63 · 4 · 0 · 13 Jun 2024

Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, Bill Yuchen Lin
SyDa · 32 · 111 · 0 · 12 Jun 2024

Reconstructing training data from document understanding models
Jérémie Dentan, Arnaud Paran, A. Shabou
AAML, SyDa · 38 · 1 · 0 · 05 Jun 2024

Participation in the age of foundation models
Harini Suresh, Emily Tseng, Meg Young, Mary L. Gray, Emma Pierson, Karen Levy
36 · 20 · 0 · 29 May 2024

To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models
George-Octavian Barbulescu, Peter Triantafillou
MU · 31 · 16 · 0 · 06 May 2024

IDPFilter: Mitigating Interdependent Privacy Issues in Third-Party Apps
Shuaishuai Liu, Gergely Biczók
16 · 0 · 0 · 02 May 2024

Exploring the Potential of Large Language Models for Improving Digital Forensic Investigation Efficiency
Akila Wickramasekara, F. Breitinger, Mark Scanlon
42 · 8 · 0 · 29 Feb 2024

FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models
Ziqiang Yuan, Kaiyuan Wang, Shoutai Zhu, Ye Yuan, Jingya Zhou, Yanlin Zhu, Wenqi Wei
34 · 5 · 0 · 19 Jan 2024

"I Want It That Way": Enabling Interactive Decision Support Using Large
  Language Models and Constraint Programming
"I Want It That Way": Enabling Interactive Decision Support Using Large Language Models and Constraint Programming
Connor Lawless
Jakob Schoeffer
Lindy Le
Kael Rowan
Shilad Sen
Cristina St. Hill
Jina Suh
Bahar Sarrafzadeh
33
8
0
12 Dec 2023
DP-NMT: Scalable Differentially-Private Machine Translation
Timour Igamberdiev, Doan Nam Long Vu, Felix Künnecke, Zhuo Yu, Jannik Holmer, Ivan Habernal
29 · 7 · 0 · 24 Nov 2023

Leveraging Large Language Models for Collective Decision-Making
Marios Papachristou, Longqi Yang, Chin-Chia Hsu
LLMAG · 31 · 2 · 0 · 03 Nov 2023

A Systematic Study of Performance Disparities in Multilingual Task-Oriented Dialogue Systems
Songbo Hu, Han Zhou, Moy Yuan, Milan Gritta, Guchun Zhang, Ignacio Iacobacci, Anna Korhonen, Ivan Vulić
28 · 3 · 0 · 19 Oct 2023

Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework
Imdad Ullah, Najm Hassan, S. Gill, Basem Suleiman, T. Ahanger, Zawar Shah, Junaid Qadir, S. Kanhere
35 · 16 · 0 · 19 Oct 2023

Beyond Memorization: Violating Privacy Via Inference with Large Language Models
Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev
PILM · 38 · 74 · 0 · 11 Oct 2023

Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving Framework Based on Text Sanitization
Zhigang Kan, Linbo Qiao, Hao Yu, Liwen Peng, Yifu Gao, Dongsheng Li
26 · 20 · 0 · 14 Jun 2023

Extracting Training Data from Diffusion Models
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace
DiffM · 63 · 569 · 0 · 30 Jan 2023

Context-Aware Differential Privacy for Language Modeling
M. H. Dinh, Ferdinando Fioretto
23 · 2 · 0 · 28 Jan 2023

Differentially Private Natural Language Models: Recent Advances and Future Directions
Lijie Hu, Ivan Habernal, Lei Shen, Di Wang
AAML · 15 · 18 · 0 · 22 Jan 2023

Tensions Between the Proxies of Human Values in AI
Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson
26 · 2 · 0 · 14 Dec 2022

Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems
Songbo Hu, Ivan Vulić, Fangyu Liu, Anna Korhonen
30 · 0 · 0 · 07 Nov 2022

Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset
Peter Henderson, M. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho
AILaw, ELM · 129 · 97 · 0 · 01 Jul 2022

Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Fatemehsadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick
34 · 78 · 0 · 24 Mar 2022

Do Language Models Plagiarize?
Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee
25 · 73 · 0 · 15 Mar 2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
MIALM · 30 · 151 · 0 · 08 Mar 2022

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
134 · 346 · 0 · 13 Oct 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa · 242 · 591 · 0 · 14 Jul 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 269 · 1,812 · 0 · 14 Dec 2020

Calibration of Pre-trained Transformers
Shrey Desai, Greg Durrett
UQLM · 243 · 289 · 0 · 17 Mar 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
226 · 4,460 · 0 · 23 Jan 2020