ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2410.17127 · Cited By
PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles

North American Chapter of the Association for Computational Linguistics (NAACL), 2025
22 October 2024
Li Siyan
Vethavikashini Chithrra Raghuram
Omar Khattab
Julia Hirschberg
Zhou Yu

Papers citing "PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles"

45 / 45 papers shown
SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought
Shourya Batra
Pierce Tillman
Samarth Gaggar
Shashank Kesineni
Kevin Zhu
Sunishchal Dev
Ashwinee Panda
Vasu Sharma
Maheep Chaudhary
KELM, PILM, LLMSV, LRM, ELM
399
0
0
11 Nov 2025
Feedback Descent: Open-Ended Text Optimization via Pairwise Comparison
Yoonho Lee
Joseph Boen
Chelsea Finn
72
1
0
11 Nov 2025
An Automated Framework for Strategy Discovery, Retrieval, and Evolution in LLM Jailbreak Attacks
Xu Liu
Yan Chen
Kan Ling
Yichi Zhu
Hengrun Zhang
Guisheng Fan
Huiqun Yu
AAML
73
0
0
04 Nov 2025
PrivacyPAD: A Reinforcement Learning Framework for Dynamic Privacy-Aware Delegation
Zheng Hui
Yijiang River Dong
Sanhanat Sivapiromrat
Ehsan Shareghi
Nigel Collier
61
0
0
16 Oct 2025
VortexPIA: Indirect Prompt Injection Attack against LLMs for Efficient Extraction of User Privacy
Yu Cui
Sicheng Pan
Yifei Liu
Haibin Zhang
Cong Zuo
93
2
0
05 Oct 2025
Operationalizing Data Minimization for Privacy-Preserving LLM Prompting
Jijie Zhou
Niloofar Mireshghallah
Tianshi Li
80
1
0
04 Oct 2025
Position: Privacy Is Not Just Memorization!
Niloofar Mireshghallah
Tianshi Li
PILM
189
1
0
02 Oct 2025
AutoSpec: An Agentic Framework for Automatically Drafting Patent Specification
Ryan Shea
Zhou Yu
73
0
0
23 Sep 2025
Searching for Privacy Risks in LLM Agents via Simulation
Yanzhe Zhang
Diyi Yang
72
5
0
14 Aug 2025
The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage
Skyler Hallinan
Jaehun Jung
Melanie Sclar
Ximing Lu
Abhilasha Ravichander
Sahana Ramnath
Yejin Choi
Sai Praneeth Karimireddy
Niloofar Mireshghallah
Xiang Ren
AAML, MLAU
209
2
0
13 Aug 2025
Multi-module GRPO: Composing Policy Gradients and Prompt Optimization for Language Model Programs
Noah Ziems
Dilara Soylu
Lakshya A Agrawal
Isaac Miller
Liheng Lai
...
Dan Klein
Matei A. Zaharia
Karel D'Oosterlinck
Christopher Potts
Omar Khattab
134
0
0
06 Aug 2025
PPMI: Privacy-Preserving LLM Interaction with Socratic Chain-of-Thought Reasoning and Homomorphically Encrypted Vector Databases
Yubeen Bae
Minchan Kim
Jaejin Lee
Sangbum Kim
Jaehyung Kim
Yejin Choi
Niloofar Mireshghallah
104
3
0
19 Jun 2025
Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers
Tommaso Green
Martin Gubri
Haritz Puerto
Sangdoo Yun
Seong Joon Oh
MIACV, PILM, ELM, LRM
990
9
2
18 Jun 2025
Learning Obfuscations Of LLM Embedding Sequences: Stained Glass Transform
Jay Roberts
Kyle Mylonakis
Sidhartha Roy
Kaan Kale
166
0
0
11 Jun 2025
Automated Privacy Information Annotation in Large Language Model Interactions
Hang Zeng
Xiangyu Liu
Yong Hu
Chaoyue Niu
Fan Wu
Shaojie Tang
Guihai Chen
191
2
0
27 May 2025
Can Large Language Models Really Recognize Your Name?
Dzung Pham
Peter Kairouz
Niloofar Mireshghallah
Eugene Bagdasarian
Chau Minh Pham
Amir Houmansadr
PILM
286
4
0
20 May 2025
BeamClean: Language Aware Embedding Reconstruction
Kaan Kale
Kyle Mylonakis
Jay Roberts
Sidhartha Roy
AAML
369
1
0
19 May 2025
Collaborative LLM Numerical Reasoning with Local Data Protection
Min Zhang
Yuzhe Lu
Yun Zhou
Panpan Xu
Lin Lee Cheong
Chang-Tien Lu
Haozhu Wang
291
0
0
01 Apr 2025
Towards Trustworthy GUI Agents: A Survey
Yucheng Shi
Wenhao Yu
Wenlin Yao
Wenhu Chen
Ninghao Liu
209
16
0
30 Mar 2025
How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities
Aly M. Kassem
Bernhard Schölkopf
Zhijing Jin
148
5
0
20 Mar 2025
Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Ivoline Ngong
Swanand Kadhe
Hao Wang
K. Murugesan
Justin D. Weisz
Amit Dhurandhar
Karthikeyan N. Ramamurthy
219
12
0
22 Feb 2025
Minions: Cost-efficient Collaboration Between On-device and Cloud Language Models
A. Narayan
D. Biderman
Sabri Eyuboglu
Avner May
Scott W. Linderman
James Zou
Christopher Ré
225
9
0
21 Feb 2025
RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage
Peter Yong Zhong
Siyuan Chen
Ruiqi Wang
McKenna McCall
Ben L. Titzer
Heather Miller
Phillip B. Gibbons
LLMAG
328
17
0
17 Feb 2025
PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
Neural Information Processing Systems (NeurIPS), 2024
Yijia Shao
Tianshi Li
Weiyan Shi
Yanchen Liu
Diyi Yang
PILM
455
74
0
29 Aug 2024
Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
Krista Opsahl-Ong
Michael J Ryan
Josh Purtell
David Broman
Christopher Potts
Matei A. Zaharia
Omar Khattab
189
102
0
17 Jun 2024
PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration
Jianwei Wang
Zhengdong Lu
Huiping Zhuang
Haoran Li
Cen Chen
RALM, KELM
456
14
0
03 Jun 2024
WildChat: 1M ChatGPT Interaction Logs in the Wild
International Conference on Learning Representations (ICLR), 2024
Wenting Zhao
Xiang Ren
Jack Hessel
Claire Cardie
Yejin Choi
Yuntian Deng
193
360
0
02 May 2024
ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?
Fan Huang
Haewoon Kwak
Kunwoo Park
Jisun An
ALM, ELM, AI4MH
236
15
0
26 Mar 2024
Security and Privacy Challenges of Large Language Models: A Survey
B. Das
M. H. Amini
Yanzhao Wu
PILM, ELM
324
284
0
30 Jan 2024
DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer
International Conference on Learning Representations (ICLR), 2024
Junyuan Hong
Jiachen T. Wang
Chenhui Zhang
Zhangheng Li
Yue Liu
Zinan Lin
421
54
0
27 Nov 2023
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
International Conference on Learning Representations (ICLR), 2024
Niloofar Mireshghallah
Hyunwoo J. Kim
Xuhui Zhou
Yulia Tsvetkov
Maarten Sap
Reza Shokri
Yejin Choi
PILM
252
143
0
27 Oct 2023
Efficient Memory Management for Large Language Model Serving with PagedAttention
Symposium on Operating Systems Principles (SOSP), 2023
Woosuk Kwon
Zhuohan Li
Siyuan Zhuang
Ying Sheng
Lianmin Zheng
Cody Hao Yu
Joseph E. Gonzalez
Haotong Zhang
Ion Stoica
VLM
1.1K
3,881
0
12 Sep 2023
Joint Prompt Optimization of Stacked LLMs using Variational Inference
Neural Information Processing Systems (NeurIPS), 2023
Alessandro Sordoni
Xingdi Yuan
Marc-Alexandre Côté
Matheus Pereira
Adam Trischler
Ziang Xiao
Arian Hosseini
Friederike Niedtner
Nicolas Le Roux
233
35
0
21 Jun 2023
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
Neural Information Processing Systems (NeurIPS), 2023
Lianmin Zheng
Wei-Lin Chiang
Ying Sheng
Siyuan Zhuang
Zhanghao Wu
...
Dacheng Li
Eric Xing
Haotong Zhang
Joseph E. Gonzalez
Ion Stoica
ALM, OSLM, ELM
2.2K
6,226
0
09 Jun 2023
Large Language Models are not Fair Evaluators
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Peiyi Wang
Lei Li
Liang Chen
Zefan Cai
Dawei Zhu
Binghuai Lin
Yunbo Cao
Qi Liu
Tianyu Liu
Zhifang Sui
ALM
418
759
0
29 May 2023
ChatGPT as a Therapist Assistant: A Suitability Study
Social Science Research Network (SSRN), 2023
Mahshid Eshghie
Mojtaba Eshghie
LM&MAAI4MH
114
26
0
19 Apr 2023
ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2023
Fabrizio Gilardi
Meysam Alizadeh
M. Kubli
AI4MH
412
1,183
0
27 Mar 2023
Analyzing Leakage of Personally Identifiable Information in Language Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Nils Lukas
A. Salem
Robert Sim
Shruti Tople
Lukas Wutschitz
Santiago Zanella Béguelin
PILM
518
305
0
01 Feb 2023
Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity
Terry Yue Zhuo
Yujin Huang
Chunyang Chen
Zhenchang Xing
SILM
358
125
0
30 Jan 2023
What Does it Mean for a Language Model to Preserve Privacy?
Conference on Fairness, Accountability and Transparency (FAccT), 2022
Hannah Brown
Katherine Lee
Fatemehsadat Mireshghallah
Reza Shokri
Florian Tramèr
PILM
277
287
0
11 Feb 2022
The Power of Scale for Parameter-Efficient Prompt Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Brian Lester
Rami Al-Rfou
Noah Constant
VPVLM
1.2K
4,827
0
18 Apr 2021
Extracting Training Data from Large Language Models
USENIX Security Symposium (USENIX Security), 2021
Nicholas Carlini
Florian Tramèr
Eric Wallace
Matthew Jagielski
Ariel Herbert-Voss
...
Tom B. Brown
Basel Alomair
Úlfar Erlingsson
Alina Oprea
Colin Raffel
MLAU, SILM
1.1K
2,402
0
14 Dec 2020
A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric
Zekun Xu
Abhinav Aggarwal
Oluwaseyi Feyisetan
Nathanael Teissier
198
64
0
22 Oct 2020
Training Production Language Models without Memorizing User Data
Swaroop Indra Ramaswamy
Om Thakkar
Rajiv Mathews
Galen Andrew
H. B. McMahan
Françoise Beaufays
FedML
221
95
0
21 Sep 2020
Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations
Web Search and Data Mining (WSDM), 2020
Oluwaseyi Feyisetan
Borja Balle
Thomas Drake
Tom Diethe
171
183
0
20 Oct 2019