PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles

22 October 2024
Li Siyan
Vethavikashini Chithrra Raghuram
Omar Khattab
Julia Hirschberg
Zhou Yu
Abstract

Users can divulge sensitive information to proprietary LLM providers, raising significant privacy concerns. While open-source models hosted locally on the user's machine alleviate some concerns, models that users can host locally are often less capable than proprietary frontier models. Toward preserving user privacy while retaining the best quality, we propose Privacy-Conscious Delegation, a novel task for chaining API-based and local models. We utilize recent public collections of user-LLM interactions to construct a natural benchmark called PUPA, which contains personally identifiable information (PII). To study potential approaches, we devise PAPILLON, a multi-stage LLM pipeline that uses prompt optimization to address a simpler version of our task. Our best pipeline maintains high response quality for 85.5% of user queries while restricting privacy leakage to only 7.5%. A sizable gap to the generation quality of proprietary LLMs remains, which we leave for future work. Our data and code are available at this https URL.
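
The abstract describes chaining a locally hosted model with a proprietary API model so that private details never reach the provider. The sketch below is one plausible reading of that delegation pattern, not the authors' released PAPILLON pipeline: it assumes the local model first redacts PII, the API model answers the redacted query, and the local model composes the final reply. The callables local_lm and api_lm are hypothetical placeholders for a trusted local model and a proprietary frontier model.

# Minimal sketch of Privacy-Conscious Delegation (assumptions noted above);
# not the paper's implementation, which uses prompt optimization over a
# multi-stage pipeline.
from typing import Callable

def privacy_conscious_delegation(
    user_query: str,
    local_lm: Callable[[str], str],  # hypothetical: runs a local, trusted model
    api_lm: Callable[[str], str],    # hypothetical: calls a proprietary API model
) -> str:
    # Stage 1: the local model rewrites the query so PII never leaves the machine.
    redacted_query = local_lm(
        "Rewrite the following request so it contains no personally "
        f"identifiable information, preserving the task intent:\n{user_query}"
    )

    # Stage 2: only the redacted prompt is sent to the untrusted API model.
    api_response = api_lm(redacted_query)

    # Stage 3: the local model composes the final answer, reintroducing the
    # private context it withheld from the API model.
    return local_lm(
        "Using the original private request and the draft answer below, "
        "write the final response for the user.\n"
        f"Original request: {user_query}\nDraft answer: {api_response}"
    )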

View on arXiv
@article{siyan2025_2410.17127,
  title={PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles},
  author={Li Siyan and Vethavikashini Chithrra Raghuram and Omar Khattab and Julia Hirschberg and Zhou Yu},
  journal={arXiv preprint arXiv:2410.17127},
  year={2025}
}