PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration

3 June 2024
Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Huiping Zhuang, Cen Chen
RALM, KELM

Papers citing "PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration"

11 / 11 papers shown

Safeguarding LLM Embeddings in End-Cloud Collaboration via Entropy-Driven Perturbation
Shuaifan Jin, Xiaoyi Pang, Zhibo Wang, He Wang, Jiacheng Du, Jiahui Hu, Kui Ren
SILM, AAML
17 Mar 2025

RewardDS: Privacy-Preserving Fine-Tuning for Large Language Models via Reward Driven Data Synthesis
Jianwei Wang, Junyao Yang, Haoran Li, Huiping Zhuang, Cen Chen, Ziqian Zeng
SyDa
23 Feb 2025

PAPILLON: Privacy Preservation from Internet-based and Local Language Model Ensembles
Li Siyan, Vethavikashini Chithrra Raghuram, Omar Khattab, Julia Hirschberg, Zhou Yu
22 Oct 2024

Adanonymizer: Interactively Navigating and Balancing the Duality of Privacy and Output Performance in Human-LLM Interaction
Shuning Zhang, Xin Yi, Haobin Xing, Lyumanshan Ye, Yongquan Hu, Hewu Li
19 Oct 2024

Confidential Prompting: Protecting User Prompts from Cloud LLM Providers
In Gim, Caihua Li, Lin Zhong
27 Sep 2024

DDXPlus: A New Dataset For Automatic Medical Diagnosis
Arsène Fansi Tchango, Rishab Goel, Zhi Wen, Julien Martel, J. Ghosn
18 May 2022

You Don't Know My Favorite Color: Preventing Dialogue Representations from Revealing Speakers' Private Personas
Haoran Li, Yangqiu Song, Lixin Fan
26 Apr 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML
29 Jan 2022

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov
24 Feb 2021