
Safeguarding LLM Embeddings in End-Cloud Collaboration via Entropy-Driven Perturbation

17 March 2025
Shuaifan Jin
Xiaoyi Pang
Zhibo Wang
He Wang
Jiacheng Du
Jiahui Hu
Kui Ren
Abstract

Recent studies improve on-device language model (LM) inference through end-cloud collaboration, where the end device retrieves useful information from cloud databases to enhance local processing, a paradigm known as Retrieval-Augmented Generation (RAG). Typically, to retrieve information from the cloud while safeguarding privacy, the end device transforms original data into embeddings with a local embedding model. However, recently emerging Embedding Inversion Attacks (EIAs) can still recover the original data from text embeddings (e.g., by training a recovery model that maps embeddings back to the original texts), posing a significant threat to user privacy. To address this risk, we propose EntroGuard, an entropy-driven, perturbation-based embedding privacy protection method that protects the privacy of text embeddings while maintaining retrieval accuracy during end-cloud collaboration. Specifically, to defeat various EIAs, we perturb the embeddings so as to increase the entropy of the recovered text in the common structure of recovery models, thus steering the embeddings toward meaningless texts rather than the original sensitive texts during the recovery process. To maintain retrieval performance in the cloud, we constrain the perturbations within a bound, reducing them where they are redundant and increasing them where they are sparse. Moreover, EntroGuard can be directly integrated into end devices without requiring any modification to the embedding model. Extensive experimental results demonstrate that EntroGuard reduces the risk of privacy leakage by up to 8 times compared to existing privacy-preserving methods, with negligible loss of retrieval performance.
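The core trade-off the abstract describes, a perturbation large enough to disrupt inversion but bounded so that cloud-side similarity retrieval still works, can be illustrated with a generic norm-bounded perturbation. This is only a sketch under stated assumptions: EntroGuard chooses the perturbation direction via its entropy-driven objective against recovery models, whereas the demo below uses a random direction, and the `epsilon` bound and 384-dimensional embedding are illustrative choices, not the paper's parameters.

```python
import numpy as np

def perturb_embedding(emb, epsilon=0.05, seed=None):
    """Add a random perturbation whose norm is bounded by epsilon * ||emb||.

    NOTE: a generic bounded perturbation, not EntroGuard's entropy-driven
    optimization; it only illustrates the bound that preserves retrieval.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=emb.shape)
    # Rescale the noise so its norm is exactly epsilon times the embedding norm.
    noise *= epsilon * np.linalg.norm(emb) / np.linalg.norm(noise)
    return emb + noise

def cosine_similarity(a, b):
    """Cosine similarity, the usual retrieval metric for text embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in for a real text embedding (e.g., from a local embedding model).
emb = np.random.default_rng(0).normal(size=384)
protected = perturb_embedding(emb, epsilon=0.05, seed=1)

# A small relative bound keeps the protected embedding close to the original,
# so nearest-neighbor retrieval in the cloud is largely unaffected.
print(cosine_similarity(emb, protected))
```

In high dimensions a random perturbation is nearly orthogonal to the embedding, so with a relative bound of epsilon the cosine similarity stays above (1 - epsilon) / (1 + epsilon); the paper's contribution is to spend this same bounded budget in the directions that most increase the entropy of text recovered by inversion models.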

@article{jin2025_2503.12896,
  title={Safeguarding LLM Embeddings in End-Cloud Collaboration via Entropy-Driven Perturbation},
  author={Shuaifan Jin and Xiaoyi Pang and Zhibo Wang and He Wang and Jiacheng Du and Jiahui Hu and Kui Ren},
  journal={arXiv preprint arXiv:2503.12896},
  year={2025}
}