ResearchTrend.AI

arXiv:2405.17485
Comet: A Communication-efficient and Performant Approximation for Private Transformer Inference

24 May 2024
Xiangrui Xu
Qiao Zhang
R. Ning
Chunsheng Xin
Hongyi Wu
Abstract

The prevalent use of Transformer-like models, exemplified by ChatGPT in modern language-processing applications, underscores the critical need for private inference, which is essential for many cloud-based services built on such models. However, current privacy-preserving frameworks impose a significant communication burden, especially for the non-linear computations in Transformer models. In this paper, we introduce Comet, a novel plug-in method that effectively reduces the communication cost without compromising inference performance. We further introduce an efficient approximation method that eliminates the heavy communication required to find a good initial approximation. We evaluate Comet on BERT and RoBERTa models with the GLUE benchmark datasets, showing up to 3.9× less communication and 3.5× speedups while maintaining competitive model performance compared to the prior art.
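The abstract does not describe Comet's actual construction, but the motivation it states (non-linear layers dominate communication in private inference) can be illustrated with a generic example. The sketch below compares the exact GELU activation against the standard tanh-based surrogate from Hendrycks and Gimpel, which is built only from additions, multiplications, and a single tanh and is therefore far cheaper to evaluate under secret-sharing protocols than the exact Gaussian-CDF form. This is background illustration only, not the paper's method.

```python
import math

def gelu_exact(x):
    # Exact GELU via the Gaussian CDF: x * Phi(x), using erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Standard tanh-based approximation of GELU (Hendrycks & Gimpel).
    # Only additions, multiplications, and one tanh -- operations that
    # cost much less communication in secret-sharing-based MPC than erf.
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# Maximum absolute error over a typical activation range [-5, 5].
xs = [i / 100.0 for i in range(-500, 501)]
max_err = max(abs(gelu_exact(x) - gelu_tanh(x)) for x in xs)
print(f"max |error| on [-5, 5]: {max_err:.6f}")
```

The small maximum error explains why such surrogates can keep model accuracy competitive while drastically cutting the cryptographic cost of each non-linear layer.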
