Confidential Prompting: Protecting User Prompts from Cloud LLM Providers

Abstract

Our work tackles the challenge of securing user inputs in cloud-hosted large language model (LLM) serving while preserving model confidentiality, output invariance, and compute efficiency. We introduce Secure Partitioned Decoding (SPD), which uses confidential computing to confine user prompts to a trusted execution environment (TEE), namely a confidential virtual machine (CVM), while still allowing service providers to generate tokens efficiently. We also introduce a novel cryptographic method, Prompt Obfuscation (PO), to ensure robustness against reconstruction attacks on SPD. We demonstrate that our approach preserves both prompt confidentiality and LLM serving efficiency. Our solution enables privacy-preserving cloud LLM serving that can handle sensitive prompts, such as clinical records, financial data, and personal information.
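The abstract only sketches SPD at a high level; the property that makes a partitioned decoder possible is that softmax attention over a split KV cache can be computed piecewise and recombined exactly. Below is a minimal numpy sketch of that merge, assuming the prompt's KV entries live inside the CVM and the generated tokens' KV entries live on the host. All function names, shapes, and the split itself are illustrative assumptions for exposition, not the authors' implementation.

import numpy as np

def partial_attention(q, K, V):
    # Attention restricted to one KV partition. Also return the
    # log-sum-exp of the logits so partitions can be merged exactly.
    logits = (K @ q) / np.sqrt(q.shape[-1])
    m = logits.max()
    w = np.exp(logits - m)
    return (w @ V) / w.sum(), m + np.log(w.sum())

def merge(o_a, lse_a, o_b, lse_b):
    # Recombine two partial attention outputs; the log-sum-exp terms
    # restore the correct softmax normalization across both partitions.
    m = max(lse_a, lse_b)
    s_a, s_b = np.exp(lse_a - m), np.exp(lse_b - m)
    return (s_a * o_a + s_b * o_b) / (s_a + s_b)

# Hypothetical split: prompt KV stays in the CVM, generated-token KV on the host.
rng = np.random.default_rng(0)
d, n_prompt, n_generated = 8, 5, 3
q = rng.normal(size=d)
K_cvm, V_cvm = rng.normal(size=(n_prompt, d)), rng.normal(size=(n_prompt, d))
K_host, V_host = rng.normal(size=(n_generated, d)), rng.normal(size=(n_generated, d))

o_cvm, lse_cvm = partial_attention(q, K_cvm, V_cvm)      # computed inside the TEE
o_host, lse_host = partial_attention(q, K_host, V_host)  # computed by the provider
merged = merge(o_cvm, lse_cvm, o_host, lse_host)

# Sanity check: identical to attention over the concatenated cache.
full, _ = partial_attention(q, np.vstack([K_cvm, K_host]), np.vstack([V_cvm, V_host]))
assert np.allclose(merged, full)

This two-way softmax merge is the same device used by blockwise attention kernels. Under the assumed split above, the provider never touches the prompt's key/value tensors directly, which is the kind of confinement the abstract describes; the actual SPD protocol and PO defense are specified in the paper itself.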

@article{gim2025_2409.19134,
  title={Confidential Prompting: Protecting User Prompts from Cloud LLM Providers},
  author={In Gim and Caihua Li and Lin Zhong},
  journal={arXiv preprint arXiv:2409.19134},
  year={2025}
}