A First Look At Efficient And Secure On-Device LLM Inference Against KV Leakage

arXiv: 2409.04040
International Workshop on Mobility in the Evolving Internet Architecture (MobiArch), 2024
6 September 2024
Authors: Huan Yang, Deyu Zhang, Yudong Zhao, Yuanchun Li, Yunxin Liu
Links: arXiv (abs) · PDF · HTML · GitHub (1,415★)
Papers citing "A First Look At Efficient And Secure On-Device LLM Inference Against KV Leakage"

4 of 4 citing papers shown
Shadow in the Cache: Unveiling and Mitigating Privacy Risks of KV-cache in LLM Inference
Authors: Zhifan Luo, Shuo Shao, Su Zhang, Lijing Zhou, Yuke Hu, Chenxu Zhao, Zhihao Liu, Zhan Qin
13 Aug 2025
Depth Gives a False Sense of Privacy: LLM Internal States Inversion
Authors: Tian Dong, Yan Meng, Shaofeng Li, Guoxing Chen, Zhen Liu, Haojin Zhu
AAML
22 Jul 2025
Taming the Titans: A Survey of Efficient LLM Inference Serving
Authors: Ranran Zhen, Junlin Li, Yixin Ji, Zhiyong Yang, Tong Liu, Qingrong Xia, Xinyu Duan, Zehao Wang, Baoxing Huai, Hao Fei
LLMAG
28 Apr 2025
Watermarking Large Language Models and the Generated Content: Opportunities and Challenges
Asilomar Conference on Signals, Systems and Computers (ACSSC), 2024
Authors: Ruisi Zhang, F. Koushanfar
WaLM
24 Oct 2024