InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference
arXiv:2409.04992 · 8 September 2024
Xiurui Pan, Endian Li, Qiao Li, Shengwen Liang, Yizhou Shan, Ke Zhou, Yingwei Luo, Xiaolin Wang, Jie Zhang
Papers citing "InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference" (4 papers)
Cognitive Memory in Large Language Models
Lianlei Shan, Shixian Luo, Zezhou Zhu, Yu Yuan, Yong Wu
LLMAG, KELM · 03 Apr 2025
Seesaw: High-throughput LLM Inference via Model Re-sharding
Qidong Su, Wei Zhao, X. Li, Muralidhar Andoorveedu, Chenhao Jiang, Zhanda Zhu, Kevin Song, Christina Giannoula, Gennady Pekhimenko
LRM · 09 Mar 2025
iServe: An Intent-based Serving System for LLMs
Dimitrios Liakopoulos, Tianrui Hu, Prasoon Sinha, N. Yadwadkar
VLM · 08 Jan 2025
Hydragen: High-Throughput LLM Inference with Shared Prefixes
Jordan Juravsky, Bradley Brown, Ryan Ehrlich, Daniel Y. Fu, Christopher Ré, Azalia Mirhoseini
07 Feb 2024