ResearchTrend.AI

Versions: v1 · v2 · v3 (latest)

ReCalKV: Low-Rank KV Cache Compression via Head Reordering and Offline Calibration

30 May 2025
Xianglong Yan, Zhiteng Li, Tianao Zhang, Linghe Kong, Yulun Zhang, Yunbo Wang
arXiv abs (2505.24357) · PDF · HTML · GitHub (3★)

Papers citing "ReCalKV: Low-Rank KV Cache Compression via Head Reordering and Offline Calibration"

3 of 3 papers shown
FlashEdit: Decoupling Speed, Structure, and Semantics for Precise Image Editing
Junyi Wu, Zhiteng Li, Haotong Qin, Xiaohong Liu, Linghe Kong, Yulun Zhang, Xiaokang Yang
DiffM · 244 · 0 · 0 · 26 Sep 2025
OjaKV: Context-Aware Online Low-Rank KV Cache Compression with Oja's Rule
Yuxuan Zhu, David H. Yang, Mohammad Mohammadi Amiri, K. Murugesan, Tejaswini Pedapati, Pin-Yu Chen
VLM · 172 · 0 · 0 · 25 Sep 2025
XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization
Aditya Tomar, Coleman Hooper, M Lee, Haocheng Xi, Rishabh Tiwari, Wonjun Kang, Luca Manolache, Michael W. Mahoney, Kurt Keutzer, A. Gholami
MQ · 181 · 0 · 0 · 14 Aug 2025