ResearchTrend.AI

Let the Code LLM Edit Itself When You Edit the Code

3 July 2024
Zhenyu He
Jun Zhang
Shengjie Luo
Jingjing Xu
Zongzhang Zhang
Di He
    KELM

Papers citing "Let the Code LLM Edit Itself When You Edit the Code"

3 papers shown
$A^3$: Attention-Aware Accurate KV Cache Fusion for Fast Large Language Model Serving
Yuechi Zhou
Yi Su
J. Zhang
Juntao Li
Qingrong Xia
Zhefeng Wang
Xinyu Duan
Baoxing Huai
13 Nov 2025
LLM as Effective Streaming Processor: Bridging Streaming-Batch Mismatches with Group Position Encoding
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Junlong Tong
Jinlan Fu
Zixuan Lin
Yingqi Fan
Anhao Zhao
Hui Su
Xiaoyu Shen
22 May 2025
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty
International Conference on Machine Learning (ICML), 2024
Yuhui Li
Fangyun Wei
Chao Zhang
Hongyang R. Zhang
26 Jan 2024