BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching

25 November 2024
Yilong Zhao, Shuo Yang, Kan Zhu, Lianmin Zheng, Baris Kasikci, Yang Zhou, Jiarong Xing, Ion Stoica
arXiv:2411.16102 (PDF, HTML)

Papers citing "BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching"

HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location
Ting Sun, Penghan Wang, Fan Lai
15 Jan 2025