FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs
arXiv:2308.09723 · 16 August 2023
Young Jin Kim, Rawn Henry, Raffy Fahim, Hany Awadalla
Papers citing "FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs" (4 papers)
1. Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization
   Minsu Kim, Seongmin Hong, RyeoWook Ko, S. Choi, Hunjong Lee, Junsoo Kim, J. Kim, Jongse Park
   24 Mar 2025
2. Scaling Laws for Post-Training Quantized Large Language Models
   Zifei Xu, Alexander Lan, W. Yazar, T. Webb, Sayeh Sharify, Xin Eric Wang
   15 Oct 2024
3. Scalable and Efficient MoE Training for Multitask Multilingual Models
   Young Jin Kim, A. A. Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Awadalla
   22 Sep 2021
4. ZeRO-Offload: Democratizing Billion-Scale Model Training
   Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He
   18 Jan 2021