DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models
arXiv:2504.09223 · 12 April 2025
Wenjin Ke, Zhe Li, D. Li, Lu Tian, E. Barsoum

Papers citing "DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models"
Enhancing Ultra-Low-Bit Quantization of Large Language Models Through Saliency-Aware Partial Retraining
Deyu Cao, Samin Aref
14 April 2025