DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models

12 April 2025
Wenjin Ke, Zhe Li, D. Li, Lu Tian, E. Barsoum
MQ

Papers citing "DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models"

1 / 1 papers shown
Enhancing Ultra-Low-Bit Quantization of Large Language Models Through Saliency-Aware Partial Retraining
Deyu Cao, Samin Aref
MQ
14 Apr 2025