LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits

Abstract

Fine-tuning large language models (LLMs) is increasingly costly as models scale to hundreds of billions of parameters, and even parameter-efficient fine-tuning (PEFT) methods like LoRA remain resource-intensive. We introduce LowRA, the first framework to enable LoRA fine-tuning below 2 bits per parameter with minimal performance loss. LowRA optimizes fine-grained quantization (mapping, threshold selection, and precision assignment) while leveraging efficient CUDA kernels for scalable deployment. Extensive evaluations across 4 LLMs and 4 datasets show that LowRA achieves a superior performance-precision trade-off above 2 bits and remains accurate down to 1.15 bits, reducing memory usage by up to 50%. Our results highlight the potential of ultra-low-bit LoRA fine-tuning for resource-constrained environments.
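
As a rough illustration of the three quantization knobs the abstract names (mapping, threshold selection, and precision assignment), the sketch below shows how per-group precision assignment can yield a fractional average bit-width such as 1.15 bits. This is not LowRA's actual algorithm: the function names, the uniform codebook, and the variance-based sensitivity proxy are all illustrative assumptions.

```python
# Minimal sketch (not LowRA's implementation) of sub-2-bit quantization:
# each weight group gets a tiny codebook, and a few sensitive groups get
# 2 bits while the rest get 1 bit, so the average lands below 2 bits.
import numpy as np

def quantize_group(w, bits):
    """'Mapping' + 'threshold selection' for one weight group.

    Levels are uniform over the group's range (illustrative choice);
    thresholds are the midpoints between adjacent levels.
    """
    levels = np.linspace(w.min(), w.max(), 2 ** bits)   # 2 or 4 levels
    thresholds = (levels[:-1] + levels[1:]) / 2          # bucket boundaries
    idx = np.searchsorted(thresholds, w)                 # nearest-level index
    return levels[idx]                                   # dequantized weights

def assign_precisions(groups, avg_bits=1.15):
    """'Precision assignment': give 2 bits to the most sensitive groups
    and 1 bit to the rest so the average bit-width hits the target.
    Group variance is a stand-in sensitivity score, not the paper's."""
    n = len(groups)
    n_high = int(round((avg_bits - 1.0) * n))            # groups kept at 2 bits
    order = np.argsort([-g.var() for g in groups])       # most sensitive first
    bits = np.ones(n, dtype=int)
    bits[order[:n_high]] = 2
    return bits

rng = np.random.default_rng(0)
groups = [rng.normal(size=64) for _ in range(100)]
bits = assign_precisions(groups, avg_bits=1.15)
deq = [quantize_group(g, b) for g, b in zip(groups, bits)]
print(f"average bits/param: {bits.mean():.2f}")          # ~1.15
```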

@article{zhou2025_2502.08141,
  title={LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits},
  author={Zikai Zhou and Qizheng Zhang and Hermann Kumbong and Kunle Olukotun},
  journal={arXiv preprint arXiv:2502.08141},
  year={2025}
}