
MLorc: Momentum Low-rank Compression for Large Language Model Adaptation

Main: 8 pages · Appendix: 8 pages · Bibliography: 4 pages · 4 figures · 8 tables
Abstract

With the increasing size of large language models (LLMs), full-parameter fine-tuning imposes substantial memory demands. To alleviate this, we propose a novel memory-efficient training paradigm called Momentum Low-rank Compression (MLorc). By directly compressing and reconstructing momentum rather than gradients, MLorc avoids imposing a fixed-rank constraint on weight update matrices and better preserves the training dynamics of full-parameter fine-tuning, in contrast to existing low-rank approaches such as LoRA and GaLore. Empirically, MLorc consistently outperforms other memory-efficient training methods, matches or even exceeds the performance of full fine-tuning with a small rank (e.g., r = 4), and generalizes well across different optimizers -- all without compromising time or memory efficiency. Furthermore, we provide a theoretical guarantee for its convergence under reasonable assumptions.
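To make the compress-and-reconstruct idea concrete, here is a minimal sketch that applies truncated SVD to a momentum matrix and rebuilds a full-size estimate from the low-rank factors. This is only an illustration: the abstract does not specify MLorc's compression and reconstruction operators, so the use of SVD, the function names, and the matrix sizes below are all assumptions.

```python
import numpy as np

def compress(momentum: np.ndarray, rank: int):
    """Illustrative low-rank compression of a momentum matrix via truncated SVD.

    NOTE: a stand-in operator, not the paper's actual method.
    """
    U, s, Vt = np.linalg.svd(momentum, full_matrices=False)
    # Keep only the top-`rank` singular triplets as the compressed state.
    return U[:, :rank], s[:rank], Vt[:rank, :]

def reconstruct(U: np.ndarray, s: np.ndarray, Vt: np.ndarray) -> np.ndarray:
    """Rebuild a full-size momentum estimate from its low-rank factors."""
    return (U * s) @ Vt

# Toy usage: compress a 256x128 "momentum" matrix to rank 4, the small rank
# the abstract reports as matching full fine-tuning, then reconstruct it.
M = np.random.randn(256, 128)
U, s, Vt = compress(M, rank=4)
M_hat = reconstruct(U, s, Vt)
print(M.shape, U.shape, s.shape, Vt.shape, M_hat.shape)
```

Because the compression is applied to the optimizer's momentum state rather than to the weight update itself, each reconstructed update is not forced to lie in a fixed rank-r subspace across training, which is the distinction the abstract draws against LoRA and GaLore.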

@article{shen2025_2506.01897,
  title={MLorc: Momentum Low-rank Compression for Large Language Model Adaptation},
  author={Wei Shen and Zhang Yaxiang and Minhui Huang and Mengfan Xu and Jiawei Zhang and Cong Shen},
  journal={arXiv preprint arXiv:2506.01897},
  year={2025}
}