CompAct: Compressed Activations for Memory-Efficient LLM Training

20 October 2024
Yara Shamshoum
Nitzan Hodos
Yuval Sieradzki
Assaf Schuster
Abstract

We introduce CompAct, a technique that reduces peak GPU memory utilization by 25-30% for pretraining and by 50% for fine-tuning of LLMs. Peak device memory is a major limiting factor in training LLMs, and various recent works aim to reduce model memory. However, most works do not target the largest component of allocated memory during training: the model's compute graph, which is stored for the backward pass. By storing low-rank, compressed activations to be used in the backward pass, we greatly reduce the required memory, unlike previous methods that only reduce optimizer overheads or the number of trained parameters. Our compression uses random projection matrices, thereby avoiding additional memory overheads. Comparisons with previous techniques for both pretraining and fine-tuning show that CompAct substantially improves existing compute-performance tradeoffs. We expect CompAct's savings to scale even further for larger models.
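
To make the idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a linear layer that keeps only a random low-rank projection of its input activations for the backward pass, regenerating the projection matrix from a seed so that it never has to be stored. The class name CompressedLinearFn and the rank and seed arguments are assumptions made for this example.

import torch

class CompressedLinearFn(torch.autograd.Function):
    # Sketch of activation compression for a linear layer: instead of saving
    # the full input x for backward, save x projected onto a low-rank subspace
    # with a Gaussian random matrix that is recreated from a seed in backward.

    @staticmethod
    def forward(ctx, x, weight, rank, seed):
        # x: (batch, d_in), weight: (d_out, d_in)
        y = x @ weight.t()
        d_in = x.shape[-1]
        gen = torch.Generator(device=x.device).manual_seed(seed)
        # Entries ~ N(0, 1/rank), so E[P @ P.t()] is the identity.
        P = torch.randn(d_in, rank, generator=gen, device=x.device,
                        dtype=x.dtype) / rank ** 0.5
        x_compressed = x @ P                      # (batch, rank) -- all we store
        ctx.save_for_backward(x_compressed, weight)
        ctx.seed, ctx.rank, ctx.d_in = seed, rank, d_in
        return y

    @staticmethod
    def backward(ctx, grad_y):
        x_compressed, weight = ctx.saved_tensors
        gen = torch.Generator(device=grad_y.device).manual_seed(ctx.seed)
        P = torch.randn(ctx.d_in, ctx.rank, generator=gen,
                        device=grad_y.device, dtype=grad_y.dtype) / ctx.rank ** 0.5
        grad_x = grad_y @ weight                  # exact gradient w.r.t. the input
        # Weight gradient from the compressed activations:
        # grad_W = grad_y^T x  ~=  grad_y^T (x P) P^T, a low-rank approximation.
        grad_w = grad_y.t() @ x_compressed @ P.t()
        return grad_x, grad_w, None, None

# Usage sketch (a fresh seed per layer and step would be used in practice):
x = torch.randn(8, 512, requires_grad=True)
w = torch.randn(256, 512, requires_grad=True)
y = CompressedLinearFn.apply(x, w, 32, 1234)
y.sum().backward()    # x.grad is exact; w.grad is a low-rank approximation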

View on arXiv
@article{shamshoum2025_2410.15352,
  title={CompAct: Compressed Activations for Memory-Efficient LLM Training},
  author={Yara Shamshoum and Nitzan Hodos and Yuval Sieradzki and Assaf Schuster},
  journal={arXiv preprint arXiv:2410.15352},
  year={2025}
}