SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining

4 June 2024
Andi Han, Jiaxiang Li, Wei Huang, Mingyi Hong, Akiko Takeda, Pratik Jawanpuria, Bamdev Mishra
arXiv:2406.02214 (PDF, HTML)
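
For readers skimming this page, here is a minimal, hypothetical sketch of the sparse-plus-low-rank parameterization the title describes: each weight matrix is modeled as W ≈ BA + S, where BA is a trainable low-rank product and S is a sparse term whose support is fixed (e.g., chosen randomly at initialization) so that only its nonzero values are stored and trained. The module name `SLLinear` and all hyperparameters below are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class SLLinear(nn.Module):
    """Hypothetical sketch of a sparse + low-rank linear layer:
    weight modeled as W ~= B @ A + S, with a fixed random support for S."""

    def __init__(self, in_features, out_features, rank=32, sparsity=0.03):
        super().__init__()
        # Low-rank factors: B is (out, rank), A is (rank, in).
        self.B = nn.Parameter(torch.randn(out_features, rank) / rank ** 0.5)
        self.A = nn.Parameter(torch.randn(rank, in_features) / in_features ** 0.5)
        # Sparse term: train only the values on a support that is
        # sampled once here and never changes afterwards.
        n_nonzero = int(sparsity * out_features * in_features)
        support = torch.randperm(out_features * in_features)[:n_nonzero]
        self.register_buffer("support", support)                 # fixed indices
        self.sparse_val = nn.Parameter(torch.zeros(n_nonzero))   # trainable values

    def forward(self, x):
        # Dense reconstruction for clarity; a memory-efficient version
        # would avoid materializing the full (out, in) weight.
        w = (self.B @ self.A).flatten()
        w = w.index_add(0, self.support, self.sparse_val)
        w = w.view(self.B.shape[0], self.A.shape[1])
        return x @ w.t()

# Usage: a drop-in replacement for nn.Linear(1024, 1024) with far fewer
# trainable parameters (rank * (in + out) + n_nonzero).
layer = SLLinear(1024, 1024, rank=64, sparsity=0.03)
y = layer(torch.randn(8, 1024))
```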

Papers citing "SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining"

8 papers:

1. CompAct: Compressed Activations for Memory-Efficient LLM Training
   Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster (20 Oct 2024)

2. Training Neural Networks from Scratch with Parallel Low-Rank Adapters
   Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, Pulkit Agrawal (26 Feb 2024)

3. Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
   Fangzhao Zhang, Mert Pilanci (04 Feb 2024)

4. Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning
   Wenhan Xia, Chengwei Qin, Elad Hazan (08 Jan 2024)

5. Exploring Low Rank Training of Deep Neural Networks
   Siddhartha Rao Kamalakara, Acyr F. Locatelli, Bharat Venkitesh, Jimmy Ba, Y. Gal, Aidan N. Gomez (27 Sep 2022)

6. Initialization and Regularization of Factorized Neural Layers
   M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi (03 May 2021)

7. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
   Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste (31 Jan 2021)

8. Scaling Laws for Neural Language Models
   Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)