ResearchTrend.AI
arXiv:2406.17660 / Cited By
Grass: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients

25 June 2024
Aashiq Muhamed, Oscar Li, David Woodruff, Mona Diab, Virginia Smith

Papers citing "Grass: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients"

3 / 3 papers shown
Title: CompAct: Compressed Activations for Memory-Efficient LLM Training
Authors: Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster
Topics: MQ, VLM
Metrics: 34 / 0 / 0
Date: 20 Oct 2024

Title: Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning
Authors: Wenhan Xia, Chengwei Qin, Elad Hazan
Metrics: 46 / 52 / 0
Date: 08 Jan 2024

Title: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Authors: Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
Metrics: 294 / 6,927 / 0
Date: 20 Apr 2018