arXiv: 2405.17991
VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
28 May 2024
Roy Miles, Pradyumna Reddy, Ismail Elezi, Jiankang Deng
Tags: VLM
Papers citing "VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections" (5 / 5 papers shown)
| Title | Authors | Tags | Metrics | Date |
| --- | --- | --- | --- | --- |
| CompAct: Compressed Activations for Memory-Efficient LLM Training | Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster | MQ, VLM | 34 · 0 · 0 | 20 Oct 2024 |
| Low-Rank Interconnected Adaptation Across Layers | Yibo Zhong, Yao Zhou | OffRL, MoE | 38 · 1 · 0 | 13 Jul 2024 |
| Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning | Wenhan Xia, Chengwei Qin, Elad Hazan | — | 46 · 52 · 0 | 08 Jan 2024 |
| AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition | Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo | — | 141 · 631 · 0 | 26 May 2022 |
| GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 294 · 6,927 · 0 | 20 Apr 2018 |