SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
29 May 2023
A. Ardakani
Altan Haan
Shangyin Tan
Doru-Thom Popovici
Alvin Cheung
Costin Iancu
Koushik Sen
Links: arXiv 2305.18513 (abs) · PDF · HTML · HuggingFace (2 upvotes) · GitHub (10★)
Papers citing "SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics" (2 papers)
Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2025
Zikai Zhang
Ping Liu
Jiahao Xu
Rui Hu
13 Jun 2025
Fed-pilot: Optimizing LoRA Allocation for Efficient Federated Fine-Tuning with Heterogeneous Clients
Zikai Zhang
Jiahao Xu
Ping Liu
Rui Hu
14 Oct 2024