
SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics (arXiv:2305.18513)

North American Chapter of the Association for Computational Linguistics (NAACL), 2023
29 May 2023
A. Ardakani
Altan Haan
Shangyin Tan
Doru-Thom Popovici
Alvin Cheung
Costin Iancu
Koushik Sen
Links: arXiv (abs) · PDF · HTML · HuggingFace (2 upvotes) · GitHub (10★)

Papers citing "SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics"

2 of 2 citing papers shown
Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2025
Zikai Zhang, Ping Liu, Jiahao Xu, Rui Hu
13 Jun 2025
Fed-pilot: Optimizing LoRA Allocation for Efficient Federated Fine-Tuning with Heterogeneous Clients
Zikai Zhang, Jiahao Xu, Ping Liu, Rui Hu
14 Oct 2024