READ: Recurrent Adaptation of Large Transformers (arXiv:2305.15348)
24 May 2023
Sida I. Wang, John Nguyen, Ke Li, Carole-Jean Wu
ArXiv (abs) | PDF | HTML | HuggingFace

Papers citing "READ: Recurrent Adaptation of Large Transformers" (7 of 7 papers shown)

SliceFine: The Universal Winning-Slice Hypothesis for Pretrained Networks
Md. Kowsher, Ali O. Polat, Ehsan Mohammady Ardehaly, Mehrdad Salehi, Zia Ghiasi, Prasanth Murali, Chen Chen
09 Oct 2025

Towards Optimal Adapter Placement for Efficient Transfer Learning
Aleksandra I. Nowak, Otniel-Bogdan Mercea, Anurag Arnab, Jonas Pfeiffer, Yann N. Dauphin, Utku Evci
21 Oct 2024

Sustainable self-supervised learning for speech representations
Luis Lugo, Valentin Vielzeuf
11 Jun 2024

DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model
Chao Gao, Sai Qian Zhang
Tags: ALM
08 Apr 2024

Hierarchical Recurrent Adapters for Efficient Multi-Task Adaptation of Large Speech Models
Tsendsuren Munkhdalai, Youzheng Chen, K. Sim, Fadi Biadsy, Tara N. Sainath, P. M. Mengibar
25 Mar 2024

Efficiency-oriented approaches for self-supervised speech representation learning
Luis Lugo, Valentin Vielzeuf
Tags: SSL
18 Dec 2023

Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2023
Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, Huazhe Xu
Tags: OffRL, RALM
31 Oct 2023