READ: Recurrent Adaptation of Large Transformers

24 May 2023
Sida I. Wang, John Nguyen, Ke Li, Carole-Jean Wu
ArXiv (abs) · PDF · HTML · HuggingFace (2 upvotes)

Papers citing "READ: Recurrent Adaptation of Large Transformers"

7 / 7 papers shown
SliceFine: The Universal Winning-Slice Hypothesis for Pretrained Networks
Md. Kowsher, Ali O. Polat, Ehsan Mohammady Ardehaly, Mehrdad Salehi, Zia Ghiasi, Prasanth Murali, Chen Chen
09 Oct 2025

Towards Optimal Adapter Placement for Efficient Transfer Learning
Aleksandra I. Nowak, Otniel-Bogdan Mercea, Anurag Arnab, Jonas Pfeiffer, Yann N. Dauphin, Utku Evci
21 Oct 2024

Sustainable self-supervised learning for speech representations
Luis Lugo, Valentin Vielzeuf
11 Jun 2024

DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model
Chao Gao, Sai Qian Zhang
08 Apr 2024

Hierarchical Recurrent Adapters for Efficient Multi-Task Adaptation of Large Speech Models
Tsendsuren Munkhdalai, Youzheng Chen, K. Sim, Fadi Biadsy, Tara N. Sainath, P. M. Mengibar
25 Mar 2024

Efficiency-oriented approaches for self-supervised speech representation learning
Luis Lugo, Valentin Vielzeuf
18 Dec 2023

Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2023
Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, Huazhe Xu
31 Oct 2023