LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
Klaudia Bałazy, Mohammadreza Banaei, Karl Aberer, Jacek Tabor
arXiv:2405.17604 · 27 May 2024

Papers citing "LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters"
9 / 9 papers shown

• Model Diffusion for Certifiable Few-shot Transfer Learning
  Fady Rezk, Royson Lee, H. Gouk, Timothy M. Hospedales, Minyoung Kim — 10 Feb 2025

• RandLoRA: Full-rank parameter-efficient fine-tuning of large models
  Paul Albert, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, Ehsan Abbasnejad — 03 Feb 2025

• EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition
  Hamid Nasiri, Peter Garraghan — 21 Jan 2025

• Talking Heads: Understanding Inter-layer Communication in Transformer Language Models
  Jack Merullo, Carsten Eickhoff, Ellie Pavlick — 13 Jun 2024

• The Impact of Initialization on LoRA Finetuning Dynamics [AI4CE]
  Soufiane Hayou, Nikhil Ghosh, Bin Yu — 12 Jun 2024

• Gemma: Open Models Based on Gemini Research and Technology [VLM, LLMAG]
  Gemma Team: Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, ..., Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy — 13 Mar 2024

• ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
  Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi — 24 May 2022

• The Power of Scale for Parameter-Efficient Prompt Tuning [VPVLM]
  Brian Lester, Rami Al-Rfou, Noah Constant — 18 Apr 2021

• GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM]
  Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman — 20 Apr 2018