ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

VeRA: Vector-based Random Matrix Adaptation

17 October 2023
D. J. Kopiczko
Tijmen Blankevoort
Yuki Markus Asano
VLM

Papers citing "VeRA: Vector-based Random Matrix Adaptation"

50 / 103 papers shown
AltLoRA: Towards Better Gradient Approximation in Low-Rank Adaptation with Alternating Projections
Xin Yu, Yujia Wang, Jinghui Chen, Lingzhou Xue
18 May 2025

Memory-Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation
Fei Wu, Jia Hu, Geyong Min, Shiqiang Wang
16 May 2025

HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
Zheng Lin, Yuxin Zhang, Zhe Chen, Zihan Fang, Xianhao Chen, Praneeth Vepakomma, Wei Ni, Jun Luo, Yue Gao
MoE
05 May 2025

A Survey on Parameter-Efficient Fine-Tuning for Foundation Models in Federated Learning
Jieming Bian, Yuanzhe Peng, Lei Wang, Yin Huang, Jie Xu
FedML
29 Apr 2025

Sparsity Outperforms Low-Rank Projections in Few-Shot Adaptation
Nairouz Mrabah, Nicolas Richet, Ismail Ben Ayed, Eric Granger
BDL, VLM
16 Apr 2025

LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein
MoMe
10 Apr 2025

Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation
Yong Li, Bo Liu, Sheng Huang, Zhe Zhang, Xiaotong Yuan, Richang Hong
31 Mar 2025

Concept-Aware LoRA for Domain-Aligned Segmentation Dataset Generation
Minho Park, S. Park, Jungsoo Lee, Hyojin Park, Kyuwoong Hwang, Fatih Porikli, Jaegul Choo, Sungha Choi
28 Mar 2025

Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID Personalization
Barış Batuhan Topal, Umut Özyurt, Zafer Doğan Budak, Ramazan Gokberk Cinbis
28 Mar 2025

Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data
Zhiyuan Ma, Xinyue Liang, Rongyuan Wu, Xiangyu Zhu, Zhen Lei, Lei Zhang
27 Mar 2025

Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models
Zichen Miao, Wei Chen, Qiang Qiu
24 Mar 2025

LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models
Jian Liang, Wenke Huang, Guancheng Wan, Qu Yang, Mang Ye
MoMe, CLL, AI4CE
21 Mar 2025

Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices
Zihan Wang, Yexiao He, Zheyu Shen, Yu Li, Guoheng Sun, Myungjin Lee, Ang Li
19 Mar 2025

RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang, Xingyu Chen, Zhaoxiang Wang, Jiahao Xu, Tian Liang, Wenxiang Jiao, Zhenru Zhang, Rui Wang
ALM
16 Mar 2025

1LoRA: Summation Compression for Very Low-Rank Adaptation
Alessio Quercia, Zhuo Cao, Arya Bangun, Richard D. Paul, Abigail Morrison, Ira Assent, Hanno Scharr
11 Mar 2025

Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model
Wenke Huang, Jian Liang, Xianda Guo, Yiyang Fang, Guancheng Wan, ..., Bin Yang, He Li, Jiawei Shao, Mang Ye, Bo Du
OffRL, LRM, MLLM, KELM, VLM
06 Mar 2025

LoRA-Null: Low-Rank Adaptation via Null Space for Large Language Models
Pengwei Tang, Y. Liu, Dongjie Zhang, Xing Wu, Debing Zhang
04 Mar 2025

Alchemist: Towards the Design of Efficient Online Continual Learning System
Yuyang Huang, Yuhan Liu, Haryadi S. Gunawi, Beibin Li, Changho Hwang
CLL, OnRL
03 Mar 2025

Unsupervised Parameter Efficient Source-free Post-pretraining
Abhishek Jha, Tinne Tuytelaars, Yuki M. Asano
OOD
28 Feb 2025

PaCA: Partial Connection Adaptation for Efficient Fine-Tuning
Sunghyeon Woo, Sol Namkung, Sunwoo Lee, Inho Jeong, Beomseok Kim, Dongsuk Jeon
28 Feb 2025

K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs
Ziheng Ouyang, Zhen Li, Qibin Hou
MoMe, OffRL
25 Feb 2025

C-LoRA: Continual Low-Rank Adaptation for Pre-trained Models
Xin Zhang, Liang Bai, Xian Yang, Jiye Liang
CLL
25 Feb 2025

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu, Linjun Zhang, Huaxiu Yao, Haoyu Wang
24 Feb 2025

Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
Raghav Singhal, Kaustubh Ponkshe, Rohit Vartak, Lav R. Varshney, Praneeth Vepakomma
FedML
24 Feb 2025

Sparsity May Be All You Need: Sparse Random Parameter Adaptation
Jesus Rios, Pierre L. Dognin, Ronny Luss, K. Ramamurthy
21 Feb 2025

SSH: Sparse Spectrum Adaptation via Discrete Hartley Transformation
Yixian Shen, Qi Bi, Jia-Hong Huang, Hongyi Zhu, Andy D. Pimentel, Anuj Pathania
08 Feb 2025

LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning
Zhekai Du, Yinjie Min, Jingjing Li, Ke Lu, Changliang Zou, Liuhua Peng, Tingjin Chu, Mingming Gong
05 Feb 2025

Sparse High Rank Adapters
K. Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, ..., P. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel
MQ
28 Jan 2025

Fine Tuning without Catastrophic Forgetting via Selective Low Rank Adaptation
Reza Akbarian Bafghi, Carden Bagwell, Avinash Ravichandran, Ashish Shrivastava, M. Raissi
28 Jan 2025

Language Fusion for Parameter-Efficient Cross-lingual Transfer
Philipp Borchert, Ivan Vulić, Marie-Francine Moens, Jochen De Weerdt
12 Jan 2025

GaLore+: Boosting Low-Rank Adaptation for LLMs with Cross-Head Projection
Xutao Liao, Shaohui Li, Yuhui Xu, Zhi Li, Y. Liu, You He
VLM
31 Dec 2024

Transducer Tuning: Efficient Model Adaptation for Software Tasks Using Code Property Graphs
Imam Nur Bani Yusuf, Lingxiao Jiang
18 Dec 2024

FineGates: LLMs Finetuning with Compression using Stochastic Gates
Jonathan Svirsky, Yehonathan Refael, Ofir Lindenbaum
17 Dec 2024

KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models
Fan Wang, Juyong Jiang, Chansung Park, Sunghun Kim, Jing Tang
08 Dec 2024

PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning
Shenghui Li, Edith C.H. Ngai, Fanghua Ye, Thiemo Voigt
SILM
28 Nov 2024

Enhancing Parameter-Efficient Fine-Tuning of Vision Transformers through Frequency-Based Adaptation
S. Ly, Hien Nguyen
28 Nov 2024

Adaptive Blind All-in-One Image Restoration
David Serrano-Lozano, Luis Herranz, Shaolin Su, Javier Vázquez-Corral
VLM
27 Nov 2024

Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation
Seokil Ham, H. Kim, Sangmin Woo, Changick Kim
Mamba
21 Nov 2024

MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning
Xujia Wang, Haiyan Zhao, Shuo Wang, Hanqing Wang, Zhiyuan Liu
MoMe, MoE
30 Oct 2024

LoRA vs Full Fine-tuning: An Illusion of Equivalence
Reece Shuttleworth, Jacob Andreas, Antonio Torralba, Pratyusha Sharma
28 Oct 2024

MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning
Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu
MoE
23 Oct 2024

LoRA-C: Parameter-Efficient Fine-Tuning of Robust CNN for IoT Devices
Chuntao Ding, Xu Cao, Jianhang Xie, Linlin Fan, Shangguang Wang, Zhichao Lu
22 Oct 2024

Towards Optimal Adapter Placement for Efficient Transfer Learning
Aleksandra I. Nowak, Otniel-Bogdan Mercea, Anurag Arnab, Jonas Pfeiffer, Yann N. Dauphin, Utku Evci
21 Oct 2024

LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning
Yiming Shi, Jiwei Wei, Yujia Wu, Ran Ran, Chengwei Sun, Shiyuan He, Yang Yang
ALM
17 Oct 2024

MoR: Mixture of Ranks for Low-Rank Adaptation Tuning
Chuanyu Tang, Yilong Chen, Zhenyu Zhang, Junyuan Shang, Wenyuan Zhang, Yong Huang, Tingwen Liu
MoE
17 Oct 2024

QSpec: Speculative Decoding with Complementary Quantization Schemes
Juntao Zhao, Wenhao Lu, Sheng Wang, Lingpeng Kong, Chuan Wu
MQ
15 Oct 2024

RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
Md. Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi
14 Oct 2024

Parameter-Efficient Fine-Tuning via Selective Discrete Cosine Transform
Yixian Shen, Qi Bi, Jia-Hong Huang, Hongyi Zhu, Anuj Pathania
09 Oct 2024

Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
Ruijia Niu, D. Wu, Rose Yu, Yi Ma
09 Oct 2024

LoRTA: Low Rank Tensor Adaptation of Large Language Models
Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro
05 Oct 2024