Towards Minimal Fine-Tuning of VLMs

Tiange Luo
Lajanugen Logeswaran
Jaekyeom Kim
Justin Johnson
Honglak Lee
Main: 8 pages · Bibliography: 2 pages · Appendix: 7 pages · 11 figures · 10 tables
Abstract

We introduce Image-LoRA, a lightweight parameter-efficient fine-tuning (PEFT) recipe for transformer-based vision-language models (VLMs). Image-LoRA applies low-rank adaptation only to the value path of attention layers and only within the visual-token span, reducing adapter-only training FLOPs roughly in proportion to the visual-token fraction. We further adapt only a subset of attention heads, selected using head-influence scores estimated with a rank-1 Image-LoRA, and stabilize per-layer updates via selection-size normalization. Across screen-centric grounding and referring benchmarks spanning text-heavy to image-heavy regimes, Image-LoRA matches or closely approaches standard LoRA accuracy while using fewer trainable parameters and lower adapter-only training FLOPs. The method also preserves the VLM's pure-text reasoning performance after fine-tuning, as shown on GSM8K.
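The core mechanism described above, low-rank adaptation restricted to the attention value projection and applied only at visual-token positions, can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumptions of our own (a plain `nn.Linear` value projection, a boolean `visual_mask`, and the module and argument names used here), not the authors' implementation; the head-subset selection and selection-size normalization from the abstract are omitted.

```python
# Minimal sketch: LoRA on the value projection, applied only to visual tokens.
# `ValuePathImageLoRA`, `visual_mask`, `rank`, and `alpha` are illustrative names.
import torch
import torch.nn as nn


class ValuePathImageLoRA(nn.Module):
    """Wraps a frozen value projection; the low-rank update is computed
    only at positions flagged as visual tokens."""

    def __init__(self, v_proj: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.v_proj = v_proj
        for p in self.v_proj.parameters():
            p.requires_grad_(False)            # base weights stay frozen
        d_in, d_out = v_proj.in_features, v_proj.out_features
        self.lora_A = nn.Linear(d_in, rank, bias=False)
        self.lora_B = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.lora_B.weight)     # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor, visual_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); visual_mask: (batch, seq) bool marking image tokens.
        out = self.v_proj(x)
        # Compute the low-rank update only on visual-token rows, so adapter
        # FLOPs scale with the visual-token fraction of the sequence.
        x_vis = x[visual_mask]                              # (n_visual, d_in)
        delta_vis = self.lora_B(self.lora_A(x_vis)) * self.scale
        out = out.clone()
        out[visual_mask] = out[visual_mask] + delta_vis
        return out


# Illustrative usage (shapes only; the wrapped projection and the mask marking
# image-patch positions would come from the actual VLM).
v_proj = nn.Linear(1024, 1024)
layer = ValuePathImageLoRA(v_proj, rank=8)
x = torch.randn(2, 512, 1024)
visual_mask = torch.zeros(2, 512, dtype=torch.bool)
visual_mask[:, :256] = True      # pretend the first 256 tokens are visual
y = layer(x, visual_mask)        # (2, 512, 1024)
```

Restricting the update to a chosen subset of attention heads could, under the same assumptions, be added by zeroing the columns of the `lora_B` output that correspond to unselected heads; how the paper scores and selects heads is described in its method section, not reproduced here.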
