LoFA: Learning to Predict Personalized Priors for Fast Adaptation of Visual Generative Models

Yiming Hao
Mutian Xu
Chongjie Ye
Jie Qin
Shunlin Lu
Yipeng Qin
Xiaoguang Han
Main: 8 pages · Bibliography: 3 pages · Appendix: 5 pages · 17 figures · 9 tables
Abstract

Personalizing visual generative models to meet specific user needs has gained increasing attention, yet current methods like Low-Rank Adaptation (LoRA) remain impractical due to their demand for task-specific data and lengthy optimization. While a few hypernetwork-based approaches attempt to predict adaptation weights directly, they struggle to map fine-grained user prompts to complex LoRA distributions, limiting their practical applicability. To bridge this gap, we propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. We first identify a key property of LoRA: structured distribution patterns emerge in the relative changes between LoRA and base model parameters. Building on this, we design a two-stage hypernetwork: first predicting relative distribution patterns that capture key adaptation regions, then using these to guide final LoRA weight prediction. Extensive experiments demonstrate that our method consistently predicts high-quality personalized priors within seconds, across multiple tasks and user prompts, even outperforming conventional LoRA that requires hours of processing. Project page: this https URL.
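The two-stage idea in the abstract can be illustrated with a minimal sketch: a first network maps a prompt embedding to a relative-change pattern over a base layer's parameters, and a second network, conditioned on that pattern, emits low-rank LoRA factors. All dimensions, layer shapes, and the plain linear hypernetworks below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper)
d_prompt = 64          # prompt-embedding dimension
d_in, d_out = 32, 32   # shape of one base-model weight matrix
rank = 4               # LoRA rank

def linear(d_i, d_o):
    """A random linear map standing in for a trained hypernetwork layer."""
    return rng.standard_normal((d_i, d_o)) * 0.02

# Stage 1: predict a relative-change distribution pattern over the
# base layer's parameters from the user-prompt embedding.
W_pattern = linear(d_prompt, d_in * d_out)

# Stage 2: predict LoRA factors A (d_in x rank) and B (rank x d_out),
# conditioned on both the prompt and the stage-1 pattern.
W_A = linear(d_prompt + d_in * d_out, d_in * rank)
W_B = linear(d_prompt + d_in * d_out, rank * d_out)

def predict_lora(prompt_emb):
    pattern = np.tanh(prompt_emb @ W_pattern)   # key adaptation regions
    h = np.concatenate([prompt_emb, pattern])   # pattern guides stage 2
    A = (h @ W_A).reshape(d_in, rank)
    B = (h @ W_B).reshape(rank, d_out)
    return A @ B                                # predicted low-rank update

delta_W = predict_lora(rng.standard_normal(d_prompt))
print(delta_W.shape)  # the predicted update has the base layer's shape
```

Because the update is the product of two rank-4 factors, the predicted weight change is low-rank by construction, mirroring how a conventionally trained LoRA adapter is applied, but produced in a single forward pass rather than hours of optimization.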
