Closed-form merging of parameter-efficient modules for Federated Continual Learning

Abstract

Model merging has emerged as a crucial technique in Deep Learning, enabling the integration of multiple models into a unified system while preserving performance and scalability. In this respect, the compositional properties of low-rank adaptation techniques (e.g., LoRA) have proven beneficial, as simply averaging LoRA modules yields a single model that mostly integrates the capabilities of all individual modules. Building on LoRA, we take a step further by requiring that the merged model match the responses of all learned modules. Solving this objective in closed form yields an indeterminate system with A and B as unknown variables, indicating the existence of infinitely many closed-form solutions. To address this challenge, we introduce LoRM, an alternating optimization strategy that trains one LoRA matrix at a time. This allows solving for each unknown variable individually, thus finding a unique solution. We apply our proposed methodology to Federated Class-Incremental Learning (FCIL), ensuring alignment of model responses both between clients and across tasks. Our method demonstrates state-of-the-art performance across a range of FCIL scenarios. The code to reproduce our experiments is available at this http URL.
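The abstract states that jointly solving the response-matching objective for both LoRA factors is indeterminate, and that LoRM resolves this by alternating closed-form solves for one factor at a time. The sketch below is a minimal illustration of such an alternating closed-form scheme using NumPy pseudo-inverses; the function name `merge_lora_alternating`, the probe activations `X`, and the choice of the averaged module responses as the matching target are assumptions for illustration, not the paper's exact FCIL formulation (which also aligns responses across clients and tasks).

```python
import numpy as np

def merge_lora_alternating(As, Bs, X, rank, n_iters=10, seed=0):
    """Illustrative alternating closed-form merging of LoRA modules.

    As, Bs : per-module LoRA factors, A_k of shape (rank, d_in), B_k of shape (d_out, rank)
    X      : probe activations, shape (d_in, n) -- an assumption of this sketch
    Returns a merged pair (A, B) approximately minimising
        sum_k || B A X - B_k A_k X ||_F^2.
    """
    rng = np.random.default_rng(seed)
    d_in = As[0].shape[1]
    d_out = Bs[0].shape[0]

    # Target: the average response of the individual modules on the probe inputs.
    T = np.mean([Bk @ Ak @ X for Ak, Bk in zip(As, Bs)], axis=0)  # (d_out, n)

    # Initialise one factor randomly, then alternate closed-form solves.
    A = 0.01 * rng.standard_normal((rank, d_in))
    B = np.zeros((d_out, rank))

    for _ in range(n_iters):
        # Fix A, solve for B in closed form (least squares): B = T (A X)^+
        M = A @ X                                       # (rank, n)
        B = T @ np.linalg.pinv(M)                       # (d_out, rank)
        # Fix B, solve for A in closed form: A = B^+ T X^+
        A = np.linalg.pinv(B) @ T @ np.linalg.pinv(X)   # (rank, d_in)

    return A, B


# Hypothetical usage: merge three rank-4 modules acting on 64-dim inputs.
As = [np.random.randn(4, 64) for _ in range(3)]
Bs = [np.random.randn(128, 4) for _ in range(3)]
X = np.random.randn(64, 256)  # probe activations (assumed available)
A_merged, B_merged = merge_lora_alternating(As, Bs, X, rank=4)
```

Each half-step is an ordinary linear least-squares problem, which is why fixing one factor makes the solution for the other unique (up to the usual pseudo-inverse minimum-norm convention), mirroring the one-matrix-at-a-time strategy the abstract attributes to LoRM.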

@article{salami2025_2410.17961,
  title={Closed-form merging of parameter-efficient modules for Federated Continual Learning},
  author={Riccardo Salami and Pietro Buzzega and Matteo Mosconi and Jacopo Bonato and Luigi Sabetta and Simone Calderara},
  journal={arXiv preprint arXiv:2410.17961},
  year={2025}
}