Fine-tuning Multimodal Transformers on Edge: A Parallel Split Learning Approach

Abstract

Multimodal transformers integrate diverse data types such as images, audio, and text, advancing tasks like audio-visual understanding and image-text retrieval; yet their heavy parameterization limits deployment on resource-constrained edge devices. Split Learning (SL), which partitions models at a designated cut-layer to offload compute-intensive operations to a server, offers a promising approach for distributed training of multimodal transformers, though its application remains underexplored. We present MPSL, a parallel SL approach for computationally efficient fine-tuning of multimodal transformers in a distributed manner, which eliminates label sharing, client synchronization, and per-client sub-model management. MPSL employs lightweight client-side tokenizers and a unified modality-agnostic encoder, allowing flexible adaptation to task-specific needs. Our evaluation across 7 multimodal datasets demonstrates that MPSL matches or outperforms Federated Learning, reduces client-side computation by 250x, and achieves superior scalability in communication cost as model size grows. Through extensive analysis, we highlight task suitability, trade-offs, and scenarios where MPSL excels, inspiring further exploration.
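To make the cut-layer idea concrete, here is a minimal single-machine sketch of a generic split-learning training step. It is an illustration under toy assumptions, not the MPSL algorithm itself: the "client tokenizer" and "server encoder" are single linear layers, all names and dimensions are hypothetical, and network transfer is simulated by ordinary function arguments. The client computes smashed activations up to the cut-layer; the server runs the heavy part, backpropagates, and returns only the gradient with respect to those activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the real model is a multimodal transformer.
d_in, d_cut, d_out = 8, 4, 1

# Client holds a lightweight tokenizer (here reduced to one linear layer).
W_client = rng.normal(scale=0.1, size=(d_in, d_cut))
# Server holds the compute-heavy encoder (here another linear layer).
W_server = rng.normal(scale=0.1, size=(d_cut, d_out))

def split_training_step(x, y, lr=0.1):
    """One split-learning step across a designated cut-layer (toy sketch)."""
    global W_client, W_server
    # --- client forward: "smashed" activations cross the network ---
    h = x @ W_client                      # (batch, d_cut)
    # --- server forward + loss (plain MSE for illustration) ---
    pred = h @ W_server                   # (batch, d_out)
    loss = float(np.mean((pred - y) ** 2))
    # --- server backward: update its weights, send back grad w.r.t. h ---
    g_pred = 2.0 * (pred - y) / len(y)
    g_h = g_pred @ W_server.T             # only this crosses back to the client
    W_server -= lr * (h.T @ g_pred)
    # --- client backward: finish backprop locally on the tokenizer ---
    W_client -= lr * (x.T @ g_h)
    return loss

x = rng.normal(size=(16, d_in))
y = rng.normal(size=(16, d_out))
losses = [split_training_step(x, y) for _ in range(50)]
```

Note that raw inputs never leave the client; only cut-layer activations and their gradients are exchanged, which is what keeps client-side computation and communication small relative to training the full model locally.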

@article{fudala2025_2502.06355,
  title={Fine-tuning Multimodal Transformers on Edge: A Parallel Split Learning Approach},
  author={Timo Fudala and Vasileios Tsouvalas and Nirvana Meratnia},
  journal={arXiv preprint arXiv:2502.06355},
  year={2025}
}