Most pre-trained Vision-Language (VL) models and the training data for downstream tasks are available only in English. Multilingual VL tasks are therefore solved via cross-lingual transfer: either by fine-tuning a multilingual pre-trained model or by transferring the text encoder using parallel data. We study the latter approach: transferring an already trained encoder using parallel data. We investigate two properties of the parallel data that received little attention in previous work: its domain and the number of languages involved. Our results show that although machine-translated task data perform best on average, caption-like authentic parallel data outperform them for some languages. Further, we show that most languages benefit from multilingual training.
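For illustration, the following is a minimal sketch of the kind of parallel-data transfer the abstract refers to: a student text encoder is trained on parallel sentence pairs to map translations onto the embeddings that a frozen, already trained (e.g., English CLIP) text encoder produces for the source side, in the style of teacher-student distillation. The toy `Encoder` class, the embedding dimension, the MSE objective, and the random "parallel corpus" are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch: cross-lingual transfer of a trained text encoder via
# parallel data, using teacher-student distillation. The Encoder class
# and the toy parallel corpus are placeholders, not the paper's setup.
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed size of the shared vision-language embedding space

class Encoder(nn.Module):
    """Stand-in for a transformer text encoder (mean bag-of-embeddings)."""
    def __init__(self, vocab_size: int, dim: int = EMB_DIM):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, dim)
        return self.emb(token_ids)

# Frozen teacher: the already trained (e.g., English) text encoder.
teacher = Encoder(vocab_size=1000)
for p in teacher.parameters():
    p.requires_grad_(False)

# Student: the encoder being aligned to the teacher's embedding space.
student = Encoder(vocab_size=1000)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Toy parallel corpus: (source tokens, target-language tokens) pairs.
parallel_pairs = [
    (torch.randint(0, 1000, (8, 16)), torch.randint(0, 1000, (8, 16)))
    for _ in range(100)
]

for src_ids, tgt_ids in parallel_pairs:
    with torch.no_grad():
        target_emb = teacher(src_ids)    # embeddings of the source side
    student_emb = student(tgt_ids)       # embeddings of the translations
    loss = mse(student_emb, target_emb)  # pull translations toward teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under this framing, the choice of parallel corpus (machine-translated task data vs. caption-like authentic data) and the set of languages it covers are exactly the knobs the paper varies.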
@article{manea2025_2504.21681,
  title   = {Investigating the Effect of Parallel Data in the Cross-Lingual Transfer for Vision-Language Encoders},
  author  = {Andrei-Alexandru Manea and Jindřich Libovický},
  journal = {arXiv preprint arXiv:2504.21681},
  year    = {2025}
}