Investigating the Effect of Parallel Data in the Cross-Lingual Transfer for Vision-Language Encoders
International Conference on Text, Speech and Dialogue (TSD), 2025
Main: 6 pages · Appendix: 3 pages · Bibliography: 3 pages · 1 figure · 9 tables
Abstract
Most pre-trained Vision-Language (VL) models and training data for downstream tasks are available only in English. Multilingual VL tasks are therefore solved via cross-lingual transfer: either fine-tuning a multilingual pre-trained model or transferring the text encoder using parallel data. We study the latter approach: transferring an already trained encoder using parallel data. We investigate two properties of the parallel data that were out of focus in previous work: its domain and the number of languages. Our results show that although machine-translated task data are the best on average, caption-like authentic parallel data outperform them in some languages. Further, we show that most languages benefit from multilingual training.
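The transfer setup described in the abstract is commonly realized as an embedding-alignment objective: a target-language "student" encoder is trained so that its sentence embeddings match those of a frozen English "teacher" encoder on parallel sentence pairs. The paper itself does not give implementation details here; the following is a minimal toy sketch of that objective, with linear maps standing in for the transformer encoders and synthetic feature vectors standing in for parallel sentences (all names and dimensions are illustrative assumptions, not the authors' code).

```python
import numpy as np

# Toy sketch of cross-lingual encoder transfer via parallel data:
# the student is trained with an MSE alignment loss so its embedding of a
# target-language sentence matches the frozen teacher embedding of the
# parallel English sentence. Real systems use transformer encoders; here
# both encoders are linear maps over synthetic feature vectors.
rng = np.random.default_rng(0)
dim_in, dim_out, n_pairs = 32, 8, 256

teacher_W = rng.normal(size=(dim_in, dim_out))   # frozen English teacher
student_W = rng.normal(size=(dim_in, dim_out))   # trainable target-language student

# "Parallel data": paired feature vectors for (English, target) sentences.
# For this toy we assume perfectly aligned inputs in both languages.
X_en = rng.normal(size=(n_pairs, dim_in))
X_tgt = X_en.copy()

lr = 0.2
for step in range(500):
    teacher_emb = X_en @ teacher_W               # fixed alignment targets
    student_emb = X_tgt @ student_W
    diff = student_emb - teacher_emb
    loss = (diff ** 2).mean()                    # MSE alignment objective
    grad = 2 * X_tgt.T @ diff / diff.size        # exact gradient of the MSE w.r.t. student_W
    student_W -= lr * grad

print(f"final alignment MSE: {loss:.6f}")
```

With enough parallel pairs the toy student recovers the teacher mapping exactly; in the realistic setting the quality of this alignment depends on the domain of the parallel data and the set of languages it covers, which is precisely what the paper varies.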
