Should VLMs be Pre-trained with Image Data?
Pre-trained LLMs that are further trained with image data perform well on vision-language tasks. While adding images during a second training phase effectively unlocks this capability, it is unclear how much this two-step pipeline gains or loses relative to VLMs that integrate images earlier in the training process. To investigate this, we train models spanning various datasets, scales, image-text ratios, and amounts of pre-training completed before introducing vision tokens. We then fine-tune these models and evaluate their downstream performance on a suite of vision-language and text-only tasks. We find that pre-training with a mixture of image and text data allows models to perform better on vision-language tasks while maintaining strong performance on text-only evaluations. Averaged over six diverse tasks, we find that for a 1B-parameter model, introducing visual tokens 80% of the way through pre-training yields a 2% average improvement over introducing visual tokens to a fully pre-trained model.
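To make the setup concrete, below is a minimal Python sketch (not the authors' code) of a pre-training data schedule that switches from text-only batches to a text/image-text mixture at a chosen fraction of total steps. All function names, the 80% introduction point as a default, and the 50/50 post-switch mixture ratio are illustrative assumptions, not details taken from the paper.

import random

def sample_text_batch():
    # Placeholder for drawing a text-only batch from the pre-training corpus.
    return {"modality": "text"}

def sample_image_text_batch():
    # Placeholder for drawing an interleaved image-text batch.
    return {"modality": "image-text"}

def batch_schedule(total_steps, image_intro_frac=0.8, image_ratio=0.5, seed=0):
    """Yield one batch per step: before image_intro_frac * total_steps only
    text batches are drawn; afterwards image-text batches are mixed in with
    probability image_ratio."""
    rng = random.Random(seed)
    switch_step = int(image_intro_frac * total_steps)
    for step in range(total_steps):
        if step < switch_step or rng.random() > image_ratio:
            yield step, sample_text_batch()
        else:
            yield step, sample_image_text_batch()

if __name__ == "__main__":
    counts = {"text": 0, "image-text": 0}
    for _, batch in batch_schedule(total_steps=10_000):
        counts[batch["modality"]] += 1
    print(counts)  # roughly 90% text and 10% image-text batches with the defaults above

A real run would replace the placeholder samplers with dataloaders over the text and image-text corpora and feed the yielded batches to the training loop; the point is only to show how "introducing visual tokens 80% of the way through pre-training" translates into a step-indexed mixture switch.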
@article{keh2025_2503.07603,
  title={Should VLMs be Pre-trained with Image Data?},
  author={Sedrick Keh and Jean Mercat and Samir Yitzhak Gadre and Kushal Arora and Igor Vasiljevic and Benjamin Burchfiel and Shuran Song and Russ Tedrake and Thomas Kollar and Ludwig Schmidt and Achal Dave},
  journal={arXiv preprint arXiv:2503.07603},
  year={2025}
}