Platonic Grounding for Efficient Multimodal Language Models

The hyperscaling of data and parameter count in Transformer-based models is yielding diminishing performance improvements, especially when weighed against training costs. Such plateauing underscores the importance of methods that enable more efficient finetuning and inference while retaining comparable performance. This is especially relevant for multimodal learning paradigms, where the inference cost of processing multimodal tokens can determine a model's practical viability. At the same time, research on representations and mechanistic interpretability has improved our understanding of the inner workings of Transformer-based models; one such line of work reveals an implicit cross-modal alignment in the deeper layers of pretrained models. Taking inspiration from this, we motivate and propose a simple modification to existing multimodal frameworks that rely on aligning pretrained models. We demonstrate that our approach matches and, in some cases, even improves upon the performance of baseline methods while achieving significant savings in both training- and inference-time compute. Our work also has implications for efficiently combining pretrained models into larger systems.
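To make the core idea concrete, below is a minimal, hypothetical sketch of one way deeper-layer cross-modal alignment could translate into compute savings: projected visual tokens are injected at a deeper decoder layer rather than at the input, so the early layers process only text. The toy modules, the projector, and the `inject_at` parameter are illustrative assumptions for exposition, not the paper's confirmed implementation.

import torch
import torch.nn as nn

class ToyDecoderLM(nn.Module):
    """Minimal stand-in for a pretrained decoder-only LM (names are illustrative)."""
    def __init__(self, d_model=256, n_layers=8, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(d_model, vocab)

class DeepGroundedVLM(nn.Module):
    """Fuse projected visual tokens at layer `inject_at` instead of layer 0.

    Early layers see only the T text tokens; later layers see V + T tokens,
    so the visual tokens skip the first `inject_at` layers entirely
    (an assumption of this sketch, motivated by deeper-layer alignment).
    """
    def __init__(self, lm: ToyDecoderLM, vis_dim=512, inject_at=4):
        super().__init__()
        self.lm = lm
        self.inject_at = inject_at
        self.projector = nn.Linear(vis_dim, lm.embed.embedding_dim)  # vision -> LM space

    def forward(self, text_ids, vis_feats):
        h = self.lm.embed(text_ids)                   # (B, T, d)
        for i, layer in enumerate(self.lm.layers):
            if i == self.inject_at:
                v = self.projector(vis_feats)         # (B, V, d)
                h = torch.cat([v, h], dim=1)          # prepend visual tokens mid-stack
            h = layer(h)
        return self.lm.head(h)

# Usage: visual features would come from any frozen pretrained encoder (not shown).
lm = ToyDecoderLM()
model = DeepGroundedVLM(lm, vis_dim=512, inject_at=4)
logits = model(torch.randint(0, 1000, (2, 16)), torch.randn(2, 49, 512))
print(logits.shape)  # (2, 49 + 16, 1000)

In a setup like this, the saving comes from the visual tokens never entering the first `inject_at` layers, which reduces both finetuning and inference FLOPs roughly in proportion to how deep the injection point sits.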