Numerous works have noted significant similarities in how machine learning models represent the world, even across modalities. Although much effort has been devoted to uncovering the properties and metrics on which these models align, surprisingly little work has explored the causes of this similarity. To advance this line of inquiry, this work examines how two candidate causal factors -- dataset overlap and task overlap -- influence downstream model similarity. The study of dataset overlap is motivated by the reality that large-scale generative AI models are often trained on overlapping corpora of scraped internet data, while the study of task overlap seeks to substantiate a claim of the recent Platonic Representation Hypothesis that task similarity may drive model similarity. We evaluate the effects of both factors through a broad set of experiments and find that each is positively correlated with downstream representational similarity, with the strongest effect arising when the two are combined. Our code and dataset are publicly available.
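The abstract does not specify which similarity measure the experiments use, so the following is only an illustrative sketch of how representational similarity between two models is commonly quantified, using linear Centered Kernel Alignment (CKA; Kornblith et al., 2019) on activations collected from a shared probe set. The function name, probe-set size, and synthetic data below are assumptions for illustration, not the paper's actual pipeline.

import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) activations from model A on a shared probe set.
    Y: (n_samples, d2) activations from model B on the same probe set.
    Returns a similarity score in [0, 1]; higher means more similar representations.
    """
    # Center each feature dimension across the probe samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style terms for linear kernels: cross-covariance vs. self-covariances.
    numerator = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return numerator / denominator

# Toy usage: a linear transform of the same activations yields high CKA.
rng = np.random.default_rng(0)
A = rng.normal(size=(256, 128))          # hypothetical model-A activations
B = A @ rng.normal(size=(128, 64))       # hypothetical model-B activations
print(linear_cka(A, B))

A metric of this form makes it possible to test whether models trained with greater dataset or task overlap end up with more similar representations, which is the kind of comparison the abstract describes.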
@article{li2025_2505.13899,
  title={Exploring Causes of Representational Similarity in Machine Learning Models},
  author={Zeyu Michael Li and Hung Anh Vu and Damilola Awofisayo and Emily Wenger},
  journal={arXiv preprint arXiv:2505.13899},
  year={2025}
}