
Exploring Causes of Representational Similarity in Machine Learning Models

20 May 2025
Zeyu Michael Li
Hung Anh Vu
Damilola Awofisayo
Emily Wenger
    CML
Abstract

Numerous works have noted significant similarities in how machine learning models represent the world, even across modalities. Although much effort has been devoted to uncovering properties and metrics on which these models align, surprisingly little work has explored causes of this similarity. To advance this line of inquiry, this work explores how two possible causal factors -- dataset overlap and task overlap -- influence downstream model similarity. The exploration of dataset overlap is motivated by the reality that large-scale generative AI models are often trained on overlapping datasets of scraped internet data, while the exploration of task overlap seeks to substantiate claims from a recent work, the Platonic Representation Hypothesis, that task similarity may drive model similarity. We evaluate the effects of both factors through a broad set of experiments. We find that both positively correlate with higher representational similarity and that combining them provides the strongest effect. Our code and dataset are published.
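The abstract refers to metrics of representational alignment without naming a specific one; a widely used metric in this literature is linear centered kernel alignment (CKA). The following is a minimal NumPy sketch (an illustration under assumed conventions, not the authors' code) of how such a similarity score could be computed from two models' activations on the same batch of inputs; the array shapes and toy "models" are hypothetical.

import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape (n_samples, n_features)."""
    # Center each feature dimension across the sample axis.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 captures the cross-covariance between the two representations;
    # the denominator normalizes by each representation's self-similarity,
    # so the score lies in [0, 1].
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Toy usage: activations of two hypothetical models on the same 512 inputs.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(512, 128))                    # "model A" features
rotation, _ = np.linalg.qr(rng.normal(size=(128, 128)))  # random orthogonal matrix
feats_b = feats_a @ rotation                             # "model B": a rotated copy of A's features
feats_c = rng.normal(size=(512, 128))                    # unrelated random features
print(linear_cka(feats_a, feats_b))  # ~1.0: linear CKA is invariant to orthogonal transforms
print(linear_cka(feats_a, feats_c))  # much lower: unrelated representations near chance level

In this setup, higher CKA between two models' activations on shared inputs is what "representational similarity" refers to; the paper's experiments vary dataset and task overlap and measure how such similarity scores respond.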

View on arXiv: https://arxiv.org/abs/2505.13899
@article{li2025_2505.13899,
  title={Exploring Causes of Representational Similarity in Machine Learning Models},
  author={Zeyu Michael Li and Hung Anh Vu and Damilola Awofisayo and Emily Wenger},
  journal={arXiv preprint arXiv:2505.13899},
  year={2025}
}