Seeing What Tastes Good: Revisiting Multimodal Distributional Semantics in the Billion Parameter Era

4 June 2025
Dan Oneaţă
Desmond Elliott
Stella Frank
Links: arXiv (abs) · PDF · HTML
Main: 9 pages · Figures: 10 · Tables: 6 · Appendix: 6 pages · Bibliography: 3 pages
Abstract

Human learning and conceptual representation are grounded in sensorimotor experience, in contrast to state-of-the-art foundation models. In this paper, we investigate how well such large-scale models, trained on vast quantities of data, represent the semantic feature norms of concrete object concepts, e.g. a ROSE is red, smells sweet, and is a flower. More specifically, we use probing tasks to test which properties of objects these models are aware of. We evaluate image encoders trained on image data alone, as well as multimodally trained image encoders and language-only models, on predicting an extended, denser version of the classic McRae norms and the newer Binder dataset of attribute ratings. We find that multimodal image encoders slightly outperform language-only approaches, and that image-only encoders perform comparably to the language models, even on non-visual attributes that are classified as "encyclopedic" or "function". These results offer new insights into what can be learned from pure unimodal learning, and the complementarity of the modalities.
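To make the probing setup concrete, the sketch below shows a minimal linear attribute probe of the kind the abstract describes: a simple classifier trained on frozen encoder embeddings to predict a binary McRae-style feature (e.g. "is red"). The array shapes, feature names, and random placeholder data are illustrative assumptions, not the paper's data or exact experimental protocol.

# Minimal sketch of an attribute-probing setup (illustrative only).
# Assumes frozen concept embeddings X (one row per concept, taken from an
# image or text encoder) and binary McRae-style labels y for one feature.
# All values below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_concepts, dim = 500, 768                      # e.g. 500 concepts, 768-d features
X = rng.normal(size=(n_concepts, dim))          # placeholder frozen embeddings
y = rng.integers(0, 2, size=n_concepts)         # placeholder labels for one attribute

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A linear probe: if a simple classifier can predict the attribute from the
# frozen representation, the representation encodes that property.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)
print("F1 on held-out concepts:", f1_score(y_te, probe.predict(X_te)))

In practice one such probe would be trained per attribute (and a regression variant for the continuous Binder ratings), with performance compared across image-only, multimodal, and language-only encoders.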

@article{oneata2025_2506.03994,
  title={Seeing What Tastes Good: Revisiting Multimodal Distributional Semantics in the Billion Parameter Era},
  author={Dan Oneata and Desmond Elliott and Stella Frank},
  journal={arXiv preprint arXiv:2506.03994},
  year={2025}
}