
Mapping 1,000+ Language Models via the Log-Likelihood Vector

22 February 2025
Momose Oyama
Hiroaki Yamagiwa
Yusuke Takase
Hidetoshi Shimodaira
arXiv:2502.16173
Main: 16 pages · Appendix: 34 pages · Bibliography: 6 pages · 22 figures · 10 tables
Abstract

To compare autoregressive language models at scale, we propose using log-likelihood vectors computed on a predefined text set as model features. This approach has a solid theoretical basis: when treated as model coordinates, their squared Euclidean distance approximates the Kullback-Leibler divergence of text-generation probabilities. Our method is highly scalable, with computational cost growing linearly in both the number of models and text samples, and is easy to implement as the required features are derived from cross-entropy loss. Applying this method to over 1,000 language models, we constructed a "model map," providing a new perspective on large-scale model analysis.
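
The pipeline the abstract describes is concrete enough to sketch. Below is a minimal illustration, assuming the Hugging Face transformers API; the model names and the text set are illustrative placeholders, and any normalization or centering the authors apply before comparing coordinates is not shown here. This is a sketch of the idea, not the authors' implementation.

# Minimal sketch of the log-likelihood-vector idea (placeholder models/texts).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Predefined text set (tiny placeholder; the paper uses a fixed, larger set).
texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Language models assign probabilities to text.",
    "Paris is the capital of France.",
]

def log_likelihood_vector(model_name: str, texts: list[str]) -> torch.Tensor:
    """Return the vector of total log-likelihoods of `texts` under one model."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    vec = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt").input_ids
            # `loss` is the mean per-token cross-entropy over the shifted
            # targets, so total log-likelihood = -loss * number of targets.
            loss = model(ids, labels=ids).loss
            vec.append(-loss.item() * (ids.shape[1] - 1))
    return torch.tensor(vec)

# One vector per model; each row serves as that model's coordinates.
models = ["gpt2", "distilgpt2"]  # placeholder model names
V = torch.stack([log_likelihood_vector(m, texts) for m in models])

# Per the abstract, the squared Euclidean distance between these coordinates
# approximates the KL divergence of the models' text-generation distributions.
D = torch.cdist(V, V).pow(2)
print(D)

Note that the features come directly from the cross-entropy loss already computed during evaluation, which is why the cost scales linearly in the number of models and texts: each model is run once over the fixed text set, independently of all other models.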

BibTeX
@article{oyama2025_2502.16173,
  title={Mapping 1,000+ Language Models via the Log-Likelihood Vector},
  author={Momose Oyama and Hiroaki Yamagiwa and Yusuke Takase and Hidetoshi Shimodaira},
  journal={arXiv preprint arXiv:2502.16173},
  year={2025}
}