ResearchTrend.AI
Semantics at an Angle: When Cosine Similarity Works Until It Doesn't

22 April 2025
Kisung You
Abstract

Cosine similarity has become a standard metric for comparing embeddings in modern machine learning. Its scale-invariance and alignment with model training objectives have contributed to its widespread adoption. However, recent studies have revealed important limitations, particularly when embedding norms carry meaningful semantic information. This informal article offers a reflective and selective examination of the evolution, strengths, and limitations of cosine similarity. We highlight why it performs well in many settings, where it tends to break down, and how emerging alternatives are beginning to address its blind spots. We hope to offer a mix of conceptual clarity and practical perspective, especially for quantitative scientists who think about embeddings not just as vectors, but as geometric and philosophical objects.
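The scale-invariance the abstract mentions cuts both ways: the same property that makes cosine similarity robust to magnitude differences also makes it blind to any semantics encoded in embedding norms. A minimal sketch (illustrative only, not code from the paper):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]     # same direction, twice the norm
w = [10.0, 20.0, 30.0]  # same direction, ten times the norm

# All three pairs score ~1.0: any information carried by the
# differing norms is invisible to the metric.
print(cosine_similarity(u, v))
print(cosine_similarity(u, w))
```

If a model uses vector length to signal, say, confidence or word frequency, that signal is discarded the moment cosine similarity is applied.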

View on arXiv
@article{you2025_2504.16318,
  title={Semantics at an Angle: When Cosine Similarity Works Until It Doesn't},
  author={Kisung You},
  journal={arXiv preprint arXiv:2504.16318},
  year={2025}
}