SEA-BED: Southeast Asia Embedding Benchmark

17 August 2025
Wuttikorn Ponwitayarat
Raymond Ng
Jann Railey Montalan
Thura Aung
Jian Gang Ngui
Yosephine Susanto
William-Chandra Tjhi
Panuthep Tasawong
Erik Cambria
Ekapol Chuangsuwanich
Sarana Nutanong
Peerat Limkonchotiwat
arXiv:2508.12243 (abs, v2 latest) · PDF · HTML · GitHub (17★)
Main: 19 pages · Appendix: 7 pages · Bibliography: 11 pages · 17 figures · 13 tables
Abstract

Sentence embeddings are essential for NLP tasks such as semantic search, re-ranking, and textual similarity. Although multilingual benchmarks like MMTEB broaden coverage, Southeast Asia (SEA) datasets are scarce and often machine-translated, missing native linguistic properties. With nearly 700 million speakers, the SEA region lacks a region-specific embedding benchmark. We introduce SEA-BED, the first large-scale SEA embedding benchmark, with 169 datasets across 9 tasks and 10 languages, 71% of which are formulated by humans rather than produced by machine generation or translation. We address three research questions: (1) which SEA languages and tasks are challenging, (2) whether SEA languages show unique performance gaps globally, and (3) how human versus machine translation affects evaluation. We evaluate 17 embedding models across six studies, analyzing task and language challenges, cross-benchmark comparisons, and translation trade-offs. Results show sharp ranking shifts, inconsistent model performance among SEA languages, and the importance of human-curated datasets for low-resource languages like Burmese.
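
To make the evaluation setup concrete, here is a minimal sketch of the kind of embedding-based retrieval task that benchmarks like SEA-BED score: encode a query and candidate sentences, then rank candidates by cosine similarity. It uses the sentence-transformers library; the model checkpoint and example sentences are illustrative assumptions, not part of SEA-BED, whose actual datasets and evaluation code live in the linked GitHub repository.

```python
# Minimal sketch of an embedding-based retrieval evaluation step.
# The checkpoint and sentences below are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

# Any multilingual embedding model could stand in here; this choice is an assumption.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "Where can I find good street food in Bangkok?"
candidates = [
    "Street vendors near Yaowarat sell excellent Thai dishes.",
    "The quarterly earnings report was released on Monday.",
]

# Encode and L2-normalize, so the dot product equals cosine similarity.
embeddings = model.encode([query] + candidates, normalize_embeddings=True)
scores = embeddings[1:] @ embeddings[0]

# Rank candidates by similarity to the query, as a semantic-search task would.
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"{rank}. ({scores[idx]:.3f}) {candidates[idx]}")
```

A benchmark run repeats this ranking over each dataset's query-candidate pairs and aggregates a retrieval metric per model, which is how the paper's cross-model and cross-language comparisons are produced.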
