
arXiv:2305.11938 (v2, latest)

XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
19 May 2023
Sebastian Ruder
J. Clark
Alexander Gutkin
Mihir Kale
Min Ma
Massimo Nicosia
Shruti Rijhwani
Parker Riley
J. M. Sarr
Xinyi Wang
John Wieting
Nitish Gupta
Anna Katanova
Christo Kirov
Dana L. Dickinson
Brian Roark
Bidisha Samanta
Connie Tao
David Ifeoluwa Adelani
Vera Axelrod
Isaac Caswell
Colin Cherry
Dan Garrette
R. Ingle
Melvin Johnson
Dmitry Panteleev
Partha P. Talukdar
Abstract

Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.
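To make the per-language evaluation idea concrete, the sketch below is an illustrative (not official) example of how a benchmark like XTREME-UP might score a character-level task such as transliteration: compute character error rate (CER) per language, then macro-average across under-represented languages. The metric choice, language codes, and example strings are assumptions for illustration and are not taken from the paper's released scripts.

```python
# Illustrative sketch only -- not the XTREME-UP scoring code.
# Computes character error rate (CER) per language and macro-averages
# across languages; the example data below is hypothetical.

def edit_distance(ref: str, hyp: str) -> int:
    """Character-level Levenshtein distance between reference and hypothesis."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),   # substitution (or match)
            ))
        prev = curr
    return prev[-1]

def cer(refs: list[str], hyps: list[str]) -> float:
    """Corpus-level CER: total character edits / total reference characters."""
    edits = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    chars = sum(len(r) for r in refs)
    return edits / max(chars, 1)

# Hypothetical per-language (references, hypotheses) pairs.
predictions = {
    "am": (["selam alem"], ["selam aleme"]),
    "sw": (["habari ya dunia"], ["habari za dunia"]),
}

per_language = {lang: cer(refs, hyps) for lang, (refs, hyps) in predictions.items()}
macro_avg = sum(per_language.values()) / len(per_language)
print(per_language, f"macro-averaged CER: {macro_avg:.3f}")
```

Macro-averaging over languages (rather than pooling all examples) keeps each under-represented language's contribution equal regardless of test-set size, which matches the benchmark's emphasis on per-language user utility.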
