EXECUTE: A Multilingual Benchmark for LLM Token Understanding

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 4 pages · Appendix: 4 pages · Bibliography: 2 pages · 5 figures · 11 tables
Abstract

The CUTE benchmark showed that LLMs struggle with character understanding in English. We extend it to more languages with diverse scripts and writing systems, introducing EXECUTE. Our simplified framework allows easy expansion to any language. Tests across multiple LLMs reveal that the challenges in other languages do not always lie at the character level, as they do in English: some languages show word-level processing issues, while others show no issues at all. We also examine sub-character tasks in Chinese, Japanese, and Korean to assess LLMs' understanding of character components.
