Is Translation All You Need? A Study on Solving Multilingual Tasks with Large Language Models

15 March 2024
Chaoqun Liu
Wenxuan Zhang
Yiran Zhao
Anh Tuan Luu
Lidong Bing
Abstract

Large language models (LLMs) have demonstrated multilingual capabilities, yet they are mostly English-centric due to the imbalanced training corpora. While prior works have leveraged this bias to enhance multilingual performance through translation, they have been largely limited to natural language processing (NLP) tasks. In this work, we extend the evaluation to real-world user queries and non-English-centric LLMs, offering a broader examination of multilingual performance. Our key contribution lies in demonstrating that while translation into English can boost the performance of English-centric LLMs on NLP tasks, it is not universally optimal. For culture-related tasks that need deep language understanding, prompting in the native language proves more effective as it better captures the nuances of culture and language. Our experiments expose varied behaviors across LLMs and tasks in the multilingual context, underscoring the need for a more comprehensive approach to multilingual evaluation. Therefore, we call for greater efforts in developing and evaluating LLMs that go beyond English-centric paradigms.
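A minimal sketch (not from the paper) contrasting the two prompting strategies the abstract compares: translating the query into English before prompting an English-centric LLM versus prompting directly in the native language. The functions translate_to_english and call_llm are hypothetical placeholders for whatever machine-translation system and model API are actually used.

def translate_to_english(text: str) -> str:
    """Hypothetical machine-translation step (an MT model, or the LLM itself)."""
    raise NotImplementedError("plug in a translation backend here")

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model client."""
    raise NotImplementedError("plug in an LLM backend here")

def answer_translate_then_prompt(query: str) -> str:
    # Strategy 1: translate the non-English query into English, then prompt
    # the (English-centric) LLM on the translated text.
    english_query = translate_to_english(query)
    return call_llm(english_query)

def answer_native_prompt(query: str) -> str:
    # Strategy 2: prompt the LLM directly in the original language, which the
    # paper finds more effective for culture-related tasks.
    return call_llm(query)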

@article{liu2025_2403.10258,
  title={Is Translation All You Need? A Study on Solving Multilingual Tasks with Large Language Models},
  author={Chaoqun Liu and Wenxuan Zhang and Yiran Zhao and Anh Tuan Luu and Lidong Bing},
  journal={arXiv preprint arXiv:2403.10258},
  year={2025}
}