EthioLLM: Multilingual Large Language Models for Ethiopian Languages with Task Evaluation

20 March 2024

A. Tonja, Israel Abebe Azime, Tadesse Destaw Belay, M. Yigezu, Moges Ahmed Mehamed, A. Ayele, Ebrahim Chekol Jibril, Michael Melese Woldeyohannis, Olga Kolesnikova, Philipp Slusallek, Dietrich Klakow, Shengwu Xiong, Seid Muhie Yimam
arXiv:2403.13737
Abstract

Large language models (LLMs) have gained popularity recently due to their outstanding performance in various downstream Natural Language Processing (NLP) tasks. However, low-resource languages still lag behind current state-of-the-art (SOTA) developments in NLP because the resources needed to train LLMs for them are insufficient. Ethiopian languages exhibit remarkable linguistic diversity, encompassing a wide array of scripts, and are imbued with profound religious and cultural significance. This paper introduces EthioLLM -- multilingual large language models for five Ethiopian languages (Amharic, Ge'ez, Afan Oromo, Somali, and Tigrinya) and English -- and Ethiobenchmark, a new benchmark dataset for various downstream NLP tasks. We evaluate the performance of these models across five downstream NLP tasks. We open-source our multilingual language models, the new benchmark datasets, and the task-specific fine-tuned models, and discuss their performance. Our datasets and models are available at https://huggingface.co/EthioNLP.
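Since the released checkpoints are hosted under the EthioNLP organization on Hugging Face, a minimal sketch (not taken from the paper's code) of how an encoder-style EthioLLM checkpoint might be fine-tuned for one of the downstream classification tasks with the transformers library is shown below. The model identifier and the label count are illustrative assumptions; check https://huggingface.co/EthioNLP for the published names.

# Minimal sketch, assuming an encoder-style (masked-LM) EthioLLM checkpoint.
# "EthioNLP/EthioLLM-l-70K" and num_labels=4 are hypothetical placeholders.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "EthioNLP/EthioLLM-l-70K"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=4  # e.g. a 4-class news-topic task (illustrative)
)

# Tokenize an example Amharic sentence and run a forward pass.
inputs = tokenizer("ሰላም ዓለም", return_tensors="pt", truncation=True)
logits = model(**inputs).logits
print(logits.shape)  # -> torch.Size([1, 4])

From here, the usual transformers Trainer or a plain PyTorch training loop could be attached to one of the Ethiobenchmark tasks; the sketch only demonstrates loading and a single forward pass.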
