
arXiv:2402.08015
Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets

12 February 2024
Israel Abebe Azime
A. Tonja
Tadesse Destaw Belay
Mitiku Yohannes Fuge
A. Wassie
Eyasu Shiferaw Jada
Yonas Chanie
W. Sewunetie
Seid Muhie Yimam
Abstract

Large language models (LLMs) have attracted considerable attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human languages. However, low-resource languages are left behind due to the unavailability of resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve language model performance for Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLaMA-2-Amharic model on it. The fine-tuned model shows promising results on different NLP tasks. We open-source our dataset creation pipeline, instruction datasets, trained models, and evaluation outputs to promote language-specific studies on these models.
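The abstract describes converting task-specific datasets into instruction fine-tuning data. As a minimal sketch of what such a conversion step might look like, the snippet below wraps a labeled example into an Alpaca-style (instruction, input, output) record and renders it as a training prompt. The field names, prompt template, and the Amharic example are illustrative assumptions, not the authors' exact pipeline or data.

```python
# Hypothetical sketch of one step in an instruction-dataset creation
# pipeline: the record layout and prompt template are assumptions,
# not the paper's actual format.

def to_instruction_record(instruction: str, input_text: str, output: str) -> dict:
    """Wrap a labeled example as an (instruction, input, output) triple,
    the common Alpaca-style layout for instruction fine-tuning data."""
    return {"instruction": instruction, "input": input_text, "output": output}

def format_prompt(record: dict) -> str:
    """Render a record as a single training prompt string."""
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Input:\n{record['input']}\n\n"
        f"### Response:\n{record['output']}"
    )

# Example: turning a hypothetical Amharic sentiment-classification pair
# into an instruction record.
record = to_instruction_record(
    instruction="Classify the sentiment of the following Amharic sentence.",
    input_text="ይህ ፊልም በጣም ጥሩ ነው።",  # "This movie is very good."
    output="positive",
)
prompt = format_prompt(record)
```

Records in this shape can then be tokenized and fed to a standard causal-LM fine-tuning loop; the paper itself covers which task-specific and generative sources were converted.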
