Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages

5 October 2020
W. Nekoto
Vukosi Marivate
T. Matsila
Timi E. Fasubaa
T. Kolawole
T. Fagbohungbe
S. Akinola
Shamsuddeen Hassan Muhammad
Salomon Kabongo KABENAMUALU
Salomey Osei
Sackey Freshia
Andre Niyongabo Rubungo
Ricky Macharm
Perez Ogayo
Orevaoghene Ahia
Musie Meressa
Mofetoluwa Adeyemi
Masabata Mokgesi-Selinga
Lawrence Okegbemi
Laura Martinus
Kolawole Tajudeen
Kevin Degila
Kelechi Ogueji
Kathleen Siminyu
Julia Kreutzer
Jason Webster
Jamiil Toure Ali
Jade Z. Abbott
Iroro Orife
Ignatius M Ezeani
Idris Abdulkabir Dangana
Herman Kamper
Hady ElSahar
Goodness Duru
Ghollah Kioko
Espoir Murhabazi
Elan Van Biljon
Daniel Whitenack
Christopher Onyefuluchi
Chris C. Emezue
Bonaventure F. P. Dossou
Blessing K. Sibanda
B. Bassey
A. Olabiyi
A. Ramkilowan
A. Oktem
Adewale Akinfaderin
Abdallah Bashir
Abstract

Research in NLP lacks geographic diversity, and the question of how NLP can be scaled to low-resourced languages has not yet been adequately solved. "Low-resourced"-ness is a complex problem that goes beyond data availability and reflects systemic problems in society. In this paper, we focus on the task of Machine Translation (MT), which plays a crucial role in information accessibility and communication worldwide. Despite immense improvements in MT over the past decade, MT remains centered on a few high-resourced languages. As MT researchers cannot solve the problem of low-resourcedness alone, we propose participatory research as a means to involve all the agents required in the MT development process. We demonstrate the feasibility and scalability of participatory research with a case study on MT for African languages. Its implementation yields a collection of novel translation datasets and MT benchmarks for over 30 languages, with human evaluations for a third of them, and enables participants without formal training to make a unique scientific contribution. Benchmarks, models, data, code, and evaluation results are released at https://github.com/masakhane-io/masakhane-mt.
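As an illustration of how benchmarks of this kind are typically scored (not the paper's exact pipeline), the following is a minimal sketch of computing a corpus-level BLEU score for a system's detokenized output against a benchmark reference using the sacrebleu library. The file names are placeholders, not paths from the masakhane-mt repository.

    # Minimal sketch: scoring MT output against a benchmark test set with
    # sacrebleu. File names are placeholders, not paths from masakhane-mt.
    import sacrebleu

    # One detokenized sentence per line in both files, in the same order.
    with open("hypotheses.txt", encoding="utf-8") as f:
        hypotheses = [line.strip() for line in f]
    with open("references.txt", encoding="utf-8") as f:
        references = [line.strip() for line in f]

    # corpus_bleu takes the system output and a list of reference streams.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"BLEU: {bleu.score:.2f}")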

View on arXiv: https://arxiv.org/abs/2010.02353