RigoChat 2: an adapted language model to Spanish using a bounded dataset and reduced hardware

Abstract

Large Language Models (LLMs) have become a key element of modern artificial intelligence, demonstrating the ability to address a wide range of language processing tasks at unprecedented levels of accuracy without the need to collect problem-specific data. However, these versatile models face a significant challenge: both their training and inference processes require substantial computational resources, time, and memory. Consequently, optimizing such models to minimize these requirements is crucial. In this article, we demonstrate that, with minimal resources and in a remarkably short time, it is possible to enhance a state-of-the-art model for a given language task without compromising its overall capabilities, using a relatively small pretrained LLM as a basis. Specifically, we present our use case, RigoChat 2, illustrating how LLMs can be adapted to achieve superior results in Spanish-language tasks.
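The abstract emphasizes adapting an LLM under tight compute and memory budgets. One widely used technique for this kind of low-resource fine-tuning is low-rank adaptation (LoRA); the sketch below is an illustration of the parameter savings it offers, not a description of the paper's actual training setup, and the matrix sizes are assumed values for a typical attention projection.

```python
# Hedged illustration (not from the paper): low-rank adaptation (LoRA)
# freezes a full weight matrix W (d_out x d_in) and trains two small
# factors A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in),
# so only r * (d_out + d_in) parameters are updated per adapted matrix.

def full_params(d_out: int, d_in: int) -> int:
    """Trainable parameters when fine-tuning the full matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters when only the low-rank factors are updated."""
    return r * (d_out + d_in)

# Assumed example: one 4096 x 4096 projection with a rank-8 adapter.
d = 4096
print(full_params(d, d))       # 16777216 parameters for full fine-tuning
print(lora_params(d, d, r=8))  # 65536 parameters (~0.4% of the full count)
```

Savings of this order are what make it feasible to adapt a pretrained model on reduced hardware, since optimizer state and gradients are only kept for the small factors.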

@article{gómez2025_2503.08188,
  title={RigoChat 2: an adapted language model to Spanish using a bounded dataset and reduced hardware},
  author={Gonzalo Santamaría Gómez and Guillem García Subies and Pablo Gutiérrez Ruiz and Mario González Valero and Natàlia Fuertes and Helena Montoro Zamorano and Carmen Muñoz Sanz and Leire Rosado Plaza and Nuria Aldama García and David Betancur Sánchez and Kateryna Sushkova and Marta Guerrero Nieto and Álvaro Barbero Jiménez},
  journal={arXiv preprint arXiv:2503.08188},
  year={2025}
}