
Leveraging Large Language Models for Building Interpretable Rule-Based Data-to-Text Systems

Abstract

We introduce a simple approach that uses a large language model (LLM) to automatically implement a fully interpretable rule-based data-to-text system in pure Python. Experimental evaluation on the WebNLG dataset shows that the resulting system produces text of better quality (according to the BLEU and BLEURT metrics) than the same LLM prompted to produce outputs directly, and generates fewer hallucinations than a BART language model fine-tuned on the same data. Furthermore, at runtime, the approach generates text in a fraction of the processing time required by neural approaches, using only a single CPU.
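To illustrate what a rule-based data-to-text system of this kind might look like, here is a minimal sketch in pure Python. This is a hypothetical example, not the authors' actual LLM-generated code: WebNLG inputs are RDF-style (subject, property, object) triples, and the sketch verbalises them with hand-written templates keyed on the property name, falling back to a generic pattern for unseen properties. All rule names and templates below are assumptions for illustration.

```python
# Hypothetical sketch of an interpretable rule-based data-to-text system
# for WebNLG-style (subject, property, object) triples. Every generation
# decision is a readable template lookup, so the system's behaviour can
# be inspected and edited directly.

RULES = {
    "capital": "{s} has the capital {o}.",
    "country": "{s} is located in {o}.",
    "birthPlace": "{s} was born in {o}.",
}

def verbalise(triples):
    """Render a list of (subject, property, object) triples as text."""
    sentences = []
    for s, p, o in triples:
        # Unknown properties fall back to a generic verbalisation.
        template = RULES.get(p, "{s} has {p} {o}.")
        sentences.append(template.format(s=s.replace("_", " "),
                                         p=p,
                                         o=o.replace("_", " ")))
    return " ".join(sentences)

print(verbalise([("Poland", "capital", "Warsaw"),
                 ("Adam_Mickiewicz", "birthPlace", "Zaosie")]))
```

Because the whole pipeline is plain Python dictionaries and string templates, it runs on a single CPU with negligible latency, which matches the runtime advantage the abstract reports for the approach.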

@article{warczyński2025_2502.20609,
  title={Leveraging Large Language Models for Building Interpretable Rule-Based Data-to-Text Systems},
  author={Jędrzej Warczyński and Mateusz Lango and Ondrej Dusek},
  journal={arXiv preprint arXiv:2502.20609},
  year={2025}
}