
Finetuning LLMs for Comparative Assessment Tasks

International Conference on Computational Linguistics (COLING), 2024
Main: 4 pages · Appendix: 2 pages · Bibliography: 2 pages · 5 figures · 7 tables
Abstract

Automated assessment in natural language generation is a challenging task. Instruction-tuned large language models (LLMs) have shown promise in reference-free evaluation, particularly through comparative assessment. However, the quadratic computational complexity of pairwise comparisons limits the scalability of this approach. To address this, prior work has explored efficient comparative assessment by applying comparative strategies over zero-shot LLM probabilities. We propose a framework for finetuning LLMs for comparative assessment, aligning the model's output with a target distribution of comparative probabilities. By training on soft probabilities, our approach improves over the state of the art while maintaining strong performance when only an efficient subset of comparisons is used.
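The abstract describes finetuning an LLM so that its pairwise preference output matches soft target probabilities. Below is a minimal illustrative sketch of what such an objective could look like, assuming a PyTorch/Transformers setup; the prompt template, the "A"/"B" answer tokens, the placeholder checkpoint, and the example soft target are all assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' released code): finetune an LLM so that
# its probability of preferring candidate A over candidate B matches a soft
# target probability, via a soft binary cross-entropy loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def comparative_prob(context: str, cand_a: str, cand_b: str) -> torch.Tensor:
    """Model's probability that candidate A is better than candidate B."""
    prompt = (
        f"Context: {context}\n"
        f"Response A: {cand_a}\nResponse B: {cand_b}\n"
        "Which response is better, A or B? Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits[0, -1]  # next-token logits at the answer position
    id_a = tokenizer(" A", add_special_tokens=False).input_ids[-1]
    id_b = tokenizer(" B", add_special_tokens=False).input_ids[-1]
    # Normalise over the two answer tokens only.
    pair = torch.stack([logits[id_a], logits[id_b]])
    return torch.softmax(pair, dim=0)[0]

def soft_comparison_loss(p_model: torch.Tensor, p_target: float) -> torch.Tensor:
    """Soft BCE between the model's comparative probability and the target."""
    t = torch.tensor(p_target)
    return -(t * torch.log(p_model + 1e-9) + (1 - t) * torch.log(1 - p_model + 1e-9))

# One training step on a single comparison with a hypothetical soft target of 0.7.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
p = comparative_prob("Summarise the article.", "summary A ...", "summary B ...")
loss = soft_comparison_loss(p, 0.7)
loss.backward()
optimizer.step()
```

Training on soft rather than hard (0/1) targets lets the loss carry information about how confidently one candidate should be preferred, which is the intuition behind aligning the model with a distribution of comparative probabilities rather than binary labels.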
