LLMR: Knowledge Distillation with a Large Language Model-Induced Reward
International Conference on Language Resources and Evaluation (LREC), 2024
Main: 4 pages · Bibliography: 4 pages · 2 figures · 2 tables
Abstract
Large language models have become increasingly popular and have demonstrated remarkable performance on various natural language processing (NLP) tasks. However, these models are typically computationally expensive and difficult to deploy in resource-constrained environments. In this paper, we propose LLMR, a novel knowledge distillation (KD) method based on a reward function induced from large language models. We conducted experiments on multiple datasets for dialogue generation and summarization. Empirical results demonstrate that our LLMR approach consistently outperforms traditional KD methods across tasks and datasets.
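The abstract does not spell out how the LLM-induced reward enters training, so the following is only a minimal sketch of one plausible reading: a student is trained with a standard token-level distillation term plus a REINFORCE-style term whose scalar reward would come from an LLM scoring the student's sampled outputs. All names here (`llm_reward`, `kd_loss`, `llmr_loss`, the mixing weight `alpha`) are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of reward-weighted knowledge distillation in PyTorch.
# The real LLMR objective may differ; this only illustrates the general idea
# of combining a KD loss with an LLM-assigned sequence-level reward.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level distillation: KL between softened teacher and student."""
    t = temperature
    s_logp = F.log_softmax(student_logits / t, dim=-1)
    t_prob = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(s_logp, t_prob, reduction="batchmean") * (t * t)

def llm_reward(sampled_texts):
    """Placeholder: in practice an LLM would be prompted to score each
    sampled output; here random scores in [0, 1] stand in."""
    return torch.rand(len(sampled_texts))

def llmr_loss(student_logits, teacher_logits, sampled_ids, sampled_texts,
              alpha=0.5):
    # REINFORCE-style term: reward-weighted negative log-likelihood of the
    # student's own samples (one common way to exploit a scalar reward).
    logp = F.log_softmax(student_logits, dim=-1)
    sample_logp = logp.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1).sum(-1)
    reward = llm_reward(sampled_texts)
    rl_term = -(reward * sample_logp).mean()
    return alpha * kd_loss(student_logits, teacher_logits) + (1 - alpha) * rl_term

# Toy shapes: batch of 2, sequence length 5, vocabulary of 11.
student_logits = torch.randn(2, 5, 11, requires_grad=True)
teacher_logits = torch.randn(2, 5, 11)
sampled_ids = torch.randint(0, 11, (2, 5))
loss = llmr_loss(student_logits, teacher_logits, sampled_ids, ["y1", "y2"])
loss.backward()
```

Under this reading, `alpha` would trade off imitating the teacher's distribution against optimizing the LLM's preference signal; the paper's experiments on dialogue generation and summarization would determine how that balance is actually set.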
