
Optimizing Language Models for Inference Time Objectives using Reinforcement Learning

Abstract

In this work, we investigate the merits of explicitly optimizing for inference time algorithmic performance during model training. We show how optimizing for inference time performance can improve overall model efficacy. We consider generic inference time objectives with k samples, with a focus on pass@k and majority voting as two main applications. With language model training on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@k objectives compared to the baseline method.
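For reference, the two inference time objectives named in the abstract can be evaluated as in the short Python sketch below. This is illustrative only and not the paper's training procedure: it shows the standard unbiased pass@k estimator over n samples with c correct, and a simple plurality vote over k sampled answers; the function names and the toy numbers are hypothetical.

import numpy as np
from collections import Counter

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimate: probability that at least one of k
    # completions drawn without replacement from n samples, of which
    # c are correct, solves the task.
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

def majority_vote(answers: list[str]) -> str:
    # Plurality vote over k sampled answers (ties go to the answer
    # seen first).
    return Counter(answers).most_common(1)[0][0]

# Toy usage: 16 samples, 3 of them correct.
print(round(pass_at_k(n=16, c=3, k=8), 3))   # 0.9
print(majority_vote(["42", "41", "42", "7"]))  # "42"

Note that both objectives depend jointly on the set of k samples rather than on each completion in isolation, which is what distinguishes them from a standard per-sample reward.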

@article{tang2025_2503.19595,
  title={Optimizing Language Models for Inference Time Objectives using Reinforcement Learning},
  author={Yunhao Tang and Kunhao Zheng and Gabriel Synnaeve and Rémi Munos},
  journal={arXiv preprint arXiv:2503.19595},
  year={2025}
}