BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling

This paper concerns the problem of aligning samples from large language models to human preferences using best-of-n sampling, where we draw n samples, rank them, and return the best one. We consider two fundamental problems. First: what is the relationship between best-of-n and approaches to alignment that train LLMs to output samples with a high expected reward (e.g., RLHF or DPO)? To answer this, we embed both the best-of-n distribution and the sampling distributions learned by alignment procedures in a common class of tiltings of the base LLM distribution. We then show that, within this class, best-of-n is essentially optimal in terms of the trade-off between win rate against the base model vs. KL distance from the base model. That is, best-of-n is the best choice of alignment distribution if the goal is to maximize win rate. However, best-of-n requires drawing n samples for each inference, a substantial cost. To avoid this, the second problem we consider is how to fine-tune an LLM to mimic the best-of-n sampling distribution. We derive BoNBoN Alignment to achieve this by exploiting the special structure of the best-of-n distribution. Experiments show that BoNBoN alignment yields substantial improvements in producing a model that is preferred to the base policy while minimally affecting off-target aspects.
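
To make the sampling scheme described above concrete, here is a minimal sketch of best-of-n sampling, assuming hypothetical helpers `sample_from_base` (one completion drawn from the base LLM) and `reward` (a scalar preference score, e.g. from a reward model); it is an illustration, not the paper's implementation.

```python
from typing import Callable

def best_of_n(
    prompt: str,
    n: int,
    sample_from_base: Callable[[str], str],  # hypothetical: draws one completion from the base LLM
    reward: Callable[[str, str], float],     # hypothetical: preference score for (prompt, completion)
) -> str:
    """Draw n samples from the base model and return the highest-scoring one."""
    candidates = [sample_from_base(prompt) for _ in range(n)]
    # Rank the n candidates by reward and keep the best.
    return max(candidates, key=lambda completion: reward(prompt, completion))
```

Note the inference cost the abstract refers to: every call makes n forward passes through the base model (plus n reward evaluations), which is what motivates fine-tuning a model to mimic this distribution directly.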
View on arXiv