Improving Constrained Generation in Language Models via Self-Distilled Twisted Sequential Monte Carlo

Recent work has framed constrained text generation with autoregressive language models as a probabilistic inference problem. Among these approaches, Zhao et al. (2024) introduced a promising method based on twisted Sequential Monte Carlo, which incorporates learned twist functions and twist-induced proposals to guide the generation process. However, in constrained generation settings where the target distribution concentrates on outputs that are unlikely under the base model, learning becomes challenging due to sparse and uninformative reward signals. We show that iteratively refining the base model through self-distillation alleviates this issue by making the model progressively more aligned with the target, leading to substantial gains in generation quality.
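
To make the twisted SMC mechanism concrete, below is a minimal Python sketch of decoding with a twist-induced proposal and multinomial resampling. The interfaces (`twisted_smc`, `base_logprobs`, `twist`, `vocab`) are illustrative assumptions, not the authors' implementation, and the self-distillation loop that iteratively refines the base model is omitted.

```python
import math
import random

def twisted_smc(base_logprobs, twist, vocab, max_len, num_particles=8):
    """Minimal sketch of twisted SMC decoding (hypothetical interfaces).

    base_logprobs(prefix) -> {token: log p(token | prefix)} under the base LM
    twist(prefix)         -> positive twist value; twist([]) should return 1.0
    """
    particles = [[] for _ in range(num_particles)]
    log_weights = [0.0] * num_particles

    for _ in range(max_len):
        for i in range(num_particles):
            prefix = particles[i]
            logp = base_logprobs(prefix)
            # Twist-induced proposal: proportional to
            # p(token | prefix) * twist(prefix + [token]).
            scores = [math.exp(logp[t]) * twist(prefix + [t]) for t in vocab]
            z = sum(scores)
            token = random.choices(vocab, weights=scores, k=1)[0]
            # Incremental importance weight: proposal normalizer
            # divided by the previous twist value (telescoping product).
            log_weights[i] += math.log(z) - math.log(twist(prefix))
            particles[i] = prefix + [token]

        # Multinomial resampling when the effective sample size collapses.
        m = max(log_weights)
        w = [math.exp(lw - m) for lw in log_weights]
        ess = sum(w) ** 2 / sum(x * x for x in w)
        if ess < num_particles / 2:
            idx = random.choices(range(num_particles), weights=w, k=num_particles)
            particles = [list(particles[j]) for j in idx]
            log_weights = [0.0] * num_particles

    return particles, log_weights
```

In this sketch the twist functions play the role of learned value-like estimates of future constraint satisfaction, so particles whose prefixes are unlikely to satisfy the constraint are down-weighted and eventually resampled away.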
@article{kim2025_2507.02315,
  title   = {Improving Constrained Generation in Language Models via Self-Distilled Twisted Sequential Monte Carlo},
  author  = {Sooyeon Kim and Giung Nam and Juho Lee},
  journal = {arXiv preprint arXiv:2507.02315},
  year    = {2025}
}