Adaptive teachers for amortized samplers

Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable. When sampling is implemented as a sequential decision-making process, reinforcement learning (RL) methods, such as generative flow networks, can be used to train the sampling policy. Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration. We propose to use an adaptive training distribution (the teacher) to guide the training of the primary amortized sampler (the student). The teacher, an auxiliary behavior model, is trained to sample high-loss regions of the student and can generalize across unexplored modes, thereby enhancing mode coverage by providing an efficient training curriculum. We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge, two diffusion-based sampling tasks, and four biochemical discovery tasks, demonstrating its ability to improve sample efficiency and mode coverage. Source code is available at this https URL.
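The abstract describes a co-training scheme in which the teacher acts as the off-policy behavior model and is rewarded for finding samples on which the student's loss is high. Below is a minimal sketch of one such training step, assuming hypothetical `sample()` and `loss()` interfaces for both samplers; the paper's actual objectives (e.g., the specific GFlowNet losses and the teacher's reward shaping) may differ.

```python
import torch

def training_step(student, teacher, student_opt, teacher_opt, batch_size=64):
    """One adaptive teacher-student update (illustrative sketch, not the paper's exact algorithm)."""
    # 1) The teacher serves as the behavior policy: it proposes the
    #    trajectories/candidates on which the student is trained off-policy.
    teacher_batch = teacher.sample(batch_size)

    # 2) Update the student on the teacher-proposed batch.
    student_loss_per_sample = student.loss(teacher_batch)  # hypothetical: per-sample losses, shape [batch_size]
    student_loss = student_loss_per_sample.mean()
    student_opt.zero_grad()
    student_loss.backward()
    student_opt.step()

    # 3) Update the teacher to sample high-loss regions of the student:
    #    the student's per-sample loss (detached) acts as the teacher's reward,
    #    yielding an adaptive curriculum that targets poorly fit modes.
    teacher_reward = student_loss_per_sample.detach()
    teacher_loss = teacher.loss(teacher_batch, reward=teacher_reward)  # hypothetical signature
    teacher_opt.zero_grad()
    teacher_loss.backward()
    teacher_opt.step()

    return student_loss.item(), teacher_loss.item()
```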
@article{kim2025_2410.01432,
  title   = {Adaptive teachers for amortized samplers},
  author  = {Minsu Kim and Sanghyeok Choi and Taeyoung Yun and Emmanuel Bengio and Leo Feng and Jarrid Rector-Brooks and Sungsoo Ahn and Jinkyoo Park and Nikolay Malkin and Yoshua Bengio},
  journal = {arXiv preprint arXiv:2410.01432},
  year    = {2025}
}