Is RL fine-tuning harder than regression? A PDE learning approach for diffusion models

Main: 34 pages
Bibliography: 6 pages
Appendix: 14 pages
Abstract

We study the problem of learning the optimal control policy for fine-tuning a given diffusion process, using general value function approximation. We develop a new class of algorithms by solving a variational inequality problem based on the Hamilton-Jacobi-Bellman (HJB) equations. We prove sharp statistical rates for the learned value function and control policy, depending on the complexity and approximation errors of the function class. In contrast to generic reinforcement learning problems, our approach shows that fine-tuning can be achieved via supervised regression, with faster statistical rate guarantees.
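To make the "fine-tuning as supervised regression" idea concrete, here is a minimal toy sketch (not the paper's algorithm): trajectories of a simple 1-D diffusion are simulated, a terminal reward defines the fine-tuning objective, and a value function is fit at an intermediate time by least-squares regression over a polynomial function class. The diffusion, reward, and feature class are all hypothetical choices for illustration; in an HJB-based scheme the gradient of such a fitted value function would inform the control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained diffusion: dX_t = -X_t dt + dW_t,
# simulated with Euler-Maruyama over many independent paths.
n_paths, n_steps, dt = 2000, 50, 0.02
x = rng.normal(size=n_paths)
paths = [x.copy()]
for _ in range(n_steps):
    x = x - x * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    paths.append(x.copy())

# Hypothetical terminal reward r(X_T) defining the fine-tuning objective.
reward = -(paths[-1] - 1.0) ** 2

# Supervised regression: fit the value function at an intermediate time
# by least squares over a polynomial feature class (the "function class").
xt = paths[25]
features = np.stack([np.ones_like(xt), xt, xt**2, xt**3], axis=1)
coef, *_ = np.linalg.lstsq(features, reward, rcond=None)

def v_hat(x):
    """Fitted value-function approximation V(t, x) at the chosen time."""
    return coef[0] + coef[1] * x + coef[2] * x**2 + coef[3] * x**3
```

The key point the sketch mirrors is that the learning step is a single regression over a chosen function class, rather than a generic RL loop; the statistical rates in the paper are stated in terms of that class's complexity and approximation error.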
