Optimizing Backward Policies in GFlowNets via Trajectory Likelihood Maximization

20 October 2024
Timofei Gritsaev, Nikita Morozov, Sergey Samsonov, Daniil Tiapkin
Abstract

Generative Flow Networks (GFlowNets) are a family of generative models that learn to sample objects with probabilities proportional to a given reward function. The key concept behind GFlowNets is the use of two stochastic policies: a forward policy, which incrementally constructs compositional objects, and a backward policy, which sequentially deconstructs them. Recent results show a close relationship between GFlowNet training and entropy-regularized reinforcement learning (RL) problems with a particular reward design. However, this connection applies only in the setting of a fixed backward policy, which can be a significant limitation. As a remedy, we introduce a simple backward policy optimization algorithm that directly maximizes the value function in an entropy-regularized Markov Decision Process (MDP) over intermediate rewards. We provide an extensive experimental evaluation of the proposed approach across various benchmarks in combination with both RL and GFlowNet algorithms, and demonstrate that it achieves faster convergence and better mode discovery in complex environments.
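To make the training idea concrete, below is a minimal PyTorch sketch of the likelihood-maximization step described in the abstract: sample complete trajectories with a forward policy, then update the backward policy by gradient descent on the negative log-likelihood of those trajectories under it. Everything here is an illustrative assumption rather than the authors' implementation: the 2D grid environment, the uniform forward policy used to collect trajectories, and all network sizes and hyperparameters. In the actual method, the forward policy is itself trained with an RL or GFlowNet objective, and the two updates would be combined.

import torch
import torch.nn as nn

# Toy setting (assumption): states are cells of an H x H grid, the forward
# policy increments one coordinate per step, and the backward policy chooses
# which coordinate to decrement when walking a trajectory in reverse.
H = 8

# Hypothetical backward-policy network: state -> logits over the two
# possible backward moves (decrement x, decrement y).
backward_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(backward_net.parameters(), lr=1e-3)

def sample_forward_trajectory():
    # Stand-in data collector: a uniform forward policy with a random
    # stopping rule. In the actual method, trajectories would come from
    # the learned forward policy. Returns the list of visited states.
    s, traj = [0, 0], [(0, 0)]
    while True:
        a = torch.randint(0, 2, ()).item()  # 0: increment x, 1: increment y
        if s[a] < H - 1:
            s[a] += 1
            traj.append(tuple(s))
        if max(s) == H - 1 or torch.rand(()) < 0.2:
            return traj

def tlm_step(batch_size=16):
    # One gradient step maximizing the likelihood of forward-sampled
    # trajectories under the backward policy (i.e., minimizing the NLL).
    loss = torch.zeros(())
    for _ in range(batch_size):
        traj = sample_forward_trajectory()
        # Walk the trajectory in reverse: at each visited state, the
        # backward policy should assign high probability to undoing the
        # forward step that produced it.
        for prev, cur in zip(traj[:-1], traj[1:]):
            action = 0 if cur[0] > prev[0] else 1
            logits = backward_net(torch.tensor(cur, dtype=torch.float))
            # Coordinates already at zero cannot be decremented.
            mask = torch.tensor([cur[0] == 0, cur[1] == 0])
            logits = logits.masked_fill(mask, float("-inf"))
            loss = loss - torch.log_softmax(logits, dim=-1)[action]
    loss = loss / batch_size
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

for step in range(200):
    tlm_step()

Per the abstract, this likelihood maximization can be read as maximizing the backward policy's value function in an entropy-regularized MDP over intermediate rewards; the sketch isolates only that step, leaving out the forward-policy training it would be combined with.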

View on arXiv: https://arxiv.org/abs/2410.15474
@article{gritsaev2025_2410.15474,
  title={Optimizing Backward Policies in GFlowNets via Trajectory Likelihood Maximization},
  author={Timofei Gritsaev and Nikita Morozov and Sergey Samsonov and Daniil Tiapkin},
  journal={arXiv preprint arXiv:2410.15474},
  year={2025}
}