Minimum Probability Flow Learning

25 June 2009
Jascha Narain Sohl-Dickstein, P. Battaglino, M. DeWeese
arXiv:0906.4779
Abstract

Fitting probabilistic models to data is often difficult, due to the general intractability of the partition function and its derivatives. Here we propose a new parameter estimation technique that does not require computing an intractable normalization factor or sampling from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the KL divergence between the data distribution and the distribution produced by running the dynamics for an infinitesimal time. Score matching, minimum velocity learning, and certain forms of contrastive divergence are shown to be special cases of this learning technique. We demonstrate parameter estimation in Ising models, deep belief networks, and a product of Student-t model of natural scenes. In the Ising model case, current state-of-the-art techniques are outperformed by approximately two orders of magnitude in learning time, with comparable error in the recovered parameters. This technique promises to broaden the class of probabilistic models that are practical for use with large, complex data sets.
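To make the abstract's objective concrete: for a model with energies E_i(θ) over states i, the minimum probability flow objective derived in the paper takes the form

    K(θ) = (ε / |D|) · Σ_{j ∈ D} Σ_{i ∉ D} g_ij · exp[(E_j(θ) − E_i(θ)) / 2],

where D is the set of observed data states, g_ij is the connectivity defining the flow dynamics, and ε only rescales K, so it can be dropped during optimization. The sketch below is an illustration rather than code from the paper: it assumes an Ising model over states x ∈ {0,1}^d with energy E(x) = −½ xᵀJx − bᵀx, uses single-bit-flip connectivity, and for simplicity sums over all bit-flip neighbours rather than only non-data states; the function names are invented for this example.

    import numpy as np

    def ising_energy(X, J, b):
        # E(x) = -0.5 * x^T J x - b^T x, evaluated for each row x of X
        return -0.5 * np.einsum('nd,de,ne->n', X, J, X) - X @ b

    def mpf_objective(X, J, b):
        # K = (1/N) * sum over data states x and their single-bit-flip
        # neighbours x' of exp((E(x) - E(x')) / 2); the constant epsilon
        # is dropped since it only rescales the objective
        X = np.asarray(X, dtype=float)
        N, d = X.shape
        E_data = ising_energy(X, J, b)
        K = 0.0
        for k in range(d):
            X_flip = X.copy()
            X_flip[:, k] = 1.0 - X_flip[:, k]  # flip bit k in every sample
            K += np.exp(0.5 * (E_data - ising_energy(X_flip, J, b))).sum()
        return K / N

Because K involves only the observed data and their immediate neighbours, minimizing it with any gradient-based optimizer fits J and b without ever evaluating the partition function or sampling from the model, which is exactly the property the abstract highlights.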
