Approximation algorithms for the normalizing constant of Gibbs distributions

12 June 2012
M. Huber
arXiv:1206.2689
Abstract

Consider a family of distributions $\{\pi_{\beta}\}$ where $X \sim \pi_{\beta}$ means that $\mathbb{P}(X = x) = \exp(-\beta H(x))/Z(\beta)$. Here $Z(\beta)$ is the normalizing constant, equal to $\sum_x \exp(-\beta H(x))$. Then $\{\pi_{\beta}\}$ is known as a Gibbs distribution, and $Z(\beta)$ is the partition function. This work presents a new method for approximating the partition function to a specified level of relative accuracy using only $O(\ln(Z(\beta))\ln(\ln(Z(\beta))))$ samples when $Z(0) \geq 1$. This is a sharp improvement over previous, similar approaches, which used a much more complicated algorithm requiring $O(\ln(Z(\beta))\ln(\ln(Z(\beta)))^5)$ samples.
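To make the setup concrete, here is a minimal sketch in Python of the objects the abstract defines: a hypothetical energy function $H$ on a small finite state space, the exact partition function $Z(\beta)$ by brute-force summation, and a naive Monte Carlo estimate that draws from $\pi_0$ (uniform, since $Z(0) = |\mathcal{X}|$) and uses $Z(\beta)/Z(0) = \mathbb{E}_{X \sim \pi_0}[\exp(-\beta H(X))]$. This naive estimator is *not* the paper's algorithm (which achieves the $O(\ln Z \ln\ln Z)$ sample bound); it only illustrates what is being approximated. The state space and energy function are invented for illustration.

```python
import math
import random

# Toy finite state space; H(x) is a hypothetical energy function.
states = list(range(8))

def H(x):
    return (x - 3) ** 2

def Z(beta):
    """Exact partition function Z(beta) = sum_x exp(-beta * H(x))."""
    return sum(math.exp(-beta * H(x)) for x in states)

def estimate_Z(beta, n_samples=100_000, rng=random):
    """Naive Monte Carlo estimate of Z(beta).

    pi_0 is uniform here because H >= 0 gives Z(0) = |states|, so
    Z(beta) = Z(0) * E_{X ~ pi_0}[exp(-beta * H(X))]: average the
    Boltzmann factor over uniform draws and rescale by Z(0). This is
    far cruder than the paper's method, whose sample complexity is
    only O(ln(Z(beta)) ln(ln(Z(beta)))).
    """
    total = sum(math.exp(-beta * H(rng.choice(states)))
                for _ in range(n_samples))
    return len(states) * total / n_samples
```

The estimator is unbiased, but its relative variance grows quickly with $\beta$; controlling relative accuracy efficiently for large $Z(\beta)$ is exactly the problem the paper addresses.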
