
Learning Ising Models under Hard Constraints using One Sample

Main: 12 pages · 1 figure · Bibliography: 4 pages · Appendix: 18 pages
Abstract

We consider the problem of estimating the inverse temperature parameter $\beta$ of an $n$-dimensional truncated Ising model using a single sample. Given a graph $G = (V,E)$ with $n$ vertices, a truncated Ising model is a probability distribution over the $n$-dimensional hypercube $\{-1,1\}^n$ in which each configuration $\sigma$ is constrained to lie in a truncation set $S \subseteq \{-1,1\}^n$ and has probability $\Pr(\sigma) \propto \exp(\beta\,\sigma^\top A\,\sigma)$, where $A$ is the adjacency matrix of $G$. We adopt the recent setting of [Galanis et al. SODA '24], in which the truncation set $S$ can be expressed as the set of satisfying assignments of a $k$-SAT formula. Given a single sample $\sigma$ from a truncated Ising model with inverse temperature $\beta^*$, underlying graph $G$ of bounded degree $\Delta$, and $S$ expressed as the set of satisfying assignments of a $k$-SAT formula, we design, in nearly $O(n)$ time, an estimator $\hat{\beta}$ that is $O(\Delta^3/\sqrt{n})$-consistent with the true parameter $\beta^*$ for $k \gtrsim \log(d^2 k)\,\Delta^3$. Our estimator is based on maximization of the pseudolikelihood, a notion that has received extensive analysis for various probabilistic models, both without truncation [Chatterjee, Annals of Statistics '07] and with it [Galanis et al. SODA '24]. Our approach generalizes recent techniques from [Daskalakis et al. STOC '19] and [Galanis et al. SODA '24] to confront the more challenging setting of the truncated Ising model.
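To make the pseudolikelihood idea concrete, here is a minimal sketch of maximum pseudolikelihood estimation of $\beta$ from one sample of an *untruncated* Ising model $\Pr(\sigma) \propto \exp(\beta\,\sigma^\top A\,\sigma)$ (the truncation to $S$ and the paper's specific estimator are not reproduced here; the function names, the bisection search, and the Glauber sampler are illustrative choices, not the authors' method). The log-pseudolikelihood $\sum_i \log \Pr(\sigma_i \mid \sigma_{-i}) = \sum_i \big[2\beta\,\sigma_i m_i - \log(2\cosh(2\beta m_i))\big]$, with local fields $m_i = \sum_j A_{ij}\sigma_j$, is concave in $\beta$, so its maximizer is found by bisection on the monotone gradient:

```python
import numpy as np

def log_pl_grad(beta, A, sigma):
    """Gradient in beta of the log-pseudolikelihood
    sum_i [2*beta*sigma_i*m_i - log(2*cosh(2*beta*m_i))]."""
    m = A @ sigma  # local fields m_i = sum_j A_ij sigma_j
    return float(np.sum(2.0 * m * (sigma - np.tanh(2.0 * beta * m))))

def mple(A, sigma, lo=0.0, hi=2.0, iters=60):
    """Maximum pseudolikelihood estimate of beta by bisection:
    the log-PL is concave, so its gradient is decreasing in beta."""
    if log_pl_grad(lo, A, sigma) <= 0.0:
        return lo
    if log_pl_grad(hi, A, sigma) >= 0.0:
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if log_pl_grad(mid, A, sigma) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def glauber_sample(A, beta, steps, rng):
    """Approximate single sample via Glauber dynamics (no truncation).
    Conditional: Pr(sigma_i = +1 | rest) = 1 / (1 + exp(-4*beta*m_i))."""
    n = A.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        m_i = A[i] @ sigma - A[i, i] * sigma[i]
        sigma[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-4.0 * beta * m_i)) else -1.0
    return sigma
```

For example, on a cycle graph one would build `A`, draw `sigma = glauber_sample(A, 0.2, 20000, rng)`, and call `mple(A, sigma)`; each gradient evaluation costs one sparse matrix-vector product, which for bounded-degree graphs is the $O(n)$-per-step flavor of computation the abstract's runtime refers to.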
