
One-Shot Learning for k-SAT

International Colloquium on Automata, Languages and Programming (ICALP), 2025
Main: 15 pages
Bibliography: 4 pages
Abstract

Consider a k-SAT formula Φ in which every variable appears at most d times. Let σ be a satisfying assignment, sampled with probability proportional to e^{β·m(σ)}, where m(σ) is the number of true variables and β is a real parameter. Given Φ and σ, can we efficiently learn β?

This problem falls into a recent line of work on single-sample ("one-shot") learning of Markov random fields. Our k-SAT setting was recently studied by Galanis, Kalavasis, and Kandiros (SODA 2024). They showed that single-sample learning is possible when roughly d ≤ 2^{k/6.45} and impossible when d ≥ (k+1)·2^{k−1}. Beyond the gap in d, their impossibility result left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold for bounded-degree k-SAT formulas.

Our main contribution is to answer this question negatively. We show that one-shot learning for k-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees d as low as k² when β is sufficiently large, and bootstrap this to small values of β when d scales exponentially with k, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm, obtaining significantly stronger bounds on d in terms of β. For the uniform case β → 0, we show that learning is possible under the condition d ≲ 2^{k/2}. Up to constant factors, this reaches the sampling threshold: it is known that sampling a uniformly distributed satisfying assignment is NP-hard for d ≳ 2^{k/2}.
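To make the learning problem concrete, here is a minimal sketch (not the paper's algorithm) of one-shot estimation of β on a toy instance: enumerate the satisfying assignments of a small hand-picked 3-SAT formula, treat one of them as the single observed sample, and maximize the likelihood e^{β·m(σ)}/Z(β) over β by grid search. The example formula, the grid range, and the brute-force enumeration of Z(β) are all illustrative assumptions; the paper's algorithm works without enumerating satisfying assignments, which is intractable in general.

```python
import math
from itertools import product

# Hypothetical toy 3-SAT formula over variables x0..x3.
# Each clause is a tuple of literals (variable index, required truth value).
clauses = [((0, True), (1, False), (2, True)),
           ((1, True), (2, True), (3, False)),
           ((0, False), (2, False), (3, True))]
n = 4

def satisfies(sigma, clauses):
    # A clause is satisfied if at least one of its literals matches sigma.
    return all(any(sigma[i] == want for i, want in cl) for cl in clauses)

# Brute-force enumeration of all satisfying assignments (toy sizes only).
sat = [s for s in product([False, True], repeat=n) if satisfies(s, clauses)]

def log_likelihood(beta, sigma, sat):
    # log of e^{beta * m(sigma)} / Z(beta), where m counts true variables
    # and Z(beta) sums e^{beta * m(s)} over all satisfying assignments s.
    m = sum(sigma)
    log_z = math.log(sum(math.exp(beta * sum(s)) for s in sat))
    return beta * m - log_z

# Pretend a single assignment with two true variables was observed.
observed = next(s for s in sat if sum(s) == 2)

# One-shot estimate: maximize the log-likelihood over a grid of beta values.
best_beta = max((b / 100 for b in range(-500, 501)),
                key=lambda b: log_likelihood(b, observed, sat))
```

With an observed assignment whose number of true variables is strictly between the minimum and maximum over satisfying assignments, the maximizer is an interior point of the grid rather than a boundary artifact; the hardness results in the paper say that no efficient procedure can produce a useful such estimate once d is large relative to k.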
