One-Shot Learning for k-SAT
Consider a $k$-SAT formula $\Phi$ where every variable appears at most $d$ times. Let $\sigma$ be a satisfying assignment, sampled proportionally to $e^{\beta N(\sigma)}$ where $N(\sigma)$ is the number of true variables and $\beta$ is a real parameter. Given $\Phi$ and $\sigma$, can we efficiently learn $\beta$?

This problem falls into a recent line of work about single-sample (``one-shot'') learning of Markov random fields. Our $k$-SAT setting was recently studied by Galanis, Kalavasis, Kandiros (SODA'24). They showed that single-sample learning is possible when roughly $d \leq 2^{k/6.45}$ and impossible when $d \geq (k+1)2^{k-1}$. In addition to the gap in~$d$, their impossibility result left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold for bounded-degree $k$-SAT formulas.

Our main contribution is to answer this question negatively. We show that one-shot learning for $k$-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees $d$ as low as $4$ when $\beta$ is sufficiently large, and bootstrap this to small values of $\beta$ when $d$ scales exponentially with $k$, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm, obtaining significantly stronger bounds on $d$ in terms of $\beta$. For the uniform case $\beta = 0$, we show that learning is possible under the condition $d \lesssim 2^{k/2}$. This is (up to constant factors) all the way to the sampling threshold -- it is known that sampling a uniformly-distributed satisfying assignment is NP-hard for $d \gtrsim 2^{k/2}$.
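The sampling model above can be made concrete with a small brute-force sketch. The following is an illustration only, not code from the paper: the toy 3-SAT formula, the function names, and the clause encoding are all invented for this example. It enumerates assignments of a bounded-degree formula and weights each satisfying assignment $\sigma$ by $e^{\beta N(\sigma)}$, where $N(\sigma)$ counts the true variables.

```python
import itertools
import math

# Toy 3-SAT instance (hypothetical, for illustration only).
# A clause is a tuple of literals: +i means x_i, -i means NOT x_i.
CLAUSES = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
NUM_VARS = 3

def satisfies(assignment, clauses):
    """assignment is a tuple of booleans; variable i lives at index i-1.
    A clause is satisfied if at least one of its literals is true."""
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def weighted_distribution(clauses, n, beta):
    """Return {assignment: probability} over satisfying assignments,
    with each assignment sigma weighted proportionally to
    exp(beta * N(sigma)), N(sigma) = number of true variables."""
    weights = {}
    for sigma in itertools.product([False, True], repeat=n):
        if satisfies(sigma, clauses):
            weights[sigma] = math.exp(beta * sum(sigma))
    z = sum(weights.values())  # partition function (normalizing constant)
    return {s: w / z for s, w in weights.items()}

# beta = 0 is the uniform case: every satisfying assignment is equally likely.
dist = weighted_distribution(CLAUSES, NUM_VARS, beta=0.0)
```

For positive $\beta$ the distribution tilts toward assignments with many true variables; the one-shot learning question is the inverse problem of recovering $\beta$ from a single draw from this distribution (together with the formula).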