One-Shot Learning for k-SAT

Consider a $k$-SAT formula $\Phi$ in which every variable appears at most $d$ times, and let $\sigma$ be a satisfying assignment of $\Phi$ sampled proportionally to $\lambda^{|\sigma|}$, where $|\sigma|$ is the number of variables set to true and $\lambda > 0$ is a real parameter. Given $\Phi$ and $\sigma$, can we learn the value of $\lambda$ efficiently?

This problem falls into a recent line of work on single-sample ("one-shot") learning of Markov random fields. The $k$-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24), who showed that single-sample learning is possible when $d$ is below a threshold exponential in $k$, and impossible when $d$ exceeds a larger exponential threshold. Crucially, their impossibility results relied on the existence of unsatisfiable instances, which, aside from the gap in $d$, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of $k$-SAT formulas of bounded degree.

Our main contribution is to answer this question negatively. We show that one-shot learning for $k$-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for small constant degrees $d$ when $\lambda$ is sufficiently large, and bootstrap this to small values of $\lambda$ when $d$ scales exponentially with $k$, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on $d$ in terms of $k$. In particular, for the uniform case $\lambda = 1$, which has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition $d \lesssim 2^{k/2}$. This is nearly optimal (up to constant factors), in the sense that sampling a uniformly distributed satisfying assignment is known to be NP-hard for $d \gtrsim 2^{k/2}$.
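To make the learning problem concrete, here is a brute-force toy sketch (not the paper's algorithm): it enumerates the satisfying assignments of a tiny hand-picked formula, forms the distribution that weights each assignment by $\lambda$ raised to its number of true variables, and then, given a single sample, picks the $\lambda$ on a grid that maximizes that sample's likelihood. The formula, the grid, and all helper names are illustrative assumptions; an efficient one-shot learner cannot enumerate assignments like this.

```python
from itertools import product

# Toy 3-SAT instance on 4 variables: each clause is a list of
# (variable index, sign) pairs, sign=True meaning the positive literal.
clauses = [
    [(0, True), (1, False), (2, True)],
    [(1, True), (2, False), (3, True)],
    [(0, False), (2, True), (3, False)],
]

def satisfies(sigma, clauses):
    """sigma is a tuple of booleans, one per variable."""
    return all(any(sigma[v] == s for v, s in cl) for cl in clauses)

def distribution(clauses, n, lam):
    """Brute-force Gibbs distribution: weight lambda^{#true}, normalized
    over the satisfying assignments only."""
    sats = [s for s in product([False, True], repeat=n) if satisfies(s, clauses)]
    Z = sum(lam ** sum(s) for s in sats)
    return {s: lam ** sum(s) / Z for s in sats}

def mle_lambda(clauses, n, sigma, grid):
    """Given ONE observed satisfying assignment sigma, return the grid point
    maximizing its likelihood -- a brute-force stand-in for one-shot learning."""
    return max(grid, key=lambda lam: distribution(clauses, n, lam)[sigma])

if __name__ == "__main__":
    mu = distribution(clauses, 4, lam=2.0)
    sigma = max(mu, key=mu.get)  # pretend this is the one observed sample
    grid = [0.5 + 0.25 * i for i in range(20)]
    print(mle_lambda(clauses, 4, sigma, grid))
```

Note that with a single sample the likelihood here is maximized at a boundary of the grid (the observed assignment has the maximum number of true variables, so larger $\lambda$ always fits it better), which hints at why estimating $\lambda$ from one sample is delicate.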
@article{galanis2025_2502.07135,
  title   = {One-Shot Learning for k-SAT},
  author  = {Andreas Galanis and Leslie Ann Goldberg and Xusheng Zhang},
  journal = {arXiv preprint arXiv:2502.07135},
  year    = {2025}
}