
One-Shot Learning for k-SAT

Abstract

Consider a $k$-SAT formula $\Phi$ where every variable appears at most $d$ times, and let $\sigma$ be a satisfying assignment of $\Phi$ sampled proportionally to $e^{\beta m(\sigma)}$, where $m(\sigma)$ is the number of variables set to true and $\beta$ is a real parameter. Given $\Phi$ and $\sigma$, can we learn the value of $\beta$ efficiently?

This problem falls into a recent line of work on single-sample ("one-shot") learning of Markov random fields. The $k$-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24), who showed that single-sample learning is possible when roughly $d \leq 2^{k/6.45}$ and impossible when $d \geq (k+1)2^{k-1}$. Crucially, their impossibility results relied on the existence of unsatisfiable instances, which, aside from the gap in $d$, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of $k$-SAT formulas of bounded degree.

Our main contribution is to answer this question negatively. We show that one-shot learning for $k$-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees $d$ as low as $k^2$ when $\beta$ is sufficiently large, and bootstrap this to small values of $\beta$ when $d$ scales exponentially with $k$, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on $d$ in terms of $\beta$. In particular, for the uniform case $\beta \rightarrow 0$, which has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition $d \lesssim 2^{k/2}$. This is nearly optimal (up to constant factors) in the sense that sampling a uniformly distributed satisfying assignment is known to be NP-hard for $d \gtrsim 2^{k/2}$.
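To make the model concrete, below is a minimal brute-force sketch in Python. It is not the paper's algorithm (which must run efficiently, without enumerating assignments): it merely samples a satisfying assignment $\sigma$ of a toy formula with probability proportional to $e^{\beta m(\sigma)}$ and then recovers $\beta$ from that single sample by grid-search maximum likelihood. The toy formula, the grid, and the helper names (satisfies, sample_assignment, mle_beta) are illustrative assumptions, not taken from the paper.

import itertools
import math
import random

# Toy 3-SAT instance (an illustrative assumption, not from the paper).
# A clause is a list of literals (variable index, is_positive).
clauses = [
    [(0, True), (1, False), (2, True)],
    [(0, False), (1, True), (3, True)],
    [(1, True), (2, False), (3, False)],
]
n = 4  # number of variables

def satisfies(a, clauses):
    # A 0/1 assignment a satisfies a clause if some literal is true under a.
    return all(any(a[v] == (1 if pos else 0) for v, pos in c) for c in clauses)

def satisfying_assignments(clauses, n):
    # Exponential-time enumeration; fine only for tiny n.
    return [a for a in itertools.product([0, 1], repeat=n) if satisfies(a, clauses)]

def sample_assignment(beta, clauses, n):
    # Draw sigma with probability proportional to exp(beta * m(sigma)),
    # where m(sigma) is the number of variables set to true.
    sats = satisfying_assignments(clauses, n)
    weights = [math.exp(beta * sum(a)) for a in sats]
    return random.choices(sats, weights=weights)[0]

def mle_beta(sigma, clauses, n, grid=None):
    # One-shot estimate: maximize the log-likelihood
    #   beta * m(sigma) - log Z(beta)
    # over a grid of candidate beta values.
    if grid is None:
        grid = [b / 10 for b in range(-50, 51)]
    sats = satisfying_assignments(clauses, n)
    m_sigma = sum(sigma)
    def log_lik(beta):
        log_z = math.log(sum(math.exp(beta * sum(a)) for a in sats))
        return beta * m_sigma - log_z
    return max(grid, key=log_lik)

sigma = sample_assignment(beta=1.0, clauses=clauses, n=n)
print("sample:", sigma, "estimated beta:", mle_beta(sigma, clauses, n))

With a single sample any estimator is necessarily noisy; the sketch only illustrates the likelihood objective. When such estimation can be carried out efficiently and consistently, as a function of $d$, $k$, and $\beta$, is precisely what the paper's positive and negative results address.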

@article{galanis2025_2502.07135,
  title={One-Shot Learning for k-SAT},
  author={Andreas Galanis and Leslie Ann Goldberg and Xusheng Zhang},
  journal={arXiv preprint arXiv:2502.07135},
  year={2025}
}