Neural Approaches to SAT Solving: Design Choices and Interpretability

David Mojžíšek
Jan Hůla
Ziwei Li
Ziyu Zhou
Mikoláš Janota
Abstract

In this contribution, we provide a comprehensive evaluation of graph neural networks applied to Boolean satisfiability problems, accompanied by an intuitive explanation of the mechanisms enabling the model to generalize to different instances. We introduce several training improvements, particularly a novel closest assignment supervision method that dynamically adapts to the model's current state, significantly enhancing performance on problems with larger solution spaces. Our experiments demonstrate the suitability of variable-clause graph representations with recurrent neural network updates, which achieve good accuracy on SAT assignment prediction while reducing computational demands. We extend the base graph neural network into a diffusion model that facilitates incremental sampling and can be effectively combined with classical techniques like unit propagation. Through analysis of embedding space patterns and optimization trajectories, we show how these networks implicitly perform a process very similar to continuous relaxations of MaxSAT, offering an interpretable view of their reasoning process. This understanding guides our design choices and explains the ability of recurrent architectures to scale effectively at inference time beyond their training distribution, which we demonstrate with test-time scaling experiments.
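The abstract notes that the diffusion-based sampler can be combined with classical unit propagation. As background, here is a minimal sketch of unit propagation over a DIMACS-style CNF (clauses as lists of nonzero integers, where `v` means variable `v` is True and `-v` means False); the function name and return convention are illustrative, not taken from the paper:

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly assign variables forced by unit clauses.

    Returns (simplified_clauses, assignment), or (None, assignment)
    if propagation derives a conflict (an empty clause).
    """
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        new_clauses = []
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True  # clause already satisfied
                        break
                    # literal falsified: drop it from the clause
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:        # every literal falsified: conflict
                return None, assignment
            if len(unassigned) == 1:  # unit clause forces an assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
            else:
                new_clauses.append(unassigned)
        clauses = new_clauses
    return clauses, assignment
```

For example, on (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3) propagation alone fixes all three variables to True; in a neural pipeline such deterministic consequences can be filled in cheaply between sampling steps.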

@article{mojžíšek2025_2504.01173,
  title={Neural Approaches to SAT Solving: Design Choices and Interpretability},
  author={David Mojžíšek and Jan Hůla and Ziwei Li and Ziyu Zhou and Mikoláš Janota},
  journal={arXiv preprint arXiv:2504.01173},
  year={2025}
}