Learning from satisfying assignments under continuous distributions

What kinds of functions are learnable from their satisfying assignments? Motivated by this simple question, we extend the framework of De, Diakonikolas, and Servedio [DDS15], which studied the learnability of probability distributions over $\{-1,1\}^n$ defined by the set of satisfying assignments to "low-complexity" Boolean functions, to Boolean-valued functions defined over continuous domains. In our learning scenario there is a known "background distribution" $\mathcal{D}$ over $\mathbb{R}^n$ (such as a known normal distribution or a known log-concave distribution) and the learner is given i.i.d. samples drawn from a target distribution $\mathcal{D}_{f^{-1}(1)}$, which is $\mathcal{D}$ restricted to the satisfying assignments of an unknown low-complexity Boolean-valued function $f$. The problem is to learn an approximation of the target distribution which has small error as measured in total variation distance. We give a range of efficient algorithms and hardness results for this problem, focusing on the case when $f$ is a low-degree polynomial threshold function (PTF). When the background distribution $\mathcal{D}$ is log-concave, we show that this learning problem is efficiently solvable for degree-1 PTFs (i.e., linear threshold functions) but not for degree-2 PTFs. In contrast, when $\mathcal{D}$ is a normal distribution, we show that this learning problem is efficiently solvable for degree-2 PTFs but not for degree-4 PTFs. Our hardness results rely on standard assumptions about secure signature schemes.
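To make the learning scenario concrete, the following sketch (not the paper's algorithm) shows how the learner's samples are generated in the simplest setting: a standard normal background distribution restricted to the satisfying assignments of a hidden degree-1 PTF (linear threshold function). The names `w`, `theta`, and `f` are hypothetical choices for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.standard_normal(n)  # hidden weight vector of the LTF (unknown to the learner)
theta = 0.0                 # hidden threshold

def f(x):
    """Degree-1 PTF (linear threshold function): f(x) = 1 iff <w, x> >= theta."""
    return x @ w >= theta

def sample_target(m):
    """Rejection-sample m i.i.d. points from N(0, I_n) conditioned on f(x) = 1.

    This realizes the target distribution: the background distribution
    restricted to the satisfying assignments of f.
    """
    out = []
    while len(out) < m:
        batch = rng.standard_normal((4 * m, n))  # draw a batch from the background
        out.extend(batch[f(batch)])              # keep only satisfying assignments
    return np.array(out[:m])

samples = sample_target(1000)
# Every sample the learner sees satisfies the unknown function f.
assert all(f(x) for x in samples)
```

The learner receives only `samples` (not `w` or `theta`) and must output a distribution that is close to the target in total variation distance; the paper's results say when this is and is not efficiently possible.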