We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$. Informally, a function $f\colon \{0,1\}^n \rightarrow \mathbb{R}$ is self-bounding if for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the $n$ marginal decreases in the value of the function at $x$. Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by Boucheron et al. (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight $\ell_1$-approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be $\epsilon$-approximated in $\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over $2^{\tilde{O}(1/\epsilon)}$ variables. Both the degree and junta-size are optimal up to logarithmic terms. Previously, the best known bounds were $O(1/\epsilon^2)$ on the degree and $2^{O(1/\epsilon^2)}$ on the number of variables (Feldman and Vondrák, 2013). These results lead to improved, and in several cases almost tight, bounds for PAC and agnostic learning of submodular, XOS, and self-bounding functions. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity of $n^{\tilde{\Theta}(1/\epsilon)}$.
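For reference, a sketch of the self-bounding condition in formal terms, following the standard convention of Boucheron et al.; the notation $x^{i \to b}$ for $x$ with coordinate $i$ set to $b$ is ours:
$$\sum_{i=1}^{n} \Bigl( f(x) - \min_{b \in \{0,1\}} f\bigl(x^{i \to b}\bigr) \Bigr) \;\le\; f(x) \qquad \text{for every } x \in \{0,1\}^n,$$
where each summand is the marginal decrease in coordinate $i$; it is non-negative and, on the hypercube, equals $\max\bigl(0,\, f(x) - f(x \oplus e_i)\bigr)$.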