We present a greedy algorithm for solving binary classification problems in situations where the dataset is either too small or not fully representative of the problem being solved, and obtaining more data is not possible. This algorithm is of particular interest when training small models that have trouble generalizing. It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data. An analysis of correctness and runtime complexity under ideal conditions, together with an extension to deep neural networks, is provided. In the former case we obtain an asymptotic bound of $O\!\left(|H|\left(T_1\,n_e + T_2\,n_t\right) + P(n_e, n_t)\right)$, where $|H|$ is the cardinality of the set of hyperparameters to be searched; $n_e$ and $n_t$ are the sizes of the evaluation and training datasets, respectively; $T_1$ and $T_2$ are the inference times for the trained model and the candidate model, respectively; and $P(n_e, n_t)$ is a polynomial in $n_e$ and $n_t$. Under these conditions, this algorithm returns a solution that is $k$ times better than simply enumerating $H$ and training with any $h \in H$. As part of our analysis of the generating function $g$ we also prove that, under certain assumptions, if an open cover of the dataset $X$ has the same homology as the manifold on which the support of the underlying probability distribution lies, then $X$ is learnable, and vice versa.
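The abstract describes the procedure only at a high level. The following minimal Python sketch shows one plausible reading of the greedy loop, interleaving data generation with hyperparameter pruning. Every name here (`greedy_search`, `train_fn`, `eval_fn`, `generate_fn`, `keep_ratio`) and the top-fraction pruning rule are illustrative assumptions, not the paper's actual method; in particular, the pre-trained reference model with loose accuracy constraints is folded into the assumed `generate_fn`.

```python
def greedy_search(hyperparams, train_fn, eval_fn, generate_fn,
                  train_data, eval_data, keep_ratio=0.5, rounds=10):
    """Greedily shrink the hyperparameter set while augmenting the data.

    Each round: generate new examples, retrain every surviving candidate,
    and prune the worst-scoring candidates. (Hypothetical sketch; the
    paper's actual pruning criterion is not given in the abstract.)
    """
    data = list(train_data)
    survivors = list(hyperparams)
    best_model, best_acc = None, 0.0
    for _ in range(rounds):
        # Augment the training data with synthetic examples from g.
        data.extend(generate_fn(data))
        # Train and score every surviving candidate hyperparameter.
        scored = []
        for h in survivors:
            model = train_fn(data, h)
            scored.append((eval_fn(model, eval_data), h, model))
        scored.sort(key=lambda s: s[0], reverse=True)
        if scored[0][0] > best_acc:
            best_acc, best_model = scored[0][0], scored[0][2]
        # Greedy pruning: keep only the top fraction of candidates.
        cutoff = max(1, int(len(scored) * keep_ratio))
        survivors = [h for _, h, _ in scored[:cutoff]]
        if len(survivors) == 1:
            break
    return best_model, best_acc
```

Under this reading, the per-round cost matches the shape of the bound above: each surviving hyperparameter incurs one training run (the polynomial term) plus inference over the evaluation and training sets, and the number of survivors is bounded by $|H|$.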