Iterative thresholding for non-linear learning in the strong $\varepsilon$-contamination model

We derive approximation bounds for learning single neuron models using thresholded gradient descent when both the labels and the covariates are possibly corrupted adversarially. We assume the data follows the model $y = \sigma(\langle w^*, x \rangle) + \xi$, where $\sigma$ is a nonlinear activation function, the noise $\xi$ is Gaussian, and the covariate vector $x$ is sampled from a sub-Gaussian distribution. We study sigmoidal, leaky-ReLU, and ReLU activation functions and derive an approximation bound in the $\ell_2$-norm, together with the corresponding sample complexity and failure probability. We also study the linear regression problem, where $\sigma(x) = x$. There we derive an approximation bound improving upon the previous bounds for the gradient-descent-based iterative thresholding algorithms of Bhatia et al. (NeurIPS 2015) and Shen and Sanghavi (ICML 2019). In a suitable parameter regime, our algorithm also improves upon the runtime complexity of the algorithm of Awasthi et al. (NeurIPS 2022).
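The abstract does not spell out the algorithm, but the following is a minimal sketch of one natural reading of thresholded gradient descent in this setting: at each iteration, the samples with the largest residuals (an $\varepsilon$-fraction, treated as possibly corrupted) are discarded before taking a gradient step on the rest. All names here (`thresholded_gd`, `eps`, `lr`, `n_iters`) and the specific trimming rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def thresholded_gd(X, y, eps, sigma=lambda z: z, dsigma=lambda z: np.ones_like(z),
                   lr=0.1, n_iters=100):
    """Sketch of thresholded (trimmed) gradient descent for y ~ sigma(X @ w).

    At each step, the eps-fraction of samples with the largest residuals
    is discarded (hard thresholding) before the gradient step.
    """
    n, d = X.shape
    keep = n - int(np.ceil(eps * n))  # number of samples retained per step
    w = np.zeros(d)
    for _ in range(n_iters):
        z = X @ w
        r = sigma(z) - y                    # residuals under the current iterate
        idx = np.argsort(np.abs(r))[:keep]  # keep the samples with smallest residuals
        # gradient of 0.5 * mean squared loss over the retained samples
        g = (X[idx] * (r[idx] * dsigma(z[idx]))[:, None]).mean(axis=0)
        w -= lr * g
    return w

# Usage on synthetic data: leaky-ReLU activation, adversarially corrupted labels.
rng = np.random.default_rng(0)
n, d, eps = 2000, 10, 0.05
leaky = lambda z: np.where(z > 0, z, 0.1 * z)
dleaky = lambda z: np.where(z > 0, 1.0, 0.1)
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = leaky(X @ w_star) + 0.1 * rng.standard_normal(n)
y[: int(eps * n)] = 100.0  # adversary corrupts an eps-fraction of the labels
w_hat = thresholded_gd(X, y, eps, sigma=leaky, dsigma=dleaky, lr=0.5, n_iters=500)
print(np.linalg.norm(w_hat - w_star))  # ell_2 parameter error
```

This trimming step is in the spirit of the iterative thresholding algorithms of Bhatia et al. and Shen and Sanghavi cited above; the paper's guarantees additionally cover corrupted covariates, which this toy example does not exercise.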