We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size n and the number of features p both increase with the ratio p/n converging to a finite constant. An estimator of the out-of-sample error of a robust M-estimator is analysed and proved to be consistent for a large family of loss functions that includes the Huber loss. As an application of this result, we propose an adaptive procedure for tuning the scale parameter of a given loss function: choosing the scale in a given interval so as to minimize the out-of-sample error estimate of the M-estimator constructed with the correspondingly scaled loss leads to the optimal out-of-sample error over that interval. The proof relies on a smoothing argument: the unregularized M-estimation objective function is perturbed, or smoothed, with a Ridge penalty that vanishes asymptotically, and we show that the unregularized M-estimator of interest inherits properties of its smoothed version.
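The adaptive tuning loop described above can be sketched as follows. This is only an illustration under stated assumptions: the Huber M-estimator is computed by direct minimization, the heavy-tailed noise is drawn from a Student-t distribution, and the paper's consistent out-of-sample error estimate is replaced by a simple validation-split proxy (a stand-in, not the estimator analysed in the paper).

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, c):
    """Huber loss with scale parameter c, applied elementwise."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def huber_m_estimator(X, y, c):
    """Unregularized M-estimator: argmin_b sum_i huber(y_i - x_i'b, c)."""
    p = X.shape[1]
    obj = lambda b: huber(y - X @ b, c).sum()
    # Gradient: -X' psi(residuals), with psi the clipped identity.
    jac = lambda b: -X.T @ np.clip(y - X @ b, -c, c)
    return minimize(obj, np.zeros(p), jac=jac, method="L-BFGS-B").x

rng = np.random.default_rng(0)
n, p = 400, 50                      # proportional regime: p/n fixed as both grow
X = rng.standard_normal((n, p))     # Gaussian design
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_t(df=2, size=n)  # heavy-tailed noise

# Tune the scale over a grid by minimizing an out-of-sample error proxy
# (here a held-out validation split; the paper instead uses a consistent
# estimate computed from the fitted residuals alone).
n_tr = 300
scales = [0.5, 1.0, 2.0, 4.0]
errs = []
for c in scales:
    b_hat = huber_m_estimator(X[:n_tr], y[:n_tr], c)
    errs.append(np.mean((y[n_tr:] - X[n_tr:] @ b_hat) ** 2))
best_scale = scales[int(np.argmin(errs))]
```

The grid search mirrors the abstract's procedure: fit one M-estimator per candidate scale and keep the scale whose estimated out-of-sample error is smallest.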