
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

International Conference on Machine Learning (ICML), 2018
Abstract

Consider the following class of learning schemes:
$$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}), \qquad (1)$$
where $\boldsymbol{x}_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j^{\text{th}}$ feature vector and response variable, respectively. Let $\ell$ and $R$ be the loss function and regularizer, $\boldsymbol{\beta}$ denote the unknown weights, and $\lambda$ be a regularization parameter. Finding the optimal choice of $\lambda$ is a challenging problem in high-dimensional regimes where both $n$ and $p$ are large. We propose two frameworks to obtain a computationally efficient approximation ALO of the leave-one-out cross validation (LOOCV) risk for nonsmooth losses and regularizers. Our two frameworks are based on the primal and dual formulations of (1). We prove the equivalence of the two approaches under smoothness conditions, and this equivalence enables us to justify the accuracy of both methods in that setting. We use our approaches to obtain a risk estimate for several standard problems, including the generalized LASSO, nuclear norm regularization, and support vector machines. We empirically demonstrate the effectiveness of our results for nondifferentiable cases.
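The paper's primal and dual ALO formulas for nonsmooth losses and regularizers are not reproduced here. As a minimal sketch of the computational idea the abstract targets, the snippet below contrasts brute-force LOOCV for scheme (1) with the classical closed-form leave-one-out shortcut in the smooth quadratic case (ridge regression), where a single fit suffices. This is a standard identity used only for illustration, not the authors' ALO method, and all function names are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Solve min_beta ||y - X beta||^2 + lam * ||beta||^2 (an instance of (1))."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def loo_risk_exact(X, y, lam):
    """Brute-force LOOCV: refit the model n times (n full solves)."""
    n = X.shape[0]
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta = ridge_fit(X[mask], y[mask], lam)
        errs[i] = (y[i] - X[i] @ beta) ** 2
    return errs.mean()

def loo_risk_shortcut(X, y, lam):
    """Closed-form LOO for ridge via leverage scores: one fit, no refitting.
    Uses the identity (y_i - x_i' beta^{(-i)}) = (y_i - x_i' beta) / (1 - H_ii),
    which is exact for quadratic loss with a quadratic regularizer."""
    n, p = X.shape
    beta = ridge_fit(X, y, lam)
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # hat matrix
    resid = y - X @ beta
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = X @ rng.standard_normal(50) + rng.standard_normal(200)
    for lam in (0.1, 1.0, 10.0):
        # The two estimates agree up to floating-point error for ridge.
        print(lam, loo_risk_exact(X, y, lam), loo_risk_shortcut(X, y, lam))
```

For nonsmooth problems such as the LASSO or SVM, no such exact closed form exists; the approximations proposed in the paper play the role of the shortcut above while requiring only a single fit at each value of $\lambda$.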
