Learning rates of $l^q$ coefficient regularization learning with Gaussian kernel

Neural Computation (Neural Comput.), 2013
Abstract

Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are in widespread use. It is known that different values of $q$ lead to estimators with different properties; for example, $l^2$ regularization yields smooth estimators, while $l^1$ regularization yields sparse estimators. How, then, do the generalization capabilities of $l^q$ regularization learning vary with $q$? In this paper, we study this problem in the framework of statistical learning theory and show that implementing $l^q$ coefficient regularization schemes in the sample-dependent hypothesis space associated with the Gaussian kernel can attain the same almost optimal learning rates for all $0<q<\infty$. That is, the upper and lower bounds of the learning rates for $l^q$ regularization learning are asymptotically identical for all $0<q<\infty$. Our finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact on the generalization capability. From this perspective, $q$ can be specified arbitrarily, or chosen according to other, non-generalization criteria such as smoothness, computational complexity, or sparsity.
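For concreteness, a sketch of the scheme under discussion, assuming the standard formulation of $l^q$ coefficient regularization over a sample-dependent hypothesis space (the notation $z=\{(x_i,y_i)\}_{i=1}^m$ for the sample, $\sigma$ for the kernel width, and $\lambda$ for the regularization parameter is introduced here for illustration and may differ from the paper's):

\[
f_z = \sum_{i=1}^{m} a_i^\ast K_\sigma(\cdot, x_i), \qquad
a^\ast = \operatorname*{arg\,min}_{a \in \mathbb{R}^m}\;
\frac{1}{m}\sum_{j=1}^{m}\Bigl(\sum_{i=1}^{m} a_i K_\sigma(x_j, x_i) - y_j\Bigr)^{2}
+ \lambda \sum_{i=1}^{m} |a_i|^{q},
\]

where $K_\sigma(x,x') = \exp\!\bigl(-\|x-x'\|^2/\sigma^2\bigr)$ is the Gaussian kernel and the minimization runs over the sample-dependent hypothesis space $H_{K,z} = \bigl\{\sum_{i=1}^{m} a_i K_\sigma(\cdot, x_i) : a \in \mathbb{R}^m\bigr\}$. The paper's claim is that the minimax learning rates of $f_z$ are asymptotically the same for every exponent $q$ in $(0,\infty)$.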
