Loss minimization and parameter estimation with heavy tails

7 July 2013
Daniel J. Hsu
Sivan Sabato
arXiv:1307.1827
Abstract

This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments. We show that the technique can be used for approximate minimization of smooth and strongly convex losses, and specifically for least squares linear regression. For instance, our $d$-dimensional estimator requires just $\tilde{O}(d\log(1/\delta))$ random samples to obtain a constant-factor approximation to the optimal least squares loss with probability $1-\delta$, without requiring the covariates or noise to be bounded or subgaussian. We provide further applications to sparse linear regression and low-rank covariance matrix estimation with similar allowances on the noise and covariate distributions. The core technique is a generalization of the median-of-means estimator to arbitrary metric spaces.
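To make the core technique concrete, here is a minimal Python sketch of the median-of-means idea and of a metric-space generalization along the lines the abstract describes: the data are split into groups, one estimate is computed per group, and the returned estimate is the one whose median distance to the other group estimates is smallest. The exact selection rule, the heavy-tailed regression demo, and all names here (`metric_median_of_means`, the Student-t data) are illustrative assumptions of this sketch, not the paper's precise procedure or guarantees.

```python
import numpy as np

def median_of_means(x, k):
    # Classical scalar median-of-means: split the sample into k groups,
    # average each group, and return the median of the group means.
    groups = np.array_split(np.asarray(x, dtype=float), k)
    return float(np.median([g.mean() for g in groups]))

def metric_median_of_means(estimates, dist):
    # Sketch of the metric-space generalization: given one estimate per
    # group, return the estimate whose median distance to the other
    # estimates is smallest (an assumed selection rule for illustration).
    k = len(estimates)
    med = [np.median([dist(estimates[i], estimates[j])
                      for j in range(k) if j != i]) for i in range(k)]
    return estimates[int(np.argmin(med))]

# Demo: least squares regression with heavy-tailed covariates and noise.
rng = np.random.default_rng(0)
n, d, k = 3000, 5, 15
X = rng.standard_t(df=3.0, size=(n, d))               # heavy-tailed covariates
y = X @ np.ones(d) + rng.standard_t(df=3.0, size=n)   # heavy-tailed noise
parts = np.array_split(np.arange(n), k)
fits = [np.linalg.lstsq(X[i], y[i], rcond=None)[0] for i in parts]
w_hat = metric_median_of_means(fits, lambda a, b: float(np.linalg.norm(a - b)))
print(w_hat)  # close to the all-ones coefficient vector despite heavy tails
```

The point of the selection step is that a single group's least squares fit can be badly corrupted by heavy-tailed noise, but with $k \sim \log(1/\delta)$ groups a majority of the fits land near the truth, so the fit with the smallest median distance to the others concentrates exponentially well.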
