

Sparse Bayes estimation in non-Gaussian models via data augmentation

28 March 2011
Nicholas G. Polson
James G. Scott
arXiv (abs) · PDF · HTML
Abstract

In this paper we provide a data-augmentation scheme that unifies many common sparse Bayes estimators into a single class. This leads to simple iterative algorithms for estimating the posterior mode under arbitrary combinations of likelihoods and priors within the class. The class itself is quite large: for example, it includes quantile regression, support vector machines, and logistic and multinomial logistic regression, along with the usual ridge regression, lasso, bridge estimators, and regression with heavy-tailed errors. To arrive at this unified framework, we represent a wide class of objective functions as variance-mean mixtures of Gaussians involving both the likelihood and penalty functions. This generalizes existing theory based solely on variance mixtures for the penalty function, and allows the theory of conditionally normal linear models to be brought to bear on a much wider class of models. We focus on two possible choices of the mixing measures: the generalized inverse-Gaussian and Polya distributions, leading to the hyperbolic and Z distributions, respectively. We exploit this conditional normality to find sparse, regularized estimates using tilted iteratively re-weighted least squares (TIRLS). Finally, we characterize the conditional moments of the latent variances for any model in our proposed class, and show the relationship between our method and two recent algorithms: LQA (local quadratic approximation) and LLA (local linear approximation).
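To make the iteratively re-weighted idea concrete, the sketch below applies an LQA-style re-weighted ridge update to the lasso, one member of the class the abstract describes. It exploits the well-known normal scale-mixture representation of the double-exponential penalty, a special case of the variance-mean mixture framework discussed here. This is a minimal illustrative sketch, not the paper's general TIRLS algorithm; the penalty weight lam, the iteration count, and the simulated data are assumptions made for the example.

import numpy as np

def lasso_map_irls(X, y, lam, n_iter=200, eps=1e-8):
    """Posterior-mode (MAP) estimate for the lasso via iteratively
    re-weighted ridge regression (an LQA-style update).

    Illustrative sketch only: the latent-variance expectation below is the
    standard result for the Laplace penalty, not the paper's general class.
    """
    p = X.shape[1]
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX + np.eye(p), Xty)  # ridge warm start
    for _ in range(n_iter):
        # E-step: conditional expectation of the inverse latent variance
        # for the Laplace penalty is proportional to lam / |beta_j|.
        w = lam / np.maximum(np.abs(beta), eps)
        # M-step: conditionally Gaussian model, so a weighted ridge solve.
        beta = np.linalg.solve(XtX + np.diag(w), Xty)
    return beta

# Hypothetical usage on simulated data with three nonzero coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.standard_normal(100)
print(np.round(lasso_map_irls(X, y, lam=5.0), 2))

Coefficients shrunk toward zero receive ever larger quadratic weights on later iterations, which is how the re-weighted Gaussian (ridge) steps produce sparse estimates; the paper's TIRLS extends this conditional-normality trick to non-Gaussian likelihoods as well as penalties.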

View on arXiv