arXiv:1806.04071 (v7, latest)

A framework for posterior consistency in model selection

11 June 2018
D. Rossell
Abstract

We develop a framework to help understand frequentist properties of Bayesian model selection, specifically its ability to select the (Kullback-Leibler) optimal model and to portray model selection uncertainty. We outline its general basis and then focus on linear regression. The contribution is not proving consistency under given prior conditions but providing finite-sample rates that describe how model selection depends on the prior and on problem characteristics such as sample size, signal-to-noise ratio, problem dimension, and true sparsity. A corollary proves a strong form of convergence for L_0 penalties and for pseudo-posterior probabilities of interest in L_0 uncertainty quantification. These results unify and extend the current Bayesian model selection literature and signal its limitations, specifically that asymptotically optimal sparse priors can significantly reduce power even for moderately large n, and that less sparse priors can improve power trade-offs in ways not adequately captured by asymptotic rates. These issues are compounded by the fact that model misspecification often causes an exponential drop in power, as we briefly study here. Our examples confirm these findings, underlining the importance of considering the characteristics of the data at hand when judging the quality of model selection procedures, rather than relying purely on asymptotics.
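To make the L_0 pseudo-posterior idea concrete, here is a minimal sketch (not the paper's exact procedure; the penalty choice and data-generating setup are illustrative assumptions): enumerate all submodels of a small linear regression, score each with a Gaussian log-likelihood minus an L_0 penalty on model size, and normalize the exponentiated scores into pseudo-posterior model probabilities.

```python
# Illustrative L0 pseudo-posterior model probabilities for linear regression.
# Assumptions: penalty = log(n) (a BIC-type rule), Gaussian errors, and a
# small enough p that all 2^p submodels can be enumerated.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, 0.5, 0.0, 0.0])   # true model uses the first two covariates
y = X @ beta_true + rng.standard_normal(n)

def log_score(subset, penalty):
    """Profile Gaussian log-likelihood at the least-squares fit, minus an
    L0 penalty proportional to the number of selected covariates."""
    k = len(subset)
    if k == 0:
        rss = float(y @ y)
    else:
        Xs = X[:, list(subset)]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ coef
        rss = float(resid @ resid)
    loglik = -0.5 * n * np.log(rss / n)        # up to an additive constant
    return loglik - 0.5 * penalty * k

# Enumerate all submodels and turn penalized scores into probabilities.
models = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
scores = np.array([log_score(m, penalty=np.log(n)) for m in models])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                           # pseudo-posterior model probabilities
best = models[int(np.argmax(probs))]
print(best, float(probs.max()))
```

Because the pseudo-posterior assigns a probability to every submodel rather than a single point estimate, the spread of `probs` across models is what quantifies selection uncertainty; making the penalty larger (a sparser prior) concentrates mass on smaller models, mirroring the power trade-off discussed in the abstract.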
