
Universal Batch Learning Under The Misspecification Setting

12 May 2024
Shlomi Vituri
Meir Feder
Abstract

In this paper we consider the problem of universal *batch* learning in a misspecification setting with log-loss. In this setting the hypothesis class is a set of models Θ. However, the data is generated by an unknown distribution that may not belong to this set but comes from a larger set of models Φ ⊃ Θ. Given a training sample, a universal learner is requested to predict a probability distribution for the next outcome, and a log-loss is incurred. The universal learner's performance is measured by the regret relative to the best hypothesis matching the data, chosen from Θ. Utilizing the minimax theorem and information-theoretic tools, we derive the optimal universal learner, a mixture over the set of data-generating distributions, and obtain a closed-form expression for the min-max regret. We show that this regret can be viewed as a constrained version of the conditional capacity between the data and the set of its generating distributions. We present tight bounds on this min-max regret, implying that the complexity of the problem is dominated by the richness of the hypothesis class Θ and not by the data-generating set Φ. We develop an extension of the Arimoto-Blahut algorithm for numerical evaluation of the regret and its capacity-achieving prior distribution. We demonstrate our results for the case where the observations come from a K-parameter multinomial distribution while the hypothesis class Θ is only a subset of this family of distributions.
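The abstract states that the min-max regret is a constrained conditional capacity and that it is computed by an extension of the Arimoto-Blahut algorithm. The paper's extension is not spelled out in the abstract, so the following is only a sketch of the *classical* Arimoto-Blahut iteration for channel capacity, the algorithm the extension builds on; the channel matrix `W` and all variable names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def arimoto_blahut(W, tol=1e-9, max_iter=10000):
    """Classical Arimoto-Blahut iteration for the capacity of a
    discrete memoryless channel.

    W        : (n_inputs, n_outputs) row-stochastic channel matrix,
               assumed strictly positive here to keep the sketch simple.
    Returns  : (capacity in nats, capacity-achieving input prior).
    """
    n_in, _ = W.shape
    p = np.full(n_in, 1.0 / n_in)          # start from the uniform prior
    for _ in range(max_iter):
        q = p @ W                          # induced output distribution
        # d[x] = D( W(.|x) || q ), the KL divergence for each input x
        d = np.array([np.sum(W[x] * np.log(W[x] / q)) for x in range(n_in)])
        # capacity is bracketed by these two bounds at every iteration
        lower, upper = p @ d, d.max()
        if upper - lower < tol:
            break
        p = p * np.exp(d)                  # multiplicative (exponentiated) update
        p /= p.sum()
    return p @ d, p
```

For a binary symmetric channel with crossover 0.1, the iteration recovers the uniform capacity-achieving prior and the capacity ln 2 + 0.9 ln 0.9 + 0.1 ln 0.1 nats. The paper's version would additionally constrain the maximizing prior to reflect that regret is measured against the smaller class Θ while data come from Φ.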
