Learning, compression, and leakage: Minimising classification error via meta-universal compression principles

F. Rosas, P. Mediano, Michael C. Gastpar
arXiv:2010.07382, 14 October 2020

Papers citing "Learning, compression, and leakage: Minimising classification error via meta-universal compression principles"

4 of 4 papers shown:

  1. Batch Universal Prediction
     Marco Bondaschi, Michael C. Gastpar
     06 Feb 2024

  2. Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
     Koby Bibas, M. Feder, Tal Hassner
     18 Oct 2021

  3. Distribution Free Uncertainty for the Minimum Norm Solution of Over-parameterized Linear Regression
     Koby Bibas, M. Feder
     14 Feb 2021

  4. Sequential prediction under log-loss and misspecification
     M. Feder, Yury Polyanskiy
     29 Jan 2021