ResearchTrend.AI
Understanding the Eluder Dimension

14 April 2021
Gen Li
Pritish Kamath
Dylan J. Foster
Nathan Srebro
Abstract

We provide new insights on the eluder dimension, a complexity measure that has been extensively used to bound the regret of algorithms for online bandits and reinforcement learning with function approximation. First, we study the relationship between the eluder dimension of a function class and a generalized notion of rank, defined for any monotone "activation" $\sigma : \mathbb{R} \to \mathbb{R}$, which corresponds to the minimal dimension required to represent the class as a generalized linear model. It is known that when $\sigma$ has derivatives bounded away from $0$, $\sigma$-rank gives rise to an upper bound on the eluder dimension for any function class; we show, however, that the eluder dimension can be exponentially smaller than $\sigma$-rank. We also show that the condition on the derivative is necessary; namely, when $\sigma$ is the $\mathsf{relu}$ activation, the eluder dimension can be exponentially larger than $\sigma$-rank. For binary-valued function classes, we obtain a characterization of the eluder dimension in terms of the star number and the threshold dimension, quantities which are relevant in active learning and online learning respectively.
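For intuition, the eluder dimension (in the standard sense of Russo and Van Roy) is the length of the longest sequence of points in which each point is independent of its predecessors: two functions in the class can agree on the entire prefix yet still disagree at the new point. The sketch below is an illustrative brute-force computation for small finite binary-valued classes (where this simplification of the general $\epsilon$-independence condition applies); it is not from the paper, and the threshold-class example is our own.

```python
def independent(x, prefix, F):
    # x is independent of the prefix if two functions in F agree on
    # every prefix point but disagree at x (binary-valued case, eps < 1).
    return any(
        all(f[p] == g[p] for p in prefix) and f[x] != g[x]
        for f in F for g in F
    )

def eluder_dim(F, domain):
    # Brute-force DFS over independent sequences; exponential time,
    # intended only for tiny classes.
    best = 0
    stack = [()]
    while stack:
        prefix = stack.pop()
        best = max(best, len(prefix))
        for x in domain:
            if independent(x, prefix, F):
                stack.append(prefix + (x,))
    return best

# Threshold functions f_t(x) = 1{x >= t} on the domain {0, 1, 2},
# encoded as tuples of values (f[x] is the value at point x).
thresholds = [(1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)]
print(eluder_dim(thresholds, range(3)))  # 3
```

On $n$ points, the class of thresholds has eluder dimension $n$, consistent with the abstract's characterization in terms of threshold dimension for binary-valued classes.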
