arXiv:0707.3087
Universal Reinforcement Learning

IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2007
20 July 2007
Vivek F. Farias, C. Moallemi, Tsachy Weissman, Benjamin Van Roy
Abstract

We consider an agent interacting with an unmodeled environment. At each time, the agent makes an observation, takes an action, and incurs a cost. Its actions can influence future observations and costs. The goal is to minimize the long-run average cost. We propose an algorithm for optimal control based on ideas from the Lempel-Ziv scheme for universal data compression and prediction. We establish that, if there exists an integer K such that the future is conditionally independent of the past given a window of K consecutive actions and observations, then the average cost converges to the optimum. Experimental results involving the game of Rock-Paper-Scissors illustrate the merits of the algorithm.
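To give a concrete feel for the Lempel-Ziv prediction idea the abstract invokes, here is a minimal sketch: an LZ78-style incremental parse tree over an opponent's move stream in Rock-Paper-Scissors, with the agent playing a best response to the predicted next move. This is an illustrative toy, not the paper's actual control algorithm (which handles general action-observation-cost loops and average-cost optimality); all names and the phrase-restart bookkeeping below are our own simplification.

```python
import random

MOVES = ["R", "P", "S"]
BEATS = {"R": "P", "P": "S", "S": "R"}  # BEATS[x] is the move that beats x


class LZPredictor:
    """LZ78-style incremental parse tree over the opponent's move stream.

    Each node counts how often each symbol followed the context given by
    the path to that node; when the context walks off the tree, a new leaf
    is created and parsing restarts at the root, mirroring the Lempel-Ziv
    phrase-parsing idea the paper builds on.
    """

    def __init__(self):
        self.root = {"count": {}, "children": {}}
        self.node = self.root  # current context node

    def predict(self):
        counts = self.node["count"]
        if not counts:
            return random.choice(MOVES)  # no data at this context yet
        return max(counts, key=counts.get)

    def update(self, symbol):
        self.node["count"][symbol] = self.node["count"].get(symbol, 0) + 1
        if symbol in self.node["children"]:
            self.node = self.node["children"][symbol]  # extend the context
        else:
            self.node["children"][symbol] = {"count": {}, "children": {}}
            self.node = self.root  # end of phrase: restart at the root


def play(rounds=3000):
    """Play against a simple periodic opponent; return the agent's win rate."""
    random.seed(0)
    predictor = LZPredictor()
    wins = 0
    cycle = ["R", "P", "S"]  # deterministic period-3 opponent
    for t in range(rounds):
        opp = cycle[t % 3]
        my = BEATS[predictor.predict()]  # best response to the prediction
        if my == BEATS[opp]:
            wins += 1
        predictor.update(opp)
    return wins / rounds
```

Against a deterministic cyclic opponent, the parsed phrases grow without bound, so an ever-larger fraction of moves is predicted from a long context and the win rate climbs well above the 1/3 achievable by random play — a toy analogue of the convergence-to-optimum guarantee stated above for environments satisfying the K-window conditional-independence property.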
