Learning to Bet for Horizon-Aware Anytime-Valid Testing

Ege Onur Taga
Samet Oymak
Shubhanshu Shekhar
Main: 7 pages · Appendix: 11 pages · Bibliography: 3 pages · 11 figures · 1 table
Abstract

We develop horizon-aware anytime-valid tests and confidence sequences for bounded means under a strict deadline $N$. Using the betting/e-process framework, we cast horizon-aware betting as a finite-horizon optimal control problem with state space $(t, \log W_t)$, where $t$ is the time and $W_t$ is the test martingale value. We first show that in certain interior regions of the state space, policies that deviate significantly from Kelly betting are provably suboptimal, while Kelly betting reaches the threshold with high probability. We then identify sufficient conditions showing that outside this region, betting more aggressively than Kelly can be better when the bettor is behind schedule, and betting less aggressively can be better when the bettor is ahead. Taken together, these results suggest a simple phase diagram in the $(t, \log W_t)$ plane, delineating regions where Kelly, fractional Kelly, and aggressive betting may each be preferable. Guided by this phase diagram, we introduce a deep reinforcement learning approach based on a universal Deep Q-Network (DQN) agent that learns a single policy from synthetic experience and maps simple statistics of past observations to bets across horizons and null values. In limited-horizon experiments, the learned DQN policy yields state-of-the-art results.
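To make the betting/e-process setup concrete, the following is a minimal sketch (not the paper's DQN policy) of testing a bounded mean by betting: the wealth process $W_t$ is a test martingale under the null, and by Ville's inequality rejecting when $W_t \geq 1/\alpha$ controls the type-I error at level $\alpha$. The Kelly-style bet-sizing rule and the clipping constants here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def betting_test(xs, m0, alpha=0.05):
    """Test H0: the mean of observations in [0, 1] equals m0, by betting.

    Wealth update: W_t = W_{t-1} * (1 + lam_t * (x_t - m0)), where lam_t is
    predictable (depends only on x_1..x_{t-1}), so W_t is a test martingale
    under H0. Rejects at the first t with W_t >= 1/alpha (Ville's inequality).
    Returns the rejection time, or None if the horizon is exhausted.
    """
    wealth = 1.0
    mean_est, n = 0.5, 1  # running estimate of the true mean (prior guess 0.5)
    for t, x in enumerate(xs, start=1):
        # Kelly-style bet: proportional to the estimated edge, clipped so the
        # wealth factor 1 + lam*(x - m0) stays positive for all x in [0, 1].
        lam = (mean_est - m0) / (m0 * (1 - m0) + 1e-12)
        lam = np.clip(lam, -0.5 / (1 - m0 + 1e-12), 0.5 / (m0 + 1e-12))
        wealth *= 1.0 + lam * (x - m0)
        if wealth >= 1.0 / alpha:
            return t  # enough evidence against H0: reject
        mean_est = (mean_est * n + x) / (n + 1)
        n += 1
    return None

rng = np.random.default_rng(0)
# True mean ~0.8 vs null m0 = 0.5: the test should reject fairly quickly.
print(betting_test(rng.uniform(0.6, 1.0, size=500), m0=0.5))
```

The horizon-aware question the paper studies is precisely how to choose `lam` when only a fixed budget of $N$ observations remains, rather than using the same Kelly-style rule at every state $(t, \log W_t)$.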
