
Implicit Updates for Average-Reward Temporal Difference Learning

Main: 47 pages, 9 figures
Bibliography: 1 page
Appendix: 1 page
Abstract

Temporal difference (TD) learning is a cornerstone of reinforcement learning. In the average-reward setting, standard TD(λ) is highly sensitive to the choice of step-size and thus requires careful tuning to maintain numerical stability. We introduce average-reward implicit TD(λ), which employs an implicit fixed-point update to provide data-adaptive stabilization while preserving the per-iteration computational complexity of standard average-reward TD(λ). In contrast to prior finite-time analyses of average-reward TD(λ), which impose restrictive step-size conditions, we establish finite-time error bounds for the implicit variant under substantially weaker step-size requirements. Empirically, average-reward implicit TD(λ) operates reliably over a much broader range of step-sizes and exhibits markedly improved numerical stability. This enables more efficient policy evaluation and policy learning, highlighting its effectiveness as a robust alternative to standard average-reward TD(λ).
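
The abstract does not reproduce the update rule, so the sketch below is only illustrative of how an implicit fixed-point step can be combined with average-reward TD(λ) under linear function approximation; it is not the paper's exact algorithm. The function name, arguments (alpha, beta, lam), and data layout are assumptions. The key idea shown is that evaluating the TD error at the updated weights shrinks the effective step-size by a data-dependent factor 1/(1 + α·φᵀe), at the same per-step cost as ordinary TD(λ).

```python
# Hedged sketch: average-reward TD(lambda) with an implicit (fixed-point) step,
# for linear value-function approximation. Not the paper's verbatim algorithm;
# the interface and hyperparameter names are illustrative assumptions based on
# standard implicit TD(lambda).

import numpy as np


def implicit_avg_reward_td_lambda(features, rewards, lam=0.8, alpha=0.1, beta=0.01):
    """One pass of (assumed) implicit average-reward TD(lambda).

    features : array of shape (T+1, d), feature vectors phi(s_0), ..., phi(s_T)
    rewards  : array of shape (T,), rewards r_1, ..., r_T
    Returns the weight vector theta and the average-reward estimate r_bar.
    """
    d = features.shape[1]
    theta = np.zeros(d)   # value-function weights
    e = np.zeros(d)       # eligibility trace
    r_bar = 0.0           # running estimate of the average reward

    for t in range(len(rewards)):
        phi, phi_next = features[t], features[t + 1]

        # Accumulating trace (no discount factor in the average-reward setting).
        e = lam * e + phi

        # Standard average-reward TD error at the current weights.
        delta = rewards[t] - r_bar + phi_next @ theta - phi @ theta

        # Implicit step: solving the fixed-point update in closed form
        # (a Sherman-Morrison-type identity) rescales the step-size by
        # 1 / (1 + alpha * phi^T e), which stabilizes large steps.
        denom = 1.0 + alpha * (phi @ e)
        theta = theta + (alpha / denom) * delta * e

        # Average-reward estimate updated with the ordinary TD error.
        r_bar = r_bar + beta * delta

    return theta, r_bar
```

The denominator 1 + α·φᵀe is what provides the data-adaptive stabilization mentioned in the abstract: when the trace and feature vectors are large, the effective step shrinks automatically, whereas standard TD(λ) would take the full (potentially destabilizing) step.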
