
Prediction with a Short Memory

Abstract

We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by $I$, then a simple Markov model over the most recent $I/\epsilon$ observations obtains expected KL error $\epsilon$---and hence $\ell_1$ error $\sqrt{\epsilon}$---with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with $n$ hidden states, $I$ is bounded by $\log n$, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length $O(\log n/\epsilon)$ windows of observations achieves this error, provided the length of the sequence is $d^{\Omega(\log n/\epsilon)}$, where $d$ is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with $n$ hidden states, a window length of $\log n/\epsilon$ is information-theoretically necessary to achieve expected $\ell_1$ error $\sqrt{\epsilon}$. Second, the $d^{\Theta(\log n/\epsilon)}$ samples required to estimate the Markov model for an observation alphabet of size $d$ are necessary for any computationally tractable learning algorithm, assuming the hardness of strongly refuting a certain class of CSPs.
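As an illustration of the window-based predictor described above, here is a minimal sketch (not the paper's pseudocode): it tabulates empirical next-symbol frequencies conditioned on the most recent `window` observations, where `window` plays the role of $O(\log n/\epsilon)$. The function names, the uniform fallback for unseen contexts, and the toy sequence are illustrative assumptions.

```python
from collections import Counter, defaultdict

def fit_window_model(seq, window):
    """Count empirical frequencies of (length-`window` context, next symbol) pairs."""
    context_counts = defaultdict(Counter)
    for t in range(window, len(seq)):
        context = tuple(seq[t - window:t])
        context_counts[context][seq[t]] += 1
    return context_counts

def predict_next(context_counts, recent, alphabet):
    """Distribution over the next symbol given the most recent `window` observations."""
    counts = context_counts.get(tuple(recent))
    if not counts:
        # Unseen context: fall back to a uniform guess over the alphabet (an
        # arbitrary choice for this sketch, not specified by the abstract).
        return {a: 1.0 / len(alphabet) for a in alphabet}
    total = sum(counts.values())
    return {a: counts[a] / total for a in alphabet}

# Toy usage: on a periodic sequence the predictor quickly concentrates on the
# correct next symbol once the window captures the period.
seq = "abababababab"
model = fit_window_model(seq, window=2)
print(predict_next(model, seq[-2:], alphabet=set(seq)))
```

The theorem's guarantee is about this kind of empirical window model: once the sequence is long enough ($d^{\Omega(\log n/\epsilon)}$ observations), the conditional frequencies are accurate enough that the predictor's average KL error relative to the optimal predictor is at most $\epsilon$.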
