This work applies modern AI tools (transformers) to solving one of the oldest statistical problems: the Poisson means problem in the empirical Bayes (Poisson-EB) setting. In Poisson-EB, a high-dimensional mean vector $\theta$ (with iid coordinates sampled from an unknown prior $\pi$) is to be estimated on the basis of $X \sim \mathrm{Poi}(\theta)$. A transformer model is pre-trained on a set of synthetically generated pairs $(X, \theta)$ and learns to do in-context learning (ICL) by adapting to the unknown $\pi$. Theoretically, we show that a sufficiently wide transformer can achieve vanishing regret with respect to an oracle estimator that knows $\pi$, as the dimension grows to infinity. Practically, we discover that even very small models (about 100k parameters) are able to outperform the best classical algorithm (non-parametric maximum likelihood, or NPMLE) in both runtime and validation loss, which we compute on out-of-distribution synthetic data as well as real-world datasets (NHL hockey, MLB baseball, BookCorpusOpen). Finally, using linear probes, we find that the transformer's EB estimator appears to work internally in a way that differs from both the NPMLE and Robbins' estimators.
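To make the Poisson-EB setting and the oracle-regret comparison concrete, here is a minimal Python sketch (not the paper's code): it draws synthetic $(X, \theta)$ pairs from an illustrative Gamma prior standing in for the unknown $\pi$, and compares Robbins' classical estimator against the oracle Bayes estimator that knows the prior. All parameter values and the choice of prior are assumptions for illustration only.

```python
# Minimal sketch of the Poisson-EB setting (illustrative only; not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data: theta_i ~ pi (unknown to the estimator), X_i ~ Poi(theta_i).
#     Here pi is taken to be Gamma(shape=2, scale=1.5) purely for illustration.
n = 10_000
theta = rng.gamma(shape=2.0, scale=1.5, size=n)
X = rng.poisson(theta)

# --- Robbins' estimator: theta_hat(x) = (x + 1) * N(x + 1) / N(x),
#     where N(x) counts how many observations equal x.
counts = np.bincount(X, minlength=X.max() + 2)

def robbins(x):
    return (x + 1) * counts[x + 1] / np.maximum(counts[x], 1)

# --- Oracle Bayes estimator E[theta | X = x], computable here only because the prior is known:
#     for a Gamma(a, scale=b) prior with Poisson likelihood, the posterior mean is (x + a) * b / (b + 1).
def oracle(x, a=2.0, b=1.5):
    return (x + a) * b / (b + 1)

mse_robbins = np.mean((robbins(X) - theta) ** 2)
mse_oracle = np.mean((oracle(X) - theta) ** 2)
print(f"Robbins MSE: {mse_robbins:.3f}, oracle Bayes MSE: {mse_oracle:.3f}")
# The regret of an estimator is its excess loss over the oracle; the paper's transformer is
# pre-trained on many such synthetic draws so that, in context, its regret vanishes as n grows.
```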
@article{teh2025_2502.09844,
  title={Solving Empirical Bayes via Transformers},
  author={Anzo Teh and Mark Jabbour and Yury Polyanskiy},
  journal={arXiv preprint arXiv:2502.09844},
  year={2025}
}