
Long-Tail Crisis in Nearest Neighbor Language Models

Abstract

The k-nearest-neighbor language model (kNN-LM), a retrieval-augmented language model, improves the perplexity of given text by directly accessing a large datastore built from any text data at inference time. A widely held hypothesis for the success of kNN-LM is that its explicit memory, i.e., the datastore, enhances predictions for long-tail phenomena. However, prior work has primarily shown its ability to retrieve long-tail contexts, leaving the model's performance in estimating the probabilities of long-tail target tokens during inference underexplored. In this paper, we investigate the behavior of kNN-LM on low-frequency tokens, examining prediction probability, retrieval accuracy, the token distribution in the datastore, and the approximation error of product quantization. Our experimental results reveal that kNN-LM does not improve prediction performance for low-frequency tokens; instead, it mainly benefits high-frequency tokens, regardless of the long-tail contexts in the datastore.
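For context, the kNN-LM studied here follows the standard formulation of Khandelwal et al. (2020): the base LM's next-token distribution is interpolated with a distribution induced by the k nearest neighbors retrieved from the datastore, p(y|x) = λ p_kNN(y|x) + (1−λ) p_LM(y|x). Below is a minimal NumPy sketch of that interpolation; the function and argument names (knn_lm_probs, datastore_keys, lam, etc.) are illustrative, not from the paper, and exact search stands in for the product-quantized FAISS index that practical datastores use (whose approximation error is one of the factors the paper examines).

```python
import numpy as np

def knn_lm_probs(query, lm_probs, datastore_keys, datastore_values,
                 k=8, lam=0.25):
    """Sketch of kNN-LM interpolation: p = lam * p_kNN + (1 - lam) * p_LM.

    query            -- context representation from the LM, shape (d,)
    lm_probs         -- base LM next-token distribution, shape (V,)
    datastore_keys   -- stored context vectors, shape (N, d)
    datastore_values -- next-token id for each stored context, shape (N,)
    """
    # L2 distances from the query to every stored key (exact search here;
    # real datastores use an approximate, product-quantized index).
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest keys

    # Softmax over negative distances: closer neighbors get larger weight.
    # Shifting by the minimum distance keeps the exponentials stable.
    w = np.exp(-(dists[nn] - dists[nn].min()))
    w /= w.sum()

    # Scatter neighbor weights onto their target tokens to form p_kNN.
    p_knn = np.zeros_like(lm_probs)
    np.add.at(p_knn, datastore_values[nn], w)

    return lam * p_knn + (1.0 - lam) * lm_probs
```

The paper's question, in these terms, is whether the p_kNN component actually lifts the probability of low-frequency target tokens, or whether its gains concentrate on tokens that are already frequent.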

@article{nishida2025_2503.22426,
  title={Long-Tail Crisis in Nearest Neighbor Language Models},
  author={Yuto Nishida and Makoto Morishita and Hiroyuki Deguchi and Hidetaka Kamigaito and Taro Watanabe},
  journal={arXiv preprint arXiv:2503.22426},
  year={2025}
}