Efficient k-Nearest-Neighbor Machine Translation with Dynamic Retrieval

To achieve non-parametric NMT domain adaptation, k-Nearest-Neighbor Machine Translation (kNN-MT) constructs an external datastore of domain-specific translation knowledge, from which it derives a kNN distribution that is interpolated with the prediction distribution of the NMT model via a linear interpolation coefficient λ. Despite its success, kNN retrieval at each timestep incurs substantial time overhead. To address this issue, dominant studies resort to kNN-MT with adaptive retrieval (kNN-MT-AR), which dynamically estimates λ and skips kNN retrieval if λ is less than a fixed threshold. Unfortunately, kNN-MT-AR does not yield satisfactory results. In this paper, we first conduct a preliminary study that reveals two key limitations of kNN-MT-AR: 1) the optimization gap leads to inaccurate estimation of λ for determining when to skip kNN retrieval, and 2) a fixed threshold fails to accommodate the dynamic demand for kNN retrieval at different timesteps. To mitigate these limitations, we propose kNN-MT with dynamic retrieval (kNN-MT-DR), which significantly extends vanilla kNN-MT in two aspects. First, we equip kNN-MT with an MLP-based classifier that determines whether to skip kNN retrieval at each timestep. In particular, we explore several carefully designed scalar features to fully exploit the potential of the classifier. Second, we propose a timestep-aware threshold adjustment method that dynamically generates the threshold, further improving the efficiency of our model. Experimental results on widely used datasets demonstrate the effectiveness and generality of our model. Our code is available at https://github.com/DeepLearnXMU/knn-mt-dr.
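Since the abstract only sketches the mechanism, here is a minimal illustrative sketch of one decoding timestep combining the kNN-MT interpolation with dynamic retrieval skipping. The names (`SkipClassifier`, `decode_step`), the choice of input features, and the linear threshold schedule are assumptions for illustration, not the paper's actual design; the real scalar features and threshold adjustment method are described in the full paper.

```python
import torch
import torch.nn as nn


class SkipClassifier(nn.Module):
    """Lightweight MLP that predicts whether kNN retrieval is needed.

    The input is a small vector of scalar features; the abstract's
    "carefully-designed scalar features" are not specified here, so
    num_features is a placeholder.
    """

    def __init__(self, num_features: int = 4, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Probability that retrieval should be performed at this timestep.
        return torch.sigmoid(self.net(features)).squeeze(-1)


def decode_step(p_nmt, retrieve_knn_distribution, classifier, features,
                t, max_len, lam=0.5, base_threshold=0.5):
    """One decoding timestep with dynamic retrieval.

    p_nmt: NMT prediction distribution over the vocabulary (tensor).
    retrieve_knn_distribution: callable performing the expensive
        datastore lookup, returning the kNN distribution.
    t, max_len: current timestep and maximum decoding length, used by
        the (illustrative) timestep-aware threshold schedule below.
    """
    # Timestep-aware threshold: a simple linear schedule is assumed
    # here; the paper's actual adjustment method may differ.
    threshold = base_threshold * (1.0 - t / max_len)

    if classifier(features) < threshold:
        # Classifier deems retrieval unnecessary: skip the datastore
        # lookup and fall back to the pure NMT distribution.
        return p_nmt

    p_knn = retrieve_knn_distribution()
    # Standard kNN-MT linear interpolation with coefficient lambda.
    return lam * p_knn + (1.0 - lam) * p_nmt
```

The efficiency gain comes from the early return: whenever the classifier's score falls below the (decaying) threshold, the costly nearest-neighbor search over the datastore is avoided entirely for that timestep.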