Foundation Model's Embedded Representations May Detect Distribution Shift

Abstract

Sampling biases can cause distribution shifts between train and test datasets for supervised learning tasks, obscuring our ability to understand the generalization capacity of a model. This is especially important given the wide adoption of pre-trained foundation models -- whose behavior remains poorly understood -- for transfer learning (TL) tasks. We present a case study of TL on the Sentiment140 dataset and show that many pre-trained foundation models encode representations of Sentiment140's manually curated test set M that differ from those of its automatically labeled training set P, confirming that a distribution shift has occurred. We argue that training on P and measuring performance on M is therefore a biased measure of generalization. Experiments on pre-trained GPT-2 show that the features learnable from P do not improve (and in fact hamper) performance on M. Linear probes on pre-trained GPT-2's representations are robust and may even outperform overall fine-tuning, implying that discerning distribution shift in train/test splits is fundamentally important for model interpretation.
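The sketch below illustrates one way the abstract's core idea could be operationalized; it is not the paper's exact procedure. It mean-pools pre-trained GPT-2 hidden states into per-text embeddings for the two Sentiment140 splits and trains a simple domain classifier to tell them apart: above-chance accuracy would indicate that the splits occupy different regions of the representation space, i.e. a detectable distribution shift. The placeholder texts and pooling choice are assumptions for illustration.

```python
# A minimal sketch (not the paper's exact method): detect a train/test
# distribution shift by checking whether a linear classifier can separate
# GPT-2 embeddings of the training split (P) from those of the test split (M).
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2Model.from_pretrained("gpt2").eval()

@torch.no_grad()
def embed(texts, batch_size=32):
    """Mean-pool GPT-2's last hidden states into one vector per text."""
    feats = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], return_tensors="pt",
                        padding=True, truncation=True, max_length=64)
        hidden = model(**enc).last_hidden_state       # (B, T, 768)
        mask = enc["attention_mask"].unsqueeze(-1)    # (B, T, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1) # masked mean pooling
        feats.append(pooled.cpu().numpy())
    return np.concatenate(feats)

# Placeholders: in practice load Sentiment140's automatically labeled
# training tweets (P) and manually curated test tweets (M).
p_texts = ["example automatically labeled tweet ..."] * 8
m_texts = ["example manually curated tweet ..."] * 8

X = np.concatenate([embed(p_texts), embed(m_texts)])
y = np.concatenate([np.zeros(len(p_texts)), np.ones(len(m_texts))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Domain-classifier accuracy:", clf.score(X_te, y_te))  # >> 0.5 suggests shift
```

The same pooled features could also serve as inputs to a linear probe for the sentiment labels themselves (fitting a logistic regression on frozen embeddings instead of fine-tuning the whole model), which is the kind of probe the abstract compares against overall fine-tuning.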
