
Task-Adaptive Pretrained Language Models via Clustered-Importance Sampling

International Conference on Learning Representations (ICLR), 2024
Simin Fan
Pierre Ablin
Main: 10 pages · Appendix: 5 pages · Bibliography: 8 pages · 14 figures · 18 tables
Abstract

Specialist language models (LMs) focus on a specific task or domain, on which they often outperform generalist LMs of the same size. However, the specialist data needed to pretrain such models is available only in limited amounts for most tasks. In this work, we instead build specialist models from large generalist training sets. We propose a novel method, ClusteRed Importance SamPling (CRISP): CRISP clusters the generalist dataset and samples from these clusters based on their frequencies in the smaller specialist dataset. The method is scalable, suitable for both pretraining and continued pretraining, and works well in multi-task settings. CRISP performs favorably compared to other methods that adjust the training distribution of the generalist data using guidance from the limited domain-specific data. Our findings demonstrate improvements across different domains in terms of language-modeling perplexity and accuracy on multiple-choice question tasks. We also present ablation studies examining the impact of dataset sizes, clustering configurations, and model sizes.
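The core sampling idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation: cluster assignments here are random stand-ins for what would come from clustering text embeddings (e.g., with k-means), and all names and sizes are hypothetical. The sketch shows how per-example weights derived from specialist cluster frequencies reshape sampling over the generalist corpus.

```python
import random
from collections import Counter

random.seed(0)

N_CLUSTERS = 4  # hypothetical number of clusters

# Hypothetical cluster assignments per document; in CRISP these would
# come from clustering embeddings of the generalist and specialist corpora.
generalist_clusters = [random.randrange(N_CLUSTERS) for _ in range(10_000)]
specialist_clusters = [random.randrange(N_CLUSTERS) for _ in range(200)]

# Target distribution: cluster frequencies in the small specialist set.
spec_counts = Counter(specialist_clusters)
spec_freq = {c: spec_counts[c] / len(specialist_clusters)
             for c in range(N_CLUSTERS)}

# Weight each generalist example by its cluster's specialist frequency,
# normalized by the cluster's generalist count, so that sampled cluster
# proportions match the specialist distribution.
gen_counts = Counter(generalist_clusters)
weights = [spec_freq[c] / gen_counts[c] for c in generalist_clusters]

# Draw a training sample from the generalist corpus under this distribution.
batch = random.choices(range(len(generalist_clusters)), weights=weights, k=1_000)
batch_freq = Counter(generalist_clusters[i] for i in batch)
```

Under this scheme the expected cluster proportions of the sampled batch equal the specialist frequencies, while every drawn document still comes from the (plentiful) generalist corpus.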
