Provable In-context Learning for Mixture of Linear Regressions using Transformers
We theoretically investigate the in-context learning capabilities of transformers for learning mixtures of linear regression models. For the case of two mixture components, we demonstrate the existence of transformers that can achieve an accuracy, relative to the oracle predictor, of order $\tilde{\mathcal{O}}((d/n)^{1/4})$ in the low signal-to-noise ratio (SNR) regime and $\tilde{\mathcal{O}}(\sqrt{d/n})$ in the high SNR regime, where $n$ is the length of the prompt and $d$ is the dimension of the problem. Additionally, we derive in-context excess risk bounds of order $\tilde{\mathcal{O}}(L/\sqrt{B})$, where $B$ denotes the number of (training) prompts and $L$ represents the number of attention layers; the order of $L$ depends on whether the SNR is low or high. In the high SNR regime, we extend the results to $K$-component mixture models for finite $K$. Extensive simulations also highlight the advantages of transformers for this task, outperforming baselines such as the Expectation-Maximization algorithm.
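To make the setup concrete, here is a minimal sketch of the in-context data model and an oracle-style reference prediction. It assumes, for illustration only, that every example in a prompt shares one hidden component, that mixing weights are equal, and that the oracle predictor is the posterior-weighted prediction computed with knowledge of the true regression vectors; none of these specific choices, names, or parameter values are taken from the paper itself.

```python
# Sketch: prompts from a two-component mixture of linear regressions,
# plus a posterior-weighted "oracle" prediction that knows the true betas.
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 8, 64, 1.0              # dimension, prompt length, noise level (illustrative)
betas = rng.normal(size=(2, d))       # true regression vectors beta_1, beta_2

def sample_prompt():
    """Draw one prompt: all n examples share a hidden component k."""
    k = rng.integers(2)
    X = rng.normal(size=(n, d))
    y = X @ betas[k] + sigma * rng.normal(size=n)
    x_query = rng.normal(size=d)
    return X, y, x_query, k

def oracle_predict(X, y, x_query):
    """Posterior-weighted prediction using the true betas (oracle reference)."""
    # Gaussian log-likelihood of the prompt under each component (up to constants)
    loglik = np.array([-0.5 * np.sum((y - X @ b) ** 2) / sigma**2 for b in betas])
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    return post @ (betas @ x_query)

X, y, xq, k = sample_prompt()
print("true component:        ", k)
print("oracle prediction:     ", oracle_predict(X, y, xq))
print("true mean response:    ", betas[k] @ xq)
```

A transformer trained in-context would receive only the prompt $(X, y, x_{\text{query}})$, without access to the true betas; the accuracy rates above measure how closely such a learned predictor can track this oracle reference as $n$ and $d$ vary.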