On Understanding Attention-Based In-Context Learning for Categorical Data

Abstract

In-context learning based on attention models is examined for data with categorical outcomes, with inference in such models viewed from the perspective of functional gradient descent (GD). We develop a network composed of attention blocks, each employing a self-attention layer followed by a cross-attention layer, with associated skip connections. This model can exactly perform multi-step functional GD for in-context inference with categorical observations. We provide a theoretical analysis of this setup, generalizing many assumptions made in prior work along this line, including the class of attention mechanisms to which the analysis applies. We demonstrate the framework empirically on synthetic data, image classification, and language generation.
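
As a rough illustration only, and not the authors' exact construction, the sketch below shows what one such block (self-attention over the context, then cross-attention from the query, each with a skip connection) might look like in PyTorch. It uses standard softmax multi-head attention, whereas the paper analyzes a broader class of attention mechanisms; the class name AttentionBlock, the dimensions, and the readout are illustrative assumptions.

import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    # One block: self-attention over the in-context examples, then
    # cross-attention from the query token, each with a skip connection.
    # (Hypothetical sketch; the paper's blocks need not use softmax attention.)
    def __init__(self, dim: int, num_heads: int = 1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, context: torch.Tensor, query: torch.Tensor):
        # Self-attention among the embedded (x, y) example tokens.
        ctx_update, _ = self.self_attn(context, context, context)
        context = context + ctx_update          # skip connection
        # Cross-attention: the query attends to the updated context,
        # analogous to one functional-GD step on the prediction.
        q_update, _ = self.cross_attn(query, context, context)
        query = query + q_update                # skip connection
        return context, query

# Usage: stacking L blocks corresponds to L GD steps.
dim, n_ctx = 16, 8
blocks = nn.ModuleList(AttentionBlock(dim) for _ in range(3))
context = torch.randn(1, n_ctx, dim)   # embedded (x, one-hot y) pairs
query = torch.randn(1, 1, dim)         # embedded test input
for blk in blocks:
    context, query = blk(context, query)
# A readout layer would map the final query state to category logits.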

@article{wang2025_2405.17248,
  title={On Understanding Attention-Based In-Context Learning for Categorical Data},
  author={Aaron T. Wang and William Convertino and Xiang Cheng and Ricardo Henao and Lawrence Carin},
  journal={arXiv preprint arXiv:2405.17248},
  year={2025}
}