We explore the capability of transformers to address endogeneity in in-context linear regression. Our main finding is that transformers inherently possess a mechanism to handle endogeneity effectively using instrumental variables (IV). First, we demonstrate that the transformer architecture can emulate a gradient-based bi-level optimization procedure that converges to the widely used two-stage least squares (2SLS) solution at an exponential rate. Next, we propose an in-context pretraining scheme and provide theoretical guarantees showing that the global minimizer of the pretraining loss achieves a small excess loss. Finally, our extensive experiments validate these theoretical findings, showing that the trained transformer provides more robust and reliable in-context predictions and coefficient estimates than the ordinary least squares (OLS) method in the presence of endogeneity.
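For reference, the following is a minimal NumPy sketch of the classical 2SLS estimator that the abstract says the transformer converges to, on synthetic data with an endogenous regressor; the data-generating process and all variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Instrument z, unobserved confounder u, regressor x, outcome y.
# x is endogenous because the confounder u enters both x and y.
z = rng.normal(size=(n, 1))
u = rng.normal(size=(n, 1))
x = 2.0 * z + u + 0.1 * rng.normal(size=(n, 1))
beta_true = 1.5
y = beta_true * x + u + 0.1 * rng.normal(size=(n, 1))

# OLS is biased here: Cov(x, u) != 0.
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0].item()

# 2SLS: stage 1 projects x onto the instrument z; stage 2 regresses
# y on the fitted values x_hat, which are uncorrelated with u.
x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]
beta_2sls = np.linalg.lstsq(x_hat, y, rcond=None)[0].item()
```

With this setup, `beta_2sls` lands near the true coefficient 1.5, while `beta_ols` is pulled away from it by the confounding, which is exactly the failure mode of OLS under endogeneity that the paper's experiments compare against.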
@article{liang2025_2410.01265,
  title={Transformers Handle Endogeneity in In-Context Linear Regression},
  author={Haodong Liang and Krishnakumar Balasubramanian and Lifeng Lai},
  journal={arXiv preprint arXiv:2410.01265},
  year={2025}
}