ResearchTrend.AI


Learning Treatment Representations for Downstream Instrumental Variable Regression

2 June 2025
Shiangyi Lin
Hui Lan
Vasilis Syrgkanis
Main: 11 pages; Bibliography: 3 pages; Appendix: 24 pages; 28 figures; 6 tables
Abstract

Traditional instrumental variable (IV) estimators face a fundamental constraint: they can only accommodate as many endogenous treatment variables as available instruments. This limitation becomes particularly challenging in settings where the treatment is presented in a high-dimensional and unstructured manner (e.g. descriptions of patient treatment pathways in a hospital). In such settings, researchers typically resort to applying unsupervised dimension reduction techniques to learn a low-dimensional treatment representation prior to implementing IV regression analysis. We show that such methods can suffer from substantial omitted variable bias due to implicit regularization in the representation learning step. We propose a novel approach to construct treatment representations by explicitly incorporating instrumental variables during the representation learning process. Our approach provides a framework for handling high-dimensional endogenous variables with limited instruments. We demonstrate both theoretically and empirically that fitting IV models on these instrument-informed representations ensures identification of directions that optimize outcome prediction. Our experiments show that our proposed methodology improves upon the conventional two-stage approaches that perform dimension reduction without incorporating instrument information.
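The contrast the abstract draws — unsupervised dimension reduction on the treatment versus a representation that uses the instruments — can be illustrated with a toy simulation. This is a hedged sketch, not the authors' estimator: here the "instrument-informed" representation is simply the top principal directions of the first-stage fitted values, and the names (`two_sls`, `R_pca`, `R_iv`) and simulation design are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_t, d_z, k = 5000, 20, 2, 2  # samples, treatment dim, instrument dim, representation dim

# Simulate: instruments Z, unobserved confounder U, high-dimensional treatment T, outcome Y.
Z = rng.normal(size=(n, d_z))
U = rng.normal(size=(n, 1))
A = rng.normal(size=(d_z, d_t))                # instrument -> treatment loadings
T = Z @ A + U + 0.5 * rng.normal(size=(n, d_t))
beta = np.zeros((d_t, 1))
beta[:3] = 1.0                                  # only the first 3 treatment coordinates matter
Y = T @ beta + 2.0 * U + rng.normal(size=(n, 1))

def two_sls(Z, X, Y):
    """Standard 2SLS: project X on Z, then regress Y on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, Y, rcond=None)[0]

T_c = T - T.mean(axis=0)

# (a) Conventional pipeline: unsupervised PCA on T, then 2SLS on the k-dim scores.
_, _, Vt = np.linalg.svd(T_c, full_matrices=False)
R_pca = T_c @ Vt[:k].T

# (b) Instrument-informed: PCA on the first-stage fitted values Z @ lstsq(Z, T),
# so the retained directions carry instrument-driven variation by construction.
That = Z @ np.linalg.lstsq(Z, T_c, rcond=None)[0]
_, _, Vt2 = np.linalg.svd(That, full_matrices=False)
R_iv = T_c @ Vt2[:k].T

theta_pca = two_sls(Z, R_pca, Y)
theta_iv = two_sls(Z, R_iv, Y)
```

In (a), the leading principal directions of `T` can be dominated by confounder-driven variance that the instruments have little power over, which is the weak-first-stage / omitted-variable failure mode the abstract describes; in (b), the representation is built from exactly the variation the instruments explain.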

@article{lin2025_2506.02200,
  title={Learning Treatment Representations for Downstream Instrumental Variable Regression},
  author={Shiangyi Lin and Hui Lan and Vasilis Syrgkanis},
  journal={arXiv preprint arXiv:2506.02200},
  year={2025}
}