Estimating Potential Outcome Distributions with Collaborating Causal Networks

4 October 2021
Tianhui Zhou
William E Carson IV
David Carlson
    CML
Abstract

Traditional causal inference approaches leverage observational study data to estimate the expected difference in potential outcomes between treatment options, known as the Conditional Average Treatment Effect (CATE). However, CATE compares only the first moments of the outcome distributions, and as such may be insufficient to reflect the full picture of treatment effects. As an alternative, estimating the full potential outcome distributions could provide greater insight. However, existing methods for estimating potential outcome distributions often impose restrictive or simplistic assumptions about these distributions. Here, we propose Collaborating Causal Networks (CCN), a novel methodology that goes beyond the estimation of CATE alone by learning the full potential outcome distributions. Estimation of outcome distributions via the CCN framework does not require restrictive assumptions about the underlying data-generating process. Additionally, CCN facilitates estimation of the utility of each possible treatment and permits individual-specific variation through utility functions. CCN not only extends outcome estimation beyond the traditional risk difference, but also enables a more comprehensive decision-making process through the definition of flexible comparisons. Under assumptions commonly made in the causal literature, we show that CCN learns distributions that asymptotically capture the true potential outcome distributions. Furthermore, we propose an adjustment approach that is empirically effective in alleviating sample imbalance between treatment groups in observational data. Finally, we evaluate the performance of CCN in multiple synthetic and semi-synthetic experiments. We demonstrate that CCN learns improved distribution estimates compared to existing Bayesian and deep generative methods, as well as improved decisions with respect to a variety of utility functions.
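
The abstract describes two ingredients: per-treatment estimates of the full potential outcome distributions, and utility-based treatment decisions computed from those distributions. The toy sketch below is not the authors' CCN architecture; it only illustrates that general workflow on synthetic data, using quantile regression as a hypothetical stand-in for the distribution estimator. The data-generating process and all names (make_net, pinball_loss, expected_utility) are assumptions for illustration.

```python
# Hypothetical sketch (not the CCN method itself): estimate a per-treatment
# outcome distribution with quantile regression, then pick the treatment
# that maximizes expected utility under that estimated distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy observational data: binary treatment, confounded assignment,
# and a treatment that changes both the mean and the spread of the outcome.
n, d = 2000, 5
X = torch.randn(n, d)
T = torch.bernoulli(torch.sigmoid(X[:, 0]))      # assignment depends on X
noise = torch.randn(n) * (0.5 + T)               # treatment widens the outcome spread
Y = X[:, 1] + 2.0 * T + noise                    # treatment also shifts the mean

quantiles = torch.linspace(0.05, 0.95, 19)       # evenly spaced quantile levels

def make_net(d_in, n_q):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_q))

def pinball_loss(pred_q, y, taus):
    # Standard quantile (pinball) loss; pred_q: (batch, n_q), y: (batch,).
    diff = y.unsqueeze(1) - pred_q
    return torch.mean(torch.maximum(taus * diff, (taus - 1) * diff))

# One distribution estimator per treatment arm.
nets = {t: make_net(d, len(quantiles)) for t in (0, 1)}
for t, net in nets.items():
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    mask = (T == t)
    for _ in range(300):
        opt.zero_grad()
        loss = pinball_loss(net(X[mask]), Y[mask], quantiles)
        loss.backward()
        opt.step()

def expected_utility(net, x, utility=lambda y: -torch.exp(-y)):
    # Averaging a utility over evenly spaced predicted quantiles approximates
    # E[u(Y)] under the estimated conditional outcome distribution.
    q = net(x)                     # (batch, n_q) approximate outcome quantiles
    return utility(q).mean(dim=1)

with torch.no_grad():
    x_new = torch.randn(3, d)
    eu = torch.stack([expected_utility(nets[t], x_new) for t in (0, 1)], dim=1)
    print("chosen treatment per individual:", eu.argmax(dim=1).tolist())
```

Because the utility is applied to the whole estimated distribution rather than to a point estimate, swapping in a different (e.g. risk-averse) utility changes the decision without retraining the estimators, which is the kind of flexible comparison the abstract refers to.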

View on arXiv: https://arxiv.org/abs/2110.01664