On the Identifiability of Causal Abstractions

13 March 2025
Xiusi Li
Sékou-Oumar Kaba
Siamak Ravanbakhsh
Abstract

Causal representation learning (CRL) enhances the robustness and generalizability of machine learning models by learning the structural causal models underlying data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, provided that every latent variable can be intervened on individually. However, this is a highly restrictive assumption in many systems. In this work, we instead assume interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that calculates the degree to which a causal model can be identified, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.
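
For concreteness, below is a minimal sketch (not the authors' code) of the contrastive-pair setting the abstract describes: a toy linear SCM over latent variables, where both samples in a pair share exogenous noise and the second results from a random hard intervention on an arbitrary subset of latents, after which the latents are mapped to observables. The dimensions, the mixing map G, and the linearity of the SCM are all illustrative assumptions, not the paper's construction.

import numpy as np

rng = np.random.default_rng(0)

n_latents, n_obs = 3, 5
# Strictly lower-triangular weights encode a DAG: z_j depends only on z_{<j}.
# (Illustrative assumption: a linear SCM with a fixed causal order.)
W = np.tril(rng.normal(size=(n_latents, n_latents)), k=-1)
G = rng.normal(size=(n_obs, n_latents))  # linear map from latents to observables (assumed)

def contrastive_pair():
    """One (x, x_tilde) pair: shared exogenous noise, then a random hard
    intervention on an arbitrary subset of latent variables."""
    eps = rng.normal(size=n_latents)       # exogenous noise, shared by both samples
    subset = rng.random(n_latents) < 0.5   # intervened subset: arbitrary, possibly multi-variable
    z = np.zeros(n_latents)
    for j in range(n_latents):             # ancestral sampling, pre-intervention
        z[j] = W[j] @ z + eps[j]
    z_tilde = np.zeros(n_latents)
    for j in range(n_latents):             # post-intervention: intervened nodes overwritten,
        z_tilde[j] = rng.normal() if subset[j] else W[j] @ z_tilde + eps[j]  # others reuse eps
    return G @ z, G @ z_tilde

x, x_tilde = contrastive_pair()

In this sketch the learner would see only the observable pair (x, x_tilde), not the latents, the mixing map, or which subset was intervened on, mirroring the unknown-intervention setting above.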

@article{li2025_2503.10834,
  title={On the Identifiability of Causal Abstractions},
  author={Xiusi Li and Sékou-Oumar Kaba and Siamak Ravanbakhsh},
  journal={arXiv preprint arXiv:2503.10834},
  year={2025}
}