On the Role of Priors in Bayesian Causal Learning

IEEE Transactions on Artificial Intelligence (IEEE TAI), 2025
Main: 6 pages
3 figures
Bibliography: 1 page
Abstract

In this work, we investigate causal learning of independent causal mechanisms from a Bayesian perspective. Confirming previous claims from the literature, we show in a didactically accessible manner that unlabeled data (i.e., cause realizations) do not improve the estimation of the parameters defining the mechanism. Furthermore, we highlight the importance of choosing appropriate priors for the cause and mechanism parameters, respectively. Specifically, we show that a factorized prior results in a factorized posterior, which resonates with Janzing and Schölkopf's definition of independent causal mechanisms via the Kolmogorov complexity of the involved distributions, and with the concept of parameter independence of Heckerman et al.
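The factorization claim in the abstract can be illustrated with a minimal conjugate sketch (all names and distributional choices here are hypothetical, not taken from the paper): a binary cause X ~ Bernoulli(theta) and a mechanism Y | X = x ~ Bernoulli(psi[x]), with independent Beta(1, 1) priors on theta, psi[0], and psi[1]. Because the prior factorizes across cause and mechanism parameters, the posterior factorizes too, and cause-only observations update only the posterior over theta, leaving the mechanism posterior untouched.

```python
def beta_posteriors(labeled, unlabeled_causes=()):
    """Conjugate Beta updates; returns (theta, psi) hyperparameters.

    theta holds [alpha, beta] for P(X = 1); psi[x] holds [alpha, beta]
    for P(Y = 1 | X = x). Uniform Beta(1, 1) priors throughout.
    """
    theta = [1, 1]
    psi = {0: [1, 1], 1: [1, 1]}
    for x, y in labeled:
        theta[0 if x == 1 else 1] += 1        # cause count
        psi[x][0 if y == 1 else 1] += 1       # mechanism count, given x
    for x in unlabeled_causes:                # cause realizations only
        theta[0 if x == 1 else 1] += 1        # no effect on psi
    return theta, psi

labeled = [(1, 1), (0, 0), (1, 0), (1, 1)]
extra_causes = [1, 0, 1, 1, 0]

theta_a, psi_a = beta_posteriors(labeled)
theta_b, psi_b = beta_posteriors(labeled, extra_causes)

print(psi_a == psi_b)      # True: mechanism posterior is unchanged
print(theta_a != theta_b)  # True: cause posterior did move
```

The unlabeled cause realizations enter only the theta counts, so the Beta posterior over the mechanism parameters is identical with or without them, mirroring the abstract's claim that unlabeled data do not improve the mechanism estimate.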
