Overview and practical recommendations on using Shapley Values for identifying predictive biomarkers via CATE modeling

In recent years, two parallel research trends have emerged in machine learning, yet their intersection remains largely unexplored. On one hand, there has been a significant increase in literature focused on Individual Treatment Effect (ITE) modeling, particularly targeting the Conditional Average Treatment Effect (CATE) using meta-learner techniques. These approaches often aim to identify causal effects from observational data. On the other hand, the field of Explainable Machine Learning (XML) has gained traction, with various approaches developed to explain complex models and make their predictions more interpretable. A prominent technique in this area is Shapley Additive Explanations (SHAP), which has become mainstream in data science for analyzing supervised learning models. However, the application of SHAP to identifying predictive biomarkers through CATE models, a crucial task in pharmaceutical precision medicine, has received limited attention. We address inherent challenges associated with the SHAP concept in multi-stage CATE strategies and introduce a surrogate estimation approach that is agnostic to the choice of CATE strategy, effectively reducing computational burdens in high-dimensional data. Using this approach, we conduct simulation benchmarking to evaluate the ability to accurately identify biomarkers using SHAP values derived from various CATE meta-learners and Causal Forest.
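To make the workflow concrete, below is a minimal sketch (not the authors' implementation) of the general idea: estimate the CATE with a simple T-learner, fit a single surrogate model to the resulting CATE predictions, and run SHAP on that surrogate to rank candidate biomarkers. The simulated data, variable names, and choice of gradient boosting as base learner are illustrative assumptions.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Simulated data: X are candidate biomarkers, T is a binary treatment,
# y is the outcome; X[:, 0] acts as a predictive biomarker (effect modifier).
n, p = 2000, 10
X = rng.normal(size=(n, p))
T = rng.integers(0, 2, size=n)
y = X[:, 1] + T * (0.5 + X[:, 0]) + rng.normal(scale=0.5, size=n)

# T-learner: fit separate outcome models per arm; CATE = difference of predictions.
m1 = GradientBoostingRegressor().fit(X[T == 1], y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)

# Surrogate step: regress the CATE estimates on X with a single model,
# so SHAP is computed once regardless of how many base learners the
# chosen CATE strategy uses internally.
surrogate = GradientBoostingRegressor().fit(X, cate_hat)
explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X)

# Rank biomarkers by mean |SHAP|; predictive biomarkers should rank highly.
ranking = np.argsort(-np.abs(shap_values).mean(axis=0))
print("Biomarker ranking by mean |SHAP|:", ranking)

The same surrogate step can be reused with any CATE estimator (S-, T-, X-, R-learners or Causal Forest) by swapping out how cate_hat is produced, which is the sense in which the approach is agnostic to the CATE strategy.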
@article{svensson2025_2505.01145,
  title   = {Overview and practical recommendations on using Shapley Values for identifying predictive biomarkers via CATE modeling},
  author  = {David Svensson and Erik Hermansson and Nikolaos Nikolaou and Konstantinos Sechidis and Ilya Lipkovich},
  journal = {arXiv preprint arXiv:2505.01145},
  year    = {2025}
}