The Estimation of Continual Causal Effect for Dataset Shifting Streams

29 April 2025
Baining Chen
Yiming Zhang
Yuqiao Han
Ruyue Zhang
Ruihuan Du
Zhishuo Zhou
Zhengdan Zhu
Xun Liu
Jiecheng Guo
Abstract

Causal effect estimation has been widely used in marketing optimization. The framework of an uplift model followed by a constrained optimization algorithm is popular in practice. To enhance performance in the online environment, the framework needs to be improved to address the complexities caused by temporal dataset shift. This paper focuses on capturing the dataset shift from user behavior and domain distribution changing over time. We propose an Incremental Causal Effect with Proxy Knowledge Distillation (ICE-PKD) framework to tackle this challenge. The ICE-PKD framework includes two components: (i) a multi-treatment uplift network that eliminates confounding bias using counterfactual regression; (ii) an incremental training strategy that adapts to the temporal dataset shift by updating with the latest data and protects generalization via replay-based knowledge distillation. We also revisit the uplift modeling metrics and introduce a novel metric for more precise online evaluation in multiple treatment scenarios. Extensive experiments on both simulated and online datasets show that the proposed framework achieves better performance. The ICE-PKD framework has been deployed in the marketing system of Huaxiaozhu, a ride-hailing platform in China.
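The incremental training strategy described above pairs updates on the latest data with replay-based knowledge distillation, so the model tracks the shifting stream without forgetting what the previous model learned. A minimal sketch of that idea (class and parameter names are illustrative, not the paper's; a toy 1-D model stands in for the uplift network):

```python
import random

class ReplayDistillationTrainer:
    """Hypothetical sketch: incremental updates on new data, plus a
    distillation term that keeps the model close to its previous
    snapshot ("teacher") on replayed past inputs."""

    def __init__(self, lr=0.1, distill_weight=0.5, buffer_size=100):
        self.w = 0.0                      # toy 1-D model parameter
        self.lr = lr
        self.distill_weight = distill_weight
        self.buffer = []                  # replay buffer of past (x, y) pairs
        self.buffer_size = buffer_size

    def predict(self, x):
        return self.w * x

    def update(self, batch):
        # Snapshot the current model as the "teacher" before updating.
        teacher_w = self.w
        replay = random.sample(self.buffer, min(len(self.buffer), len(batch)))
        for x, y in batch:
            # Supervised gradient step on the latest data (squared loss).
            grad = 2 * (self.predict(x) - y) * x
            self.w -= self.lr * grad
        for x, _ in replay:
            # Distillation step: match the teacher's prediction on
            # replayed inputs to protect generalization.
            grad = 2 * (self.predict(x) - teacher_w * x) * x
            self.w -= self.lr * self.distill_weight * grad
        # Keep the most recent examples for future replay.
        self.buffer.extend(batch)
        self.buffer = self.buffer[-self.buffer_size:]
```

The design choice worth noting: the teacher is re-snapshotted at every update, so the distillation term only resists *abrupt* drift within an update, while the supervised term still lets the model follow genuine dataset shift over many updates.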

@article{chen2025_2504.20471,
  title={The Estimation of Continual Causal Effect for Dataset Shifting Streams},
  author={Baining Chen and Yiming Zhang and Yuqiao Han and Ruyue Zhang and Ruihuan Du and Zhishuo Zhou and Zhengdan Zhu and Xun Liu and Jiecheng Guo},
  journal={arXiv preprint arXiv:2504.20471},
  year={2025}
}