Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets

Abstract

This study investigates the self-rationalization framework built on a cooperative game, in which a generator first extracts the most informative segment from the raw input and a subsequent predictor uses the selected subset as its input. The generator and predictor are trained jointly to maximize prediction accuracy. In this paper, we first uncover a potential caveat: such a cooperative game can unintentionally introduce a sampling bias during rationale extraction. Specifically, the generator may inadvertently induce a spurious correlation between a selected rationale candidate and the label, even when the two are semantically unrelated in the original dataset. We then trace the origins of this bias through both theoretical analysis and empirical evidence. Our findings suggest a way to inspect these correlations through attacks, based on which we further introduce a method that prevents the predictor from learning them. Through experiments on six text classification datasets and two graph classification datasets using three network architectures (GRUs, BERT, and GCN), we show that our method not only significantly outperforms recent rationalization methods, but also achieves comparable or even better results than a representative LLM (llama3.1-8b-instruct).
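
To make the cooperative game concrete, below is a minimal PyTorch sketch of the setup the abstract describes: a GRU-based generator samples a binary token mask (the rationale) via Gumbel-softmax, a predictor classifies from only the masked tokens, and both are trained jointly on prediction accuracy plus a sparsity constraint. Every name, hyperparameter, and loss term here is an illustrative assumption, not the paper's implementation; the complement-mask probe at the end is likewise only a hypothetical stand-in for the paper's attack-based inspection of spurious correlations.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Scores each token and samples a binary rationale mask."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.score = nn.Linear(2 * hidden_dim, 2)  # per-token skip/select logits

    def forward(self, tokens):
        h, _ = self.gru(self.embed(tokens))
        logits = self.score(h)                      # (batch, seq, 2)
        # Gumbel-softmax keeps the discrete token selection differentiable.
        mask = F.gumbel_softmax(logits, tau=1.0, hard=True)[..., 1]
        return mask                                 # (batch, seq), values in {0, 1}

class Predictor(nn.Module):
    """Classifies using only the tokens the generator kept."""
    def __init__(self, vocab_size, num_classes, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens, mask):
        x = self.embed(tokens) * mask.unsqueeze(-1)  # zero out unselected tokens
        _, h = self.gru(x)
        return self.out(h.squeeze(0))

def cooperative_step(gen, pred, opt, tokens, labels, sparsity=0.2):
    """One joint update: both players are trained to maximize accuracy,
    with a simple regularizer capping how much text is selected."""
    mask = gen(tokens)
    logits = pred(tokens, mask)
    loss = F.cross_entropy(logits, labels) + torch.abs(mask.mean() - sparsity)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def inspect_by_attack(gen, pred, tokens, labels):
    """Hypothetical probe: feed the predictor the COMPLEMENT of the rationale.
    High accuracy on the leftover, supposedly uninformative text suggests the
    predictor has learned a generator-induced spurious correlation."""
    mask = gen(tokens)
    logits = pred(tokens, 1.0 - mask)
    return (logits.argmax(-1) == labels).float().mean().item()

A training loop under these assumptions would construct both modules, put their parameters in a single optimizer (e.g., torch.optim.Adam(list(gen.parameters()) + list(pred.parameters()))), and call cooperative_step once per batch, periodically running inspect_by_attack to watch for the degenerate correlations the paper warns about.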

@article{liu2025_2505.02118,
  title={Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets},
  author={Wei Liu and Zhongyu Niu and Lang Gao and Zhiying Deng and Jun Wang and Haozhao Wang and Ruixuan Li},
  journal={arXiv preprint arXiv:2505.02118},
  year={2025}
}