CoPT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning

Pre-trained language models are widely used in many important real-world applications. However, recent studies show that these models can encode social biases from large pre-training corpora and even amplify them in downstream applications. To address this challenge, we propose CoPT, an efficient and effective debias-while-prompt-tuning method that mitigates bias via counterfactual contrastive prompt tuning on downstream tasks. Experiments on three extrinsic bias benchmarks demonstrate that CoPT mitigates bias during prompt tuning and adapts readily to existing upstream debiased language models. These findings confirm the strength of CoPT and point to promising avenues for further bias mitigation on downstream tasks.
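The abstract does not detail the objective, but the name "counterfactual contrastive prompt tuning" suggests a contrastive loss that pulls a sentence's representation toward that of its counterfactual (e.g. a version with demographic terms swapped) and away from other sentences in the batch. The sketch below is a hypothetical illustration of such an InfoNCE-style loss over sentence embeddings, not the paper's actual implementation; the function name, temperature value, and cosine-similarity choice are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def counterfactual_contrastive_loss(orig, counterfactual, temperature=0.1):
    """Hypothetical InfoNCE-style objective: each original embedding
    orig[i] should be closer to its counterfactual pair counterfactual[i]
    (the demographic-term-swapped sentence) than to any other
    counterfactual[j] in the batch. Lower loss = less reliance on the
    swapped demographic terms."""
    n = len(orig)
    total = 0.0
    for i in range(n):
        logits = [cosine(orig[i], counterfactual[j]) / temperature
                  for j in range(n)]
        # numerically stable log-softmax at the positive index i
        m = max(logits)
        log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_sum)
    return total / n
```

Under this sketch, embeddings that are invariant to the counterfactual swap (each `orig[i]` aligned with `counterfactual[i]`) yield a near-zero loss, while embeddings that shift under the swap are penalized.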