A binary decision task, such as a yes-no question or answer verification, reflects a significant real-world scenario, for example when users seek confirmation that their decisions on specific issues are correct. In this work, we observe that language models exhibit a negative bias in the binary decisions of complex reasoning tasks. Based on our observations and a rationale grounded in attention-based model dynamics, we propose the negative attention score (NAS) to formulate negative bias systematically and quantitatively. Using NAS, we identify attention heads that attend to the negative tokens provided in the instructions as answer candidates of binary decisions, regardless of the question in the prompt, and validate their association with the negative bias. Additionally, we propose negative attention score alignment (NASA), a parameter-efficient fine-tuning technique that targets the extracted negatively biased attention heads. Experimental results across diverse reasoning domains and a large model search space demonstrate that NASA significantly reduces the gap between precision and recall caused by negative bias while preserving the models' generalization abilities.
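To make the core measurement concrete, below is a minimal sketch, assuming a Hugging Face transformers causal LM, of how one might score each attention head by how strongly the final prompt position attends to the negative answer candidate ("No") versus the positive one ("Yes") given in the instruction. The model name, prompt template, and the simple difference score are illustrative assumptions for this sketch, not the paper's exact NAS definition.

```python
# Minimal sketch (not the authors' implementation): per-head attention bias
# toward the negative answer candidate in a binary-decision prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = 'Answer with "Yes" or "No". Question: Is 17 a prime number? Answer:'
inputs = tokenizer(prompt, return_tensors="pt")
ids = inputs["input_ids"][0].tolist()

# Locate the answer-candidate tokens inside the instruction. This assumes each
# candidate is a single token without a leading space; other tokenizers may
# produce leading-space variants and need a different lookup.
yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
no_id = tokenizer.encode("No", add_special_tokens=False)[0]
yes_pos = ids.index(yes_id)
no_pos = ids.index(no_id)

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a tuple over layers; each entry is (batch, heads, seq, seq).
# Score each head by attention from the last position (where the decision token
# will be generated) to the "No" token minus attention to the "Yes" token.
scores = {}
for layer, attn in enumerate(out.attentions):
    last_row = attn[0, :, -1, :]               # (heads, seq)
    bias = last_row[:, no_pos] - last_row[:, yes_pos]
    for head in range(bias.shape[0]):
        scores[(layer, head)] = bias[head].item()

# Heads with the largest positive scores attend disproportionately to the
# negative candidate; these are the kind of heads NAS is designed to surface.
for (layer, head), s in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"layer {layer} head {head}: bias toward 'No' = {s:+.4f}")
```

Averaging such a score over many prompts whose questions vary would indicate heads that favor the negative candidate regardless of question content, which is the behavior the abstract attributes to negatively biased heads.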
@article{yu2025_2408.00137,
  title={Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment},
  author={Sangwon Yu and Jongyoon Song and Bongkyu Hwang and Hoyoung Kang and Sooah Cho and Junhwa Choi and Seongho Joe and Taehee Lee and Youngjune L. Gwon and Sungroh Yoon},
  journal={arXiv preprint arXiv:2408.00137},
  year={2025}
}