
Perturbation Effects on Accuracy and Fairness among Similar Individuals

Main: 8 pages · Appendix: 1 page · Bibliography: 2 pages · 2 figures · 9 tables
Abstract

Deep neural networks (DNNs) are vulnerable to adversarial perturbations that degrade both predictive accuracy and individual fairness, posing critical risks in high-stakes online decision-making. The relationship between these two dimensions of robustness remains poorly understood. To bridge this gap, we introduce robust individual fairness (RIF), which requires that similar individuals receive predictions consistent with the same ground truth even under adversarial manipulation. To evaluate and expose violations of RIF, we propose RIFair, an attack framework that applies identical perturbations to similar individuals to induce accuracy or fairness failures. We further introduce the perturbation impact index (PII) and perturbation impact direction (PID) to quantify and explain why identical perturbations produce unequal effects on individuals who should behave similarly. Experiments across diverse model architectures and real-world web datasets reveal that existing robustness metrics capture distinct and often incompatible failure modes in accuracy and fairness. We find that many online applicants are simultaneously vulnerable to multiple types of adversarial failures, and that inaccurate or unfair outcomes arise because similar individuals share the same PID but have sharply different PIIs, leading to divergent prediction-change trajectories in which some cross decision boundaries earlier. Finally, we demonstrate that adversarial examples generated by RIFair can strategically manipulate test-set accuracy or fairness by replacing only a small subset of items, creating misleading impressions of model performance. These findings expose fundamental limitations in current robustness evaluations and highlight the need to jointly assess accuracy and fairness under adversarial perturbations in high-stakes online decision-making.
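The core idea, one identical perturbation affecting two similar individuals in the same direction (a shared PID) but with different magnitudes (different PIIs), can be sketched on a toy model. This is an illustrative analog, not the paper's implementation: the linear model, the inputs, and the `impact` measure below are hypothetical stand-ins for PII/PID computed on a real DNN.

```python
import numpy as np

def predict(w, x):
    """Logistic score of a toy linear model (stand-in for a DNN)."""
    return 1.0 / (1.0 + np.exp(-w @ x))

w = np.array([2.0, -1.0, 0.5])     # toy model weights (assumed for illustration)
x_a = np.array([0.9, 0.4, 0.1])    # two similar individuals
x_b = np.array([0.8, 0.5, 0.1])

# One identical, FGSM-style perturbation applied to BOTH individuals.
delta = 0.2 * np.sign(w)

def impact(x):
    """Signed change in score under the shared perturbation delta."""
    return predict(w, x + delta) - predict(w, x)

ia, ib = impact(x_a), impact(x_b)
# Same sign: the perturbation pushes both scores the same way
# (analogous to a shared PID) ...
same_direction = np.sign(ia) == np.sign(ib)
# ... but with different magnitudes (different PIIs), so one individual
# can cross the 0.5 decision boundary before the other.
magnitude_gap = abs(ia - ib)
```

Under this sketch, `same_direction` is true while `magnitude_gap` is nonzero, mirroring the paper's finding that a common perturbation direction combined with unequal per-individual impact produces divergent prediction-change trajectories.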
