
Projected Boosting with Fairness Constraints: Quantifying the Cost of Fair Training Distributions

Amir Asiaee
Kaveh Aryan
Main: 7 pages · 3 figures · 2 tables · Bibliography: 2 pages · Appendix: 2 pages
Abstract

Boosting algorithms enjoy strong theoretical guarantees: as long as the weak learners maintain a positive edge, AdaBoost decreases the exponential loss geometrically. We study how to incorporate group-fairness constraints into boosting while preserving analyzable training dynamics. Our approach, FairBoost, projects the ensemble-induced exponential-weights distribution onto a convex set of distributions satisfying the fairness constraints (as a reweighting surrogate), then trains weak learners on this fair distribution. The key theoretical insight is that projecting the training distribution reduces the effective edge of the weak learners by a quantity controlled by the KL divergence of the projection. We prove an exponential-loss bound in which the convergence rate depends on the weak learner's edge minus a "fairness cost" term $\delta_t = \sqrt{\mathrm{KL}(w^t \| q^t)/2}$. This directly quantifies the accuracy-fairness tradeoff in boosting dynamics. Experiments on standard benchmarks validate the theoretical predictions and demonstrate competitive fairness-accuracy tradeoffs with stable training curves.
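To make the project-then-boost loop concrete, below is a minimal sketch of the idea described in the abstract. The abstract does not specify the constraint set or the projection algorithm, so several choices here are assumptions: equal total weight per sensitive group is used as an illustrative fairness constraint (one common reweighting surrogate), decision stumps serve as weak learners, and the helper names `project_group_mass` and `fairboost` are hypothetical. The per-round $\delta_t$ is computed only to monitor the fairness cost appearing in the bound.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def project_group_mass(w, groups, targets, eps=1e-12):
    """KL-project weights w onto {q : sum_{i in g} q_i = targets[g] for all g}.

    For group-mass constraints the projection has a closed form:
    rescale the weights within each group to hit that group's target mass.
    """
    q = w.copy()
    for g, mass in targets.items():
        idx = groups == g
        q[idx] = w[idx] * (mass / max(w[idx].sum(), eps))
    return q


def fairboost(X, y, groups, n_rounds=50, eps=1e-12):
    """Boosting on KL-projected (fair) distributions; y, groups are arrays, y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # ensemble-induced exponential weights
    labels = np.unique(groups)
    targets = {g: 1.0 / len(labels) for g in labels}  # assumption: equal group mass
    ensemble, deltas = [], []
    for _ in range(n_rounds):
        q = project_group_mass(w, groups, targets)    # fair training distribution
        # fairness cost of this round's projection: sqrt(KL(w || q) / 2)
        kl = np.sum(w * np.log(np.maximum(w, eps) / np.maximum(q, eps)))
        deltas.append(np.sqrt(0.5 * max(kl, 0.0)))
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=q)
        pred = stump.predict(X)
        err = float(np.sum(q[pred != y]))             # weighted error under q
        if err >= 0.5:                                # no positive edge on the fair distribution
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, eps))
        ensemble.append((alpha, stump))
        w *= np.exp(-alpha * y * pred)                # usual exponential-weights update
        w /= w.sum()
    return ensemble, deltas


def predict(ensemble, X):
    scores = sum(alpha * h.predict(X) for alpha, h in ensemble)
    return np.sign(scores)
```

One design note on the sketch: for pure group-mass constraints the KL projection reduces to per-group rescaling, so no iterative solver is needed; richer convex constraint sets (for example, constraints coupling groups and labels) would generally require iterative scaling in place of `project_group_mass`.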
