A minimax framework for quantifying risk-fairness trade-off in regression
We propose a theoretical framework for the problem of learning a real-valued function that meets fairness requirements. Leveraging the theory of optimal transport, we introduce a notion of α-relative (fairness) improvement of the regression function. With α = 0 we recover an optimal prediction under the Demographic Parity constraint, and with α = 1 we recover the regression function itself. For α ∈ (0, 1) the proposed framework allows us to interpolate continuously between the two. Within this framework we precisely quantify the cost in risk induced by the introduction of the α-relative improvement constraint. We put forward a statistical minimax setup and derive a general problem-dependent lower bound on the risk of any estimator satisfying the α-relative improvement constraint. We illustrate our framework on a model of linear regression with Gaussian design and systematic group-dependent bias. Finally, we perform a simulation study of the latter setup.
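As a rough illustration of the ideas in the abstract, the sketch below builds a toy version of the paper's example (linear regression with Gaussian design and a group-dependent bias), computes a Demographic-Parity-fair prediction via the one-dimensional Wasserstein barycenter of the group-wise prediction distributions, and interpolates between the fair prediction and the regression function. The √α interpolation weights and the use of the squared Wasserstein-2 distance as the unfairness measure are assumptions mirroring standard optimal-transport constructions, not a reproduction of the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the illustrative model: Gaussian design with a
# systematic group-dependent bias in the regression function.
n = 20_000
s = rng.integers(0, 2, size=n)        # binary sensitive attribute
x = rng.normal(size=n)                # Gaussian design
f = x + 2.0 * s                       # regression function, biased by group

def w2_sq(a, b):
    """Squared Wasserstein-2 distance between 1-D samples, via quantiles."""
    q = np.linspace(0.0, 1.0, 512)
    return float(np.mean((np.quantile(a, q) - np.quantile(b, q)) ** 2))

def dp_optimal(f, s):
    """Map each group's predictions to the Wasserstein barycenter of the
    group-wise prediction distributions (rank-preserving quantile
    averaging, weighted by group frequencies)."""
    g = np.empty_like(f)
    groups = {k: f[s == k] for k in (0, 1)}
    weights = {k: float(np.mean(s == k)) for k in (0, 1)}
    for k in (0, 1):
        mask = s == k
        ranks = np.argsort(np.argsort(f[mask])) / (mask.sum() - 1)
        g[mask] = sum(weights[j] * np.quantile(groups[j], ranks) for j in (0, 1))
    return g

g_star = dp_optimal(f, s)

# Interpolate between the fair prediction (alpha = 0) and the regression
# function (alpha = 1); sqrt(alpha) weights mimic a W2 geodesic, so the
# unfairness of the interpolant scales like alpha times that of f.
for alpha in (0.0, 0.25, 1.0):
    f_alpha = np.sqrt(alpha) * f + (1.0 - np.sqrt(alpha)) * g_star
    unfairness = w2_sq(f_alpha[s == 0], f_alpha[s == 1])
    excess_risk = float(np.mean((f_alpha - f) ** 2))
    print(f"alpha={alpha:.2f}  unfairness~{unfairness:.3f}  excess_risk~{excess_risk:.3f}")
```

At α = 1 the two group distributions differ by the full bias (unfairness ≈ 4 here), at α = 0 they coincide up to sampling noise, and intermediate α trades unfairness against excess risk, which is the trade-off the framework quantifies.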
View on arXiv