PAC-Bayesian Generalization Guarantees for Fairness on Stochastic and Deterministic Classifiers

Julien Bastian
Benjamin Leblanc
Pascal Germain
Amaury Habrard
Christine Largeron
Guillaume Metzler
Emilie Morvant
Paul Viallard
Main: 9 pages
3 figures
Bibliography: 4 pages
5 tables
Appendix: 10 pages
Abstract

Classical PAC generalization bounds on the prediction risk of a classifier are insufficient to provide theoretical guarantees on fairness when the goal is to learn models balancing predictive risk and fairness constraints. We propose a PAC-Bayesian framework for deriving generalization bounds for fairness, covering both stochastic and deterministic classifiers. For stochastic classifiers, we derive a fairness bound using standard PAC-Bayes techniques. For deterministic classifiers, to which usual PAC-Bayes arguments do not directly apply, we leverage a recent advance in PAC-Bayes to extend the fairness bound beyond the stochastic setting. Our framework has two advantages: (i) it applies to a broad class of fairness measures that can be expressed as a risk discrepancy, and (ii) it leads to a self-bounding algorithm in which the learning procedure directly optimizes a trade-off between generalization bounds on the prediction risk and on the fairness measure. We empirically evaluate our framework with three classical fairness measures, demonstrating not only its usefulness but also the tightness of our bounds.
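To make the "risk discrepancy" notion concrete, here is a minimal sketch (not taken from the paper, and using hypothetical function names) of one classical fairness measure, demographic parity, written as the absolute difference between a group-conditional rate computed on each of two sensitive groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Fairness measure expressed as a risk discrepancy: the absolute
    difference in positive-prediction rates between two sensitive groups
    (encoded 0 and 1). A perfectly fair classifier has a gap of 0."""
    rate_group0 = np.mean(y_pred[group == 0] == 1)
    rate_group1 = np.mean(y_pred[group == 1] == 1)
    return abs(rate_group0 - rate_group1)

# Toy example (illustrative data, not from the paper):
# the classifier predicts positively for 3/4 of group 0
# but only 1/4 of group 1.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(y_pred, group)  # 0.5
```

Other measures mentioned in the fairness literature, such as equal opportunity or equalized odds, fit the same template by conditioning the two rates on the true label as well, which is what makes a single discrepancy-based bound cover a broad class of measures.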
