
arXiv: 2002.05474

Metric-Free Individual Fairness in Online Learning

13 February 2020
Yahav Bechavod
Christopher Jung
Zhiwei Steven Wu
Community: FaML
Abstract

We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly. Unlike prior work on individual fairness, we do not assume that the similarity measure among individuals is known, nor do we assume that such a measure takes a certain parametric form. Instead, we leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure. In each round, the auditor examines the learner's decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and a sub-linear number of fairness violations. In the stochastic setting where the data are drawn independently from a distribution, we also establish PAC-style fairness and accuracy generalization guarantees for the uniform policy over time, qualitatively matching the bounds of Yona and Rothblum [2018] while removing several of their assumptions. Our results resolve an open question by Gillen et al. [2018] by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure.
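The round structure the abstract describes (the learner makes decisions, the auditor flags at most one pair of similar individuals treated dissimilarly, the learner updates) can be sketched as below. This is a hedged illustration, not the paper's algorithm: the clipped linear model, the squared-loss update, the pair-penalty step, and the auditor's hidden Euclidean metric with thresholds `tau` and `gap` are all assumptions made for the demo. Consistent with the metric-free setting, the learner never sees the metric, only the flagged pair.

```python
def predict(w, x):
    # Toy probabilistic classifier: linear score clipped to [0, 1].
    s = sum(wi * xi for wi, xi in zip(w, x))
    return max(0.0, min(1.0, s))

def auditor(xs, preds, tau=0.2, gap=0.3):
    # Hypothetical stand-in auditor. It privately uses Euclidean distance
    # as its similarity measure and flags one pair of similar individuals
    # (distance <= tau) whose treatments differ by more than gap.
    # The learner only ever sees the returned index pair.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            dist = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j])) ** 0.5
            if dist <= tau and abs(preds[i] - preds[j]) > gap:
                return (i, j)
    return None

def online_round(w, xs, ys, lr=0.1):
    # One round: predict on the arriving individuals, take an accuracy
    # step, then correct any fairness violation the auditor reports.
    preds = [predict(w, x) for x in xs]
    # Accuracy update: gradient step on squared loss (an assumed choice).
    for x, y, p in zip(xs, ys, preds):
        for k in range(len(w)):
            w[k] -= lr * (p - y) * x[k]
    # Fairness update: if a pair (i, j) is flagged, nudge the weights so
    # the two scores move closer together.
    pair = auditor(xs, preds)
    if pair is not None:
        i, j = pair
        direction = 1.0 if preds[i] > preds[j] else -1.0
        for k in range(len(w)):
            w[k] -= lr * direction * (xs[i][k] - xs[j][k])
    return w, pair
```

The paper's actual contribution is a reduction: this auditor-in-the-loop protocol is converted into an instance of standard online classification, so any no-regret online learner can be plugged in; the ad-hoc gradient steps above merely make the round structure concrete.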
