ResearchTrend.AI
Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

Neural Information Processing Systems (NeurIPS), 2022
9 April 2022
Michael Lohaus, Matthäus Kleindessner, K. Kenthapadi, Francesco Locatello, Chris Russell
Links: arXiv (abs) · PDF · HTML · GitHub (1★)
Papers citing "Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks"

7 / 7 papers shown
Fair Text Classification via Transferable Representations
Thibaud Leteno, Michael Perrot, Charlotte Laclau, Antoine Gourru, Christophe Gravier
FaML · 10 Mar 2025
Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey)
K. Kenthapadi, M. Sameki, Ankur Taly
HILM, ELM, AILaw · 10 Jul 2024
OxonFair: A Flexible Toolkit for Algorithmic Fairness
Eoin Delaney, Zihao Fu, Sandra Wachter, Brent Mittelstadt, Chris Russell
FaML · 30 Jun 2024
Resource-constrained Fairness
Sofie Goethals, Eoin Delaney, Brent Mittelstadt, Christopher Russell
FaML · 03 Jun 2024
A Note on Bias to Complete
Jia Xu, Mona Diab
18 Feb 2024
Are demographically invariant models and representations in medical imaging fair?
Eike Petersen, Enzo Ferrante, M. Ganz, Aasa Feragen
MedIm · 02 May 2023
I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data
AAAI Conference on Artificial Intelligence (AAAI), 2022
Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci
25 Oct 2022