Training individually fair ML models with Sensitive Subspace Robustness (arXiv:1907.00020)
Mikhail Yurochkin, Amanda Bower, Yuekai Sun (28 June 2019) [FaML, OOD]

Papers citing "Training individually fair ML models with Sensitive Subspace Robustness" (26 papers shown)

Local Statistical Parity for the Estimation of Fair Decision Trees
Andrea Quintanilla, Johan Van Horebeek (25 Apr 2025)

CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li (02 Jul 2024) [CoGe]

Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
Conrad Sanderson, David M. Douglas, Qinghua Lu (17 Apr 2023)

Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach
Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu (06 Mar 2023)

Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models
Rasoul Shahsavarifar, Jithu Chandran, M. Inchiosa, A. Deshpande, Mario Schlener, V. Gossain, Yara Elias, Vinaya Murali (11 Nov 2022) [FaML]

InfoOT: Information Maximizing Optimal Transport
Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis (06 Oct 2022) [OT]

iFlipper: Label Flipping for Individual Fairness
Hantian Zhang, Ki Hyun Tae, Jaeyoung Park, Xu Chu, Steven Euijong Whang (15 Sep 2022)

RMExplorer: A Visual Analytics Approach to Explore the Performance and the Fairness of Disease Risk Models on Population Subgroups
Bum Chul Kwon, U. Kartoun, S. Khurshid, Mikhail Yurochkin, Subha Maity, Deanna G. Brockman, A. Khera, P. Ellinor, S. Lubitz, Kenney Ng (14 Sep 2022)

FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks
Kiarash Mohammadi, Aishwarya Sivaraman, G. Farnadi (01 Jun 2022)

CertiFair: A Framework for Certified Global Fairness of Neural Networks
Haitham Khedr, Yasser Shoukry (20 May 2022) [FedML]

Accurate Fairness: Improving Individual Fairness without Trading Accuracy
Xuran Li, Peng Wu, Jing Su (18 May 2022) [FaML]

De-biasing "bias" measurement
K. Lum, Yunfeng Zhang, Amanda Bower (11 May 2022)

Individual Fairness Guarantees for Neural Networks
Elias Benussi, A. Patané, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska (University of Oxford) (11 May 2022)

Optimising Equal Opportunity Fairness in Model Training
Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann (05 May 2022) [FaML]

SLIDE: a surrogate fairness constraint to ensure fairness consistency
Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim (07 Feb 2022)

Latent Space Smoothing for Individually Fair Representations
Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev (26 Nov 2021) [FaML]

A Fairness Analysis on Private Aggregation of Teacher Ensembles
Cuong Tran, M. H. Dinh, Kyle Beiter, Ferdinando Fioretto (17 Sep 2021)

Fair Mixup: Fairness via Interpolation
Ching-Yao Chuang, Youssef Mroueh (11 Mar 2021)

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard (19 Oct 2020) [AAML]

A Distributionally Robust Approach to Fair Classification
Bahar Taşkesen, Viet Anh Nguyen, Daniel Kuhn, Jose H. Blanchet (18 Jul 2020) [FaML]

Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun (19 Jun 2020) [FaML]

Auditing ML Models for Individual Bias and Unfairness
Songkai Xue, Mikhail Yurochkin, Yuekai Sun (11 Mar 2020) [MLAU]

Learning Certified Individually Fair Representations
Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev (24 Feb 2020) [FaML]

Learning Adversarially Fair and Transferable Representations
David Madras, Elliot Creager, T. Pitassi, R. Zemel (17 Feb 2018) [FaML]

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio (04 Nov 2016) [AAML]

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova (24 Oct 2016) [FaML]