AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

3 October 2018
Rachel K. E. Bellamy
Kuntal Dey
Michael Hind
Samuel C. Hoffman
Stephanie Houde
Kalapriya Kannan
P. Lohia
Jacquelyn Martino
S. Mehta
Aleksandra Mojsilović
Seema Nagar
Karthikeyan N. Ramamurthy
John T. Richards
Diptikalyan Saha
P. Sattigeri
Moninder Singh
Kush R. Varshney
Yunfeng Zhang
FaML · SyDa
ArXiv (abs) · PDF · HTML · GitHub (2589★)
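To make concrete the kind of measurement the toolkit performs, here is a minimal sketch (my own illustration, not the AIF360 API) of one standard group-fairness metric it reports: statistical parity difference, the gap in favorable-outcome rates between the unprivileged and privileged groups. The function name and toy data below are hypothetical.

```python
def statistical_parity_difference(labels, groups, favorable=1, privileged=1):
    """P(y = favorable | unprivileged) - P(y = favorable | privileged).

    A value of 0 indicates parity; negative values mean the unprivileged
    group receives the favorable outcome less often.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    rate = lambda ys: sum(y == favorable for y in ys) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy example: 4 privileged (group 1) and 4 unprivileged (group 0) individuals.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(labels, groups))  # 0.25 - 0.75 = -0.5
```

In AIF360 itself the analogous computation is exposed through dataset-metric classes rather than a free function; the sketch only shows the underlying arithmetic.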

Papers citing "AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias"

50 / 393 papers shown
Practical Guide for Causal Pathways and Sub-group Disparity Analysis
Farnaz Kohankhaki
Shaina Raza
Oluwanifemi Bamgbose
D. Pandya
Elham Dolatabadi
CML
347
0
0
02 Jul 2024
FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models
Ruinan Jin
Zikang Xu
Yuan Zhong
Qiongsong Yao
Qi Dou
S. Kevin Zhou
Xiaoxiao Li
VLM
385
42
0
01 Jul 2024
OxonFair: A Flexible Toolkit for Algorithmic Fairness
Eoin Delaney
Zihao Fu
Sandra Wachter
Brent Mittelstadt
Chris Russell
FaML
258
9
0
30 Jun 2024
AI Data Readiness Inspector (AIDRIN) for Quantitative Assessment of Data Readiness for AI
Kaveen Hiniduma
Suren Byna
J. L. Bez
Ravi Madduri
348
12
0
27 Jun 2024
FairX: A comprehensive benchmarking tool for model analysis using fairness, utility, and explainability
Md Fahim Sikder
R. Ramachandranpillai
Daniel de Leng
Fredrik Heintz
364
4
0
20 Jun 2024
Fairness-Optimized Synthetic EHR Generation for Arbitrary Downstream Predictive Tasks
Mirza Farhan Bin Tarek
Raphael Poulain
Rahmatollah Beheshti
SyDa
484
2
0
04 Jun 2024
The Life Cycle of Large Language Models: A Review of Biases in Education
Jinsook Lee
Yann Hicke
Renzhe Yu
Christopher A. Brooks
René F. Kizilcec
AI4Ed
275
4
0
03 Jun 2024
Resource-constrained Fairness
Sofie Goethals
Eoin Delaney
Brent Mittelstadt
Christopher Russell
FaML
658
1
0
03 Jun 2024
Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems
Djalel Benbouzid
Christiane Plociennik
Laura Lucaj
Mihai Maftei
Iris Merget
A. Burchardt
Marc P. Hauer
Abdeldjallil Naceri
Patrick van der Smagt
MLAU
139
0
0
21 May 2024
Trusting Fair Data: Leveraging Quality in Fairness-Driven Data Removal Techniques
Manh Khoi Duong
Stefan Conrad
133
0
0
21 May 2024
Aequitas Flow: Streamlining Fair ML Experimentation
Sérgio Jesus
Pedro Saleiro
Ines Oliveira e Silva
Beatriz M. Jorge
Rita P. Ribeiro
João Gama
P. Bizarro
Rayid Ghani
143
8
0
09 May 2024
Individual Fairness Through Reweighting and Tuning
A. J. Mahamadou
Lea Goetz
Russ B. Altman
211
0
0
02 May 2024
How Could AI Support Design Education? A Study Across Fields Fuels Situating Analytics
Ajit Jain
Andruid Kerne
Hannah Fowler
Jinsil Seo
Galen Newman
Nic Lupfer
Aaron Perrine
111
3
0
26 Apr 2024
Identifying Fairness Issues in Automatically Generated Testing Content
Kevin Stowe
Benny Longwill
Alyssa Francis
Tatsuya Aoyama
Debanjan Ghosh
Swapna Somasundaran
204
4
0
23 Apr 2024
The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment
Nari Johnson
Sanika Moharana
Christina Harrington
Nazanin Andalibi
Hoda Heidari
Motahhare Eslami
247
12
0
21 Apr 2024
OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning
Vincent Grari
Marcin Detyniecki
175
0
0
16 Apr 2024
AI Competitions and Benchmarks: Dataset Development
Romain Egele
Julio C. S. Jacques Junior
Jan N. van Rijn
Isabelle M Guyon
Xavier Baró
Albert Clapés
Dali Wang
Sergio Escalera
T. Moeslund
Jun Wan
173
0
0
15 Apr 2024
Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality
Khadija Zanna
Akane Sano
FaML
191
4
0
12 Apr 2024
The Necessity of AI Audit Standards Boards
David Manheim
Sammy Martin
Mark Bailey
Mikhail Samin
Ross Greutzmacher
216
21
0
11 Apr 2024
Data Readiness for AI: A 360-Degree Survey
Kaveen Hiniduma
Suren Byna
J. L. Bez
184
20
0
08 Apr 2024
Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead
Irene Pagliai
G. V. Boven
Tosin Adewumi
Lama Alkhaled
Namrata Gurung
Isabella Sodergren
Elisa Barney
184
2
0
07 Apr 2024
Procedural Fairness in Machine Learning
Ziming Wang
Changwu Huang
Xin Yao
FaML
189
3
0
02 Apr 2024
Application of the NIST AI Risk Management Framework to Surveillance Technology
Nandhini Swaminathan
David Danks
78
5
0
22 Mar 2024
A resource-constrained stochastic scheduling algorithm for homeless street outreach and gleaning edible food
Conor M. Artman
Aditya Mate
Ezinne Nwankwo
A. Heching
Tsuyoshi Idé
...
Kush R. Varshney
Lauri Goldkind
Gidi Kroch
Jaclyn Sawyer
Ian Watson
222
0
0
15 Mar 2024
Farsight: Fostering Responsible AI Awareness During AI Application Prototyping
Zijie J. Wang
Chinmay Kulkarni
Lauren Wilcox
Michael Terry
Michael A. Madaio
310
71
0
23 Feb 2024
Understanding the Dataset Practitioners Behind Large Language Model Development
Crystal Qian
Emily Reif
Minsuk Kahng
248
3
0
21 Feb 2024
Fairness Risks for Group-conditionally Missing Demographics
Kaiqi Jiang
Wenzhe Fan
Mao Li
Xinhua Zhang
409
0
0
20 Feb 2024
Exploring a Behavioral Model of "Positive Friction" in Human-AI Interaction
Zeya Chen
Ruth Schmidt
182
11
0
15 Feb 2024
Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
Yongchen Zhou
Richard Jiang
294
5
0
07 Feb 2024
Reranking individuals: The effect of fair classification within-groups
S. Goethals
T. Calders
FaML
259
1
0
24 Jan 2024
Falcon: Fair Active Learning using Multi-armed Bandits
Proceedings of the VLDB Endowment (PVLDB), 2024
Ki Hyun Tae
Hantian Zhang
Jaeyoung Park
Kexin Rong
Steven Euijong Whang
FaML
315
6
0
23 Jan 2024
Achieve Fairness without Demographics for Dermatological Disease Diagnosis
Ching-Hao Chiu
Yu-Jen Chen
Yawen Wu
Yiyu Shi
Tsung-Yi Ho
136
10
0
16 Jan 2024
Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation
Bhushan Chaudhary
Anubha Pandey
Deepak L. Bhatt
Darshika Tiwari
211
2
0
26 Dec 2023
Comprehensive Validation on Reweighting Samples for Bias Mitigation via AIF360
Christina Hastings Blow
Lijun Qian
Camille Gibson
Pamela Obiomon
Xishuang Dong
209
12
0
19 Dec 2023
GroupMixNorm Layer for Learning Fair Models
Anubha Pandey
Aditi Rai
Maneet Singh
Deepak L. Bhatt
Tanmoy Bhowmik
238
0
0
19 Dec 2023
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
International Conference on Machine Learning (ICML), 2023
Collin Burns
Pavel Izmailov
Jan Hendrik Kirchner
Bowen Baker
Leo Gao
...
Adrien Ecoffet
Manas Joglekar
Jan Leike
Ilya Sutskever
Jeff Wu
ELM
344
382
0
14 Dec 2023
Testing Correctness, Fairness, and Robustness of Speech Emotion Recognition Models
Anna Derington
H. Wierstorf
Ali Özkil
F. Eyben
Felix Burkhardt
Björn W. Schuller
354
2
0
11 Dec 2023
GELDA: A generative language annotation framework to reveal visual biases in datasets
Krish Kabra
Kathleen M. Lewis
Guha Balakrishnan
VLM
164
1
0
29 Nov 2023
Automated discovery of trade-off between utility, privacy and fairness in machine learning models
Bogdan Ficiu
Neil D. Lawrence
Andrei Paleyes
193
2
0
27 Nov 2023
Fair Enough? A map of the current limitations of the requirements to have "fair" algorithms
Alessandro Castelnovo
Nicole Inverardi
Gabriele Nanino
Ilaria Giuseppina Penco
D. Regoli
FaML
278
4
0
21 Nov 2023
Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms
Kristof Meding
Thilo Hagendorff
150
8
0
12 Nov 2023
Regression with Cost-based Rejection
Xin Cheng
Yuzhou Cao
Haobo Wang
Jianguo Huang
Bo An
Lei Feng
OOD
247
10
0
08 Nov 2023
fairret: a Framework for Differentiable Fairness Regularization Terms
International Conference on Learning Representations (ICLR), 2023
Maarten Buyl
Marybeth Defrance
T. D. Bie
FedML
258
8
0
26 Oct 2023
Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Yanchen Liu
Srishti Gautam
Jiaqi Ma
Himabindu Lakkaraju
LMTD
218
20
0
23 Oct 2023
She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models
Maximus Powers
Oluwanifemi Bamgbose
Shaina Raza
ALM · ELM
373
1
0
20 Oct 2023
Identifying and examining machine learning biases on Adult dataset
Sahil Girhepuje
FaML
128
4
0
13 Oct 2023
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
International Conference on Human Factors in Computing Systems (CHI), 2023
Hao-Ping Lee
Yu-Ju Yang
Thomas Serban Von Davier
Jodi Forlizzi
Sauvik Das
238
93
0
11 Oct 2023
Fair Classifiers that Abstain without Harm
International Conference on Learning Representations (ICLR), 2023
Tongxin Yin
Jean-François Ton
Ruocheng Guo
Yuanshun Yao
Mingyan Liu
Yang Liu
162
6
0
09 Oct 2023
Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features
Hadi Elzayn
Emily Black
Patrick Vossler
Nathanael Jo
Jacob Goldin
Daniel E. Ho
140
7
0
02 Oct 2023
Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models
Duanyu Feng
Yongfu Dai
Jimin Huang
Yifang Zhang
Qianqian Xie
Weiguang Han
Zhengyu Chen
Alejandro Lopez-Lira
Hao Wang
250
19
0
01 Oct 2023