ResearchTrend.AI
arXiv:2110.10389
Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias
20 October 2021
Sharat Agarwal, Sumanyu Muku, Saket Anand, Chetan Arora

Papers citing "Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias"

9 of 9 papers shown
Exploiting Contextual Uncertainty of Visual Data for Efficient Training of Deep Models
Sharat Agarwal
04 Nov 2024
GradBias: Unveiling Word Influence on Bias in Text-to-Image Generative Models
Moreno D'Incà, E. Peruzzo, Massimiliano Mancini, Xingqian Xu, Humphrey Shi, N. Sebe
29 Aug 2024
Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes
Yusuke Hirota, Jerone T. A. Andrews, Dora Zhao, Orestis Papakyriakopoulos, Apostolos Modas, Yuta Nakashima, Alice Xiang
04 Jul 2024
OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
Moreno D'Incà, E. Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, N. Sebe
11 Apr 2024
Improving Fairness using Vision-Language Driven Image Augmentation
Moreno D'Incà, Christos Tzelepis, Ioannis Patras, N. Sebe
02 Nov 2023
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play
J. Liu, Krishnamurthy Dvijotham, Jihyeon Janel Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran
11 Feb 2023
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey
Otávio Parraga, Martin D. Móre, C. M. Oliveira, Nathan Gavenski, L. S. Kupssinskü, Adilson Medronha, L. V. Moura, Gabriel S. Simões, Rodrigo C. Barros
10 Nov 2022
Men Also Do Laundry: Multi-Attribute Bias Amplification
Dora Zhao, Jerone T. A. Andrews, Alice Xiang
21 Oct 2022
Fairness and Bias in Robot Learning
Laura Londoño, Juana Valeria Hurtado, Nora Hertz, P. Kellmeyer, S. Voeneky, Abhinav Valada
07 Jul 2022