Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao, Angelina Wang, Olga Russakovsky
16 June 2021 · arXiv 2106.08503

Papers citing "Understanding and Evaluating Racial Biases in Image Captioning" (50 of 79 papers shown)

Classifier-to-Bias: Toward Unsupervised Automatic Bias Detection for Visual Classifiers
Quentin Guimard, Moreno D'Incà, Massimiliano Mancini, Elisa Ricci · SSL · 29 Apr 2025

Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks
Mohammad Saleh, Azadeh Tabatabaei · 14 Apr 2025

Group-based Distinctive Image Captioning with Memory Difference Encoding and Attention
Jiuniu Wang, Wenjia Xu, Qingzhong Wang, Antoni B. Chan · 03 Apr 2025

Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy · 25 Mar 2025

MASS: Overcoming Language Bias in Image-Text Matching
Jiwan Chung, Seungwon Lim, Sangkyu Lee, Youngjae Yu · VLM · 20 Jan 2025

Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal, Phillip Howard · 17 Oct 2024

A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks
Hoin Jung, T. Jang, Xiaoqian Wang · VLM · 10 Oct 2024

Civiverse: A Dataset for Analyzing User Engagement with Open-Source Text-to-Image Models
Maria-Teresa De Rosa Palmini, Laura Wagner, Eva Cetinic · 10 Aug 2024

Fairness and Bias Mitigation in Computer Vision: A Survey
Sepehr Dehdashtian, Ruozhen He, Yi Li, Guha Balakrishnan, Nuno Vasconcelos, Vicente Ordonez, Vishnu Naresh Boddeti · 05 Aug 2024

MultiHateClip: A Multilingual Benchmark Dataset for Hateful Video Detection on YouTube and Bilibili
Han Wang, Tan Rui Yang, Usman Naseem, Roy Ka-Wei Lee · 28 Jul 2024

Position: Measure Dataset Diversity, Don't Just Claim It
Dora Zhao, Jerone T. A. Andrews, Orestis Papakyriakopoulos, Alice Xiang · 11 Jul 2024

Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes
Yusuke Hirota, Jerone T. A. Andrews, Dora Zhao, Orestis Papakyriakopoulos, Apostolos Modas, Yuta Nakashima, Alice Xiang · 04 Jul 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
Yisong Xiao, Aishan Liu, QianJia Cheng, Zhenfei Yin, Siyuan Liang, Jiapeng Li, Jing Shao, Xianglong Liu, Dacheng Tao · 30 Jun 2024

From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment
Yusuke Hirota, Ryo Hachiuma, Chao-Han Huck Yang, Yuta Nakashima · VLM · 20 Jun 2024

They're All Doctors: Synthesizing Diverse Counterfactuals to Mitigate Associative Bias
Salma Abdel Magid, Jui-Hsien Wang, Kushal Kafle, Hanspeter Pfister · 17 Jun 2024

A Taxonomy of Challenges to Curating Fair Datasets
Dora Zhao, M. Scheuerman, Pooja Chitre, Jerone T. A. Andrews, Georgia Panagiotidou, Shawn Walker, Kathleen H. Pine, Alice Xiang · 10 Jun 2024

Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks
Yunqi Zhang, Songda Li, Chunyuan Deng, Luyi Wang, Hui Zhao · 27 May 2024

More Distinctively Black and Feminine Faces Lead to Increased Stereotyping in Vision-Language Models
Messi H.J. Lee, Jacob M. Montgomery, Calvin K. Lai · VLM · 22 May 2024

FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication
Eric Slyman, Stefan Lee, Scott D. Cohen, Kushal Kafle · VLM · 24 Apr 2024

SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger, Fabio Pernisi, Bertie Vidgen, Dirk Hovy · ELM, KELM · 08 Apr 2024

Would Deep Generative Models Amplify Bias in Future Models?
Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima · 04 Apr 2024

A Decade's Battle on Dataset Bias: Are We There Yet?
Zhuang Liu, Kaiming He · 13 Mar 2024

CLIP the Bias: How Useful is Balancing Data in Multimodal Learning?
Ibrahim M. Alabdulmohsin, Xiao Wang, Andreas Steiner, Priya Goyal, Alexander D'Amour, Xiaohua Zhai · 07 Mar 2024

The Visual Experience Dataset: Over 200 Recorded Hours of Integrated Eye Movement, Odometry, and Egocentric Video
Michelle R. Greene, Benjamin Balas, M. Lescroart, Paul MacNeilage, Jennifer A. Hart, ..., Matthew W. Shinkle, Wentao Si, Brian Szekely, Joaquin M. Torres, Eliana Weissmann · MDE · 15 Feb 2024

Copycats: the many lives of a publicly available medical imaging dataset
Amelia Jiménez-Sánchez, Natalia-Rozalia Avlona, Dovile Juodelyte, Théo Sourget, Caroline Vang-Larsen, Anna Rogers, Hubert Dariusz Zając, V. Cheplygina · 09 Feb 2024

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images
Kathleen C. Fraser, S. Kiritchenko · 08 Feb 2024

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett · FaML · 06 Feb 2024

Adventures of Trustworthy Vision-Language Models: A Survey
Mayank Vatsa, Anubhooti Jain, Richa Singh · 07 Dec 2023

SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Anahita Bhiwandiwalla, Vasudev Lal · 30 Nov 2023

Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender
Ahmed Sabir, Lluís Padró · 29 Oct 2023

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott · 26 Oct 2023

Evaluating the Fairness of Discriminative Foundation Models in Computer Vision
Junaid Ali, Matthäus Kleindessner, F. Wenzel, Kailash Budhathoki, V. Cevher, Chris Russell · VLM · 18 Oct 2023

Mitigating stereotypical biases in text to image generative systems
Piero Esposito, Parmida Atighehchian, Anastasis Germanidis, Deepti Ghadiyaram · 10 Oct 2023

SegRCDB: Semantic Segmentation via Formula-Driven Supervised Learning
Risa Shinoda, Ryo Hayamizu, Kodai Nakashima, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka · VLM · 29 Sep 2023

Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color
William Thong, Przemyslaw K. Joniak, Alice Xiang · 10 Sep 2023

FACET: Fairness in Computer Vision Evaluation Benchmark
Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron B. Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross · VLM, EGVM · 31 Aug 2023

From Fake to Real: Pretraining on Balanced Synthetic Images to Prevent Spurious Correlations in Image Recognition
Maan Qraitem, Kate Saenko, Bryan A. Plummer · 08 Aug 2023

Dense Video Object Captioning from Disjoint Supervision
Xingyi Zhou, Anurag Arnab, Chen Sun, Cordelia Schmid · 20 Jun 2023

Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets
Brandon Smith, Miguel Farinha, S. Hall, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain · 24 May 2023

Mitigating Test-Time Bias for Fair Image Retrieval
Fanjie Kong, Shuai Yuan, Weituo Hao, Ricardo Henao · 23 May 2023

Inspecting the Geographical Representativeness of Images from Text-to-Image Models
Aparna Basu, R. Venkatesh Babu, Danish Pruthi · DiffM · 18 May 2023

Consensus and Subjectivity of Skin Tone Annotation for ML Fairness
Candice Schumann, Gbolahan O. Olanubi, Auriel Wright, Ellis P. Monk, Courtney Heldreth, Susanna Ricco · 16 May 2023

ImageCaptioner²: Image Captioner for Image Captioning Bias Amplification Assessment
Eslam Mohamed Bakr, Pengzhan Sun, Erran L. Li, Mohamed Elhoseiny · 10 Apr 2023

Model-Agnostic Gender Debiased Image Captioning
Yusuke Hirota, Yuta Nakashima, Noa Garcia · FaML · 07 Apr 2023

Exposing and Mitigating Spurious Correlations for Cross-Modal Retrieval
Jae Myung Kim, A. Sophia Koepke, Cordelia Schmid, Zeynep Akata · 06 Apr 2023

Uncurated Image-Text Datasets: Shedding Light on Demographic Bias
Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima · EGVM · 06 Apr 2023

A View From Somewhere: Human-Centric Face Representations
Jerone T. A. Andrews, Przemyslaw K. Joniak, Alice Xiang · CVBM · 30 Mar 2023

Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition
Iris Dominguez-Catena, D. Paternain, M. Galar · 28 Mar 2023

Variation of Gender Biases in Visual Recognition Models Before and After Finetuning
Jaspreet Ranjit, Tianlu Wang, Baishakhi Ray, Vicente Ordonez · 14 Mar 2023

Overwriting Pretrained Bias with Finetuning Data
Angelina Wang, Olga Russakovsky · 10 Mar 2023