Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals
arXiv: 2405.20152
30 May 2024
Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, S. Kiritchenko
Papers citing "Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals" (10 papers)
When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large Language Models in Cultural Mixture Contexts
Jun Seong Kim, Kyaw Ye Thu, Javad Ismayilzada, Junyeong Park, Eunsu Kim, Huzama Ahmad, Na Min An, James Thorne, Alice H. Oh
21 Mar 2025
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression
Souvik Kundu, Anahita Bhiwandiwalla, Sungduk Yu, Phillip Howard, Tiep Le, S. N. Sridhar, David Cobbley, Hao Kang, Vasudev Lal
06 Mar 2025
Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs)
Leander Girrbach, Yiran Huang, Stephan Alaniz, Trevor Darrell, Zeynep Akata
25 Oct 2024
Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal, Phillip Howard
17 Oct 2024
Lookism: The overlooked bias in computer vision
Aditya Gulati, Bruno Lepri, Nuria Oliver
21 Aug 2024
BiasDora: Exploring Hidden Biased Associations in Vision-Language Models
Chahat Raj, A. Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu
02 Jul 2024
A Unified Framework and Dataset for Assessing Societal Bias in Vision-Language Models
Ashutosh Sathe, Prachi Jain, Sunayana Sitaram
21 Feb 2024
SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Anahita Bhiwandiwalla, Vasudev Lal
30 Nov 2023
MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models
Sepehr Janghorbani, Gerard de Melo
16 Mar 2023
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith
Melissa Hall
Melanie Kambadur
Eleonora Presani
Adina Williams
46
103
0
18 May 2022