
Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals
arXiv:2405.20152
Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, S. Kiritchenko
30 May 2024

Papers citing "Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals"

40 papers
Person-Centric Annotations of LAION-400M: Auditing Bias and Its Transfer to Models
Leander Girrbach, Stephan Alaniz, Genevieve Smith, Trevor Darrell, Zeynep Akata
04 Oct 2025
Bias Beyond Demographics: Probing Decision Boundaries in Black-Box LVLMs via Counterfactual VQA
Zaiying Zhao, Toshihiko Yamasaki
05 Aug 2025
A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models
Junhyuk Choi, Minju Kim, Yeseon Hong, Bugeun Kim
27 May 2025
Interpreting Social Bias in LVLMs via Information Flow Analysis and Multi-Round Dialogue Evaluation
Zhengyang Ji, Yifan Jia, Shang Gao, Yutao Yue
27 May 2025
When Algorithms Play Favorites: Lookism in the Generation and Perception of Faces
Miriam Doh, Aditya Gulati, M. Mancas, Nuria Oliver
20 May 2025
When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large Language Models in Cultural Mixture Contexts
Jun Seong Kim, Kyaw Ye Thu, Javad Ismayilzada, Junyeong Park, Eunsu Kim, Huzama Ahmad, Na Min An, Hyunjung Shim, Alice Oh
21 Mar 2025
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Souvik Kundu, Anahita Bhiwandiwalla, Sungduk Yu, Phillip Howard, Tiep Le, S. N. Sridhar, David Cobbley, Hao Kang, Vasudev Lal
06 Mar 2025
Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs)
International Conference on Learning Representations (ICLR), 2024
Leander Girrbach, Yiran Huang, Stephan Alaniz, Trevor Darrell, Zeynep Akata
25 Oct 2024
Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal, Phillip Howard
17 Oct 2024
Lookism: The overlooked bias in computer vision
Aditya Gulati, Bruno Lepri, Nuria Oliver
21 Aug 2024
BiasDora: Exploring Hidden Biased Associations in Vision-Language Models
Chahat Raj, A. Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu
02 Jul 2024
LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model
Musashi Hinck, Matthew Lyle Olson, David Cobbley, Shao-Yen Tseng, Vasudev Lal
29 Mar 2024
A Unified Framework and Dataset for Assessing Societal Bias in Vision-Language Models
Ashutosh Sathe, Prachi Jain, Sunayana Sitaram
21 Feb 2024
Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images
Kathleen C. Fraser, S. Kiritchenko
08 Feb 2024
SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Computer Vision and Pattern Recognition (CVPR), 2023
Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Anahita Bhiwandiwalla, Vasudev Lal
30 Nov 2023
Improved Baselines with Visual Instruction Tuning
Computer Vision and Pattern Recognition (CVPR), 2023
Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee
05 Oct 2023
VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
Neural Information Processing Systems (NeurIPS), 2023
Elizaveta Semenova, F. G. Abrantes, Hanwen Zhu, Grace A. Sodunke, Aleksandar Shtedritski, Hannah Rose Kirk
21 Jun 2023
Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
Conference on Fairness, Accountability and Transparency (FAccT), 2023
Katelyn Mei, Sonia Fereidooni, Aylin Caliskan
08 Jun 2023
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
Neural Information Processing Systems (NeurIPS), 2023
Wenliang Dai, Junnan Li, Dongxu Li, A. M. H. Tiong, Junqi Zhao, Weisheng Wang, Boyang Albert Li, Pascale Fung, Steven C. H. Hoi
11 May 2023
On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Luiza Amador Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker
24 Apr 2023
Visual Instruction Tuning
Neural Information Processing Systems (NeurIPS), 2023
Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
17 Apr 2023
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu
29 Mar 2023
DeAR: Debiasing Vision-Language Models with Additive Residuals
Computer Vision and Pattern Recognition (CVPR), 2023
Ashish Seth, Mayur Hemani, Chirag Agarwal
18 Mar 2023
MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Sepehr Janghorbani, Gerard de Melo
16 Mar 2023
GPT-4 Technical Report
OpenAI: Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, ..., Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph
15 Mar 2023
Is ChatGPT a Good NLG Evaluator? A Preliminary Study
Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou
07 Mar 2023
How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Hritik Bansal, Da Yin, Masoud Monajatipoor, Kai-Wei Chang
27 Oct 2022
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
18 May 2022
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Transactions of the Association for Computational Linguistics (TACL), 2021
Paula Czarnowska, Yogarshi Vyas, Kashif Shah
28 Jun 2021
Computer Vision and Conflicting Values: Describing People with Automated Alt Text
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2021
Margot Hanley, Solon Barocas, K. Levy, Shiri Azenkot, Helen Nissenbaum
26 May 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Tejas Srinivasan, Yonatan Bisk
18 Apr 2021
Learning Transferable Visual Models From Natural Language Supervision
International Conference on Machine Learning (ICML), 2021
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
26 Feb 2021
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach
28 May 2020
StereoSet: Measuring stereotypical bias in pretrained language models
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Moin Nadeem, Anna Bethke, Siva Reddy
20 Apr 2020
Toward Gender-Inclusive Coreference Resolution
Annual Meeting of the Association for Computational Linguistics (ACL), 2019
Yang Trista Cao, Hal Daumé
30 Oct 2019
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
Knowledge Discovery and Data Mining (KDD), 2019
S. Geyik, Stuart Ambler, K. Kenthapadi
30 Apr 2019
Counterfactual Fairness in Text Classification through Robustness
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2018
Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, Alex Beutel
27 Sep 2018
Gender Bias in Coreference Resolution
Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
25 Apr 2018
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
North American Chapter of the Association for Computational Linguistics (NAACL), 2018
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang
18 Apr 2018