
A Systematic Study of Bias Amplification

27 January 2022
Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron B. Adcock
arXiv (abs) · PDF · HTML · GitHub (9★)

Papers citing "A Systematic Study of Bias Amplification"

33 papers
Robustness of LLM-enabled vehicle trajectory prediction under data security threats
Feilong Wang, Fuqiang Liu
14 Nov 2025

Can SAEs reveal and mitigate racial biases of LLMs in healthcare?
Hiba Ahsan, Byron C. Wallace
31 Oct 2025

Wasserstein Distributionally Robust Optimization Through the Lens of Structural Causal Models and Individual Fairness. Neural Information Processing Systems (NeurIPS), 2025
A. Ehyaei, G. Farnadi, Samira Samadi
30 Sep 2025

Don't Change My View: Ideological Bias Auditing in Large Language Models
Paul Kröger, Emilio Barkett
16 Sep 2025

Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Srikant Panda, Vishnu Hari, Kalpana Panda, Amit Agarwal, Hitesh Laxmichand Patel
18 Aug 2025

AI Should Sense Better, Not Just Scale Bigger: Adaptive Sensing as a Paradigm Shift
Eunsu Baek, Keondo Park, Jeonggil Ko, Min Hwan Oh, Taesik Gong, Hyung-Sin Kim
10 Jul 2025

Bias Analysis in Unconditional Image Generative Models
Xiaofeng Zhang, Michelle Lin, Damien Scieur, Aaron Courville, Yash Goyal
10 Jun 2025

The Lock-in Hypothesis: Stagnation by Algorithm
Tianyi Qiu, Zhonghao He, Tejasveer Chugh, Max Kleiman-Weiner
06 Jun 2025

When Algorithms Play Favorites: Lookism in the Generation and Perception of Faces
Miriam Doh, Aditya Gulati, M. Mancas, Nuria Oliver
20 May 2025

When majority rules, minority loses: bias amplification of gradient descent
François Bachoc, Jérôme Bolte, Ryan Boustany, Jean-Michel Loubes
19 May 2025

Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders. North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Kshitish Ghate, Isaac Slaughter, Kyra Wilson, Mona Diab, Aylin Caliskan
11 Feb 2025

Exploring the Influence of Label Aggregation on Minority Voices: Implications for Dataset Bias and Model Training
Mugdha Pandya, Nafise Sadat Moosavi, Diana Maynard
05 Dec 2024

A dataset of questions on decision-theoretic reasoning in Newcomb-like problems
Caspar Oesterheld, Emery Cooper, Miles Kodama, Linh Chi Nguyen, Ethan Perez
15 Nov 2024

Lookism: The overlooked bias in computer vision
Aditya Gulati, Bruno Lepri, Nuria Oliver
21 Aug 2024

Native Design Bias: Studying the Impact of English Nativeness on Language Model Performance
Manon Reusens, Philipp Borchert, Jochen De Weerdt, Bart Baesens
25 Jun 2024

Mitigating Bias Using Model-Agnostic Data Attribution
Sander De Coninck, Wei-Cheng Wang, Pieter Simoens
08 May 2024

Curvature-Aligned Federated Learning (CAFe): Harmonizing Loss Landscapes for Fairness Without Demographics
Shaily Roy, Harshit Sharma, Asif Salekin
30 Apr 2024

FairGridSearch: A Framework to Compare Fairness-Enhancing Models
Shih-Chi Ma, Tatiana Ermakova, Benjamin Fabian
04 Jan 2024

Prompt-Propose-Verify: A Reliable Hand-Object-Interaction Data Generation Framework using Foundational Models
Gurusha Juneja, Sukrit Kumar
23 Dec 2023

NLP for Maternal Healthcare: Perspectives and Guiding Principles in the Age of LLMs
Maria Antoniak, Aakanksha Naik, Carla S. Alvarado, Lucy Lu Wang, Irene Y. Chen
19 Dec 2023

FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis. International Conference on Learning Representations (ICLR), 2023
Raman Dutt, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy M. Hospedales
08 Oct 2023

NLPositionality: Characterizing Design Biases of Datasets and Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Sebastin Santy, Jenny T Liang, Ronan Le Bras, Katharina Reinecke, Maarten Sap
02 Jun 2023

Auditing and Generating Synthetic Data with Controllable Trust Trade-offs. IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), 2023
Brian M. Belgodere, Pierre Dognin, Adam Ivankay, Igor Melnyk, Youssef Mroueh, ..., Mattia Rigotti, Jerret Ross, Yair Schiff, Radhika Vedpathak, Richard A. Young
21 Apr 2023

Model-Agnostic Gender Debiased Image Captioning. Computer Vision and Pattern Recognition (CVPR), 2023
Yusuke Hirota, Yuta Nakashima, Noa Garcia
07 Apr 2023

Bias mitigation techniques in image classification: fair machine learning in human heritage collections. Journal of WSCG (WSCG), 2023
Dalia Ortiz Pablo, Sushruth Badri, Erik Norén, Christoph Nötzli
20 Mar 2023

Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers
Melissa Hall, Bobbie Chern, Laura Gustafson, Denisse Ventura, Harshad Kulkarni, Candace Ross, Nicolas Usunier
16 Feb 2023

Vision-Language Models Performing Zero-Shot Tasks Exhibit Gender-based Disparities
Melissa Hall, Laura Gustafson, Aaron B. Adcock, Ishan Misra, Candace Ross
26 Jan 2023

A Comparative Analysis of Bias Amplification in Graph Neural Network Approaches for Recommender Systems
Nikzad Chizari, Niloufar Shoeibi, María N. Moreno-García
18 Jan 2023

Simplicity Bias Leads to Amplified Performance Disparities. Conference on Fairness, Accountability and Transparency (FAccT), 2022
Samuel J. Bell, Levent Sagun
13 Dec 2022

Men Also Do Laundry: Multi-Attribute Bias Amplification. International Conference on Machine Learning (ICML), 2022
Dora Zhao, Jerone T. A. Andrews, Alice Xiang
21 Oct 2022

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov
14 Oct 2022

Data Feedback Loops: Model-driven Amplification of Dataset Biases. International Conference on Machine Learning (ICML), 2022
Rohan Taori, Tatsunori B. Hashimoto
08 Sep 2022

Fairness and Explainability in Automatic Decision-Making Systems. A challenge for computer science and law. EURO Journal on Decision Processes (EJDP), 2022
Thierry Kirat, Olivia Tambou, Virginie Do, A. Tsoukiás
14 May 2022