Measuring Social Biases in Masked Language Models by Proxy of Prediction Quality

21 February 2024
Rahul Zalkikar
Kanchan Chandra
Abstract

Transformer language models have achieved state-of-the-art performance on a variety of natural language tasks but have been shown to encode unwanted biases. We evaluate the social biases encoded by transformers trained with the masked language modeling (MLM) objective, using proposed proxy functions within an iterative masking experiment to measure the quality of the models' predictions and to assess their preference for disadvantaged versus advantaged groups. We find that all models encode concerning social biases. We compare our bias estimates with those produced by other evaluation methods on benchmark datasets and assess their alignment with human-annotated biases. Extending previous work, we evaluate the social biases introduced after retraining an MLM under the masked language modeling objective. The proposed measures, which are based on the relative preference for biased sentences between models, yield more accurate and sensitive estimates of the biases introduced by retraining, while other methods tend to underestimate biases after retraining on sentences biased toward disadvantaged groups.
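
For illustration, below is a minimal sketch of an iterative masking score for an MLM using the HuggingFace transformers library. This is an assumption about the general setup, not the paper's method: the paper's actual proxy functions and preference measures are defined in the full text, while this sketch uses a common pseudo-log-likelihood formulation in which each token is masked in turn and scored.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Model choice is illustrative; the paper evaluates several MLMs.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def iterative_masking_score(sentence: str) -> float:
    """Sum of log-probabilities of each token when it alone is masked."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the [CLS] and [SEP] special tokens at the ends.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# In this illustrative setup, a higher score for one sentence of a minimally
# different pair indicates the model's preference for that sentence.
s_a = iterative_masking_score("The doctor said he would be late.")
s_b = iterative_masking_score("The doctor said she would be late.")
print(s_a, s_b)

Comparing such scores between an original and a retrained model on the same sentence pairs would give a notion of relative preference in the spirit of the abstract, though again, the specific estimators are those of the paper.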

@article{zalkikar2025_2402.13954,
  title={Measuring Social Biases in Masked Language Models by Proxy of Prediction Quality},
  author={Rahul Zalkikar and Kanchan Chandra},
  journal={arXiv preprint arXiv:2402.13954},
  year={2025}
}