
| Title | Venue |
|---|---|
| Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways | Conference on Fairness, Accountability and Transparency (FAccT), 2024 |
| Conceptor-Aided Debiasing of Large Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Bridging Fairness and Environmental Sustainability in Natural Language Processing | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Fair NLP Models with Differentially Private Text Encoders | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 |
| Theories of "Gender" in NLP Bias Research | Conference on Fairness, Accountability and Transparency (FAccT), 2022 |
| Sustainable Modular Debiasing of Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021 |
| A Survey of Race, Racism, and Anti-Racism in NLP | Annual Meeting of the Association for Computational Linguistics (ACL), 2021 |
| RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models | Annual Meeting of the Association for Computational Linguistics (ACL), 2021 |
| Unmasking the Mask -- Evaluating Social Biases in Masked Language Models | AAAI Conference on Artificial Intelligence (AAAI), 2021 |
| Debiasing Pre-trained Contextualised Embeddings | Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021 |
| Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation | Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 |
| Language (Technology) is Power: A Critical Survey of "Bias" in NLP | Annual Meeting of the Association for Computational Linguistics (ACL), 2020 |
| Joint Multiclass Debiasing of Word Embeddings | International Symposium on Methodologies for Intelligent Systems (ISMIS), 2020 |