ResearchTrend.AI

arXiv: 2203.09192
Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists

17 March 2022
Giuseppe Attanasio, Debora Nozza, Dirk Hovy, Elena Baralis

Papers citing "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists"

13 / 13 papers shown
The Lou Dataset -- Exploring the Impact of Gender-Fair Language in German Text Classification
Andreas Waldis, Joel Birrer, Anne Lauscher, Iryna Gurevych
26 Sep 2024
Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models
Hila Gonen, Terra Blevins, Alisa Liu, Luke Zettlemoyer, Noah A. Smith
12 Aug 2024
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Guangliang Liu, Milad Afshari, Xitong Zhang, Zhiyu Xue, Avrajit Ghosh, Bidhan Bashyal, Rongrong Wang, K. Johnson
06 Jun 2024
Progressive Feature Self-reinforcement for Weakly Supervised Semantic Segmentation
Jingxuan He, Lechao Cheng, Chaowei Fang, Zunlei Feng, Tingting Mu, Min-Gyoo Song
14 Dec 2023
Weigh Your Own Words: Improving Hate Speech Counter Narrative Generation via Attention Regularization
Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini
05 Sep 2023
A Survey on Fairness in Large Language Models
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang
20 Aug 2023
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models
Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, Dirk Hovy
02 Aug 2023
Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis
Zhiyu Jin, Xuli Shen, Bin Li, Xiangyang Xue
14 Jun 2023
Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation
Minwoo Lee, Hyukhun Koh, Kang-il Lee, Dongdong Zhang, Minsu Kim, Kyomin Jung
23 May 2023
Should We Attend More or Less? Modulating Attention for Fairness
A. Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar
22 May 2023
Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection
Tulika Bose, Nikolaos Aletras, Irina Illina, Dominique Fohr
18 Sep 2022
Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021
The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
03 Sep 2019