ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2109.11708
Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding

24 September 2021
Zexue He, Bodhisattwa Prasad Majumder, Julian McAuley
ArXiv (abs) · PDF · HTML

Papers citing "Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding"

9 papers shown:
GeNRe: A French Gender-Neutral Rewriting System Using Collective Nouns
Enzo Doyen, Amalia Todirascu
29 May 2025

Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
Damin Zhang, Yi Zhang, Geetanjali Bihani, Julia Taylor Rayz
06 May 2024

Analyzing Sentiment Polarity Reduction in News Presentation through Contextual Perturbation and Large Language Models
Alapan Kuila, Somnath Jena, Sudeshna Sarkar, P. Chakrabarti
03 Feb 2024 · AAML

Synthetic Pre-Training Tasks for Neural Machine Translation
Zexue He, Graeme W. Blackwood, Yikang Shen, Julian McAuley, Rogerio Feris
19 Dec 2022

Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition
Shuguang Chen, Leonardo Neves, Thamar Solorio
14 Oct 2022

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov
14 Oct 2022 · ELM

Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He, Yu Wang, Julian McAuley, Bodhisattwa Prasad Majumder
14 Oct 2022

InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions
Bodhisattwa Prasad Majumder, Zexue He, Julian McAuley
14 Oct 2022

Text Style Transfer for Bias Mitigation using Masked Language Modeling
E. Tokpo, T. Calders
21 Jan 2022