arXiv:2205.03302
Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection

6 May 2022
Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko
Tag: FAtt

Papers citing "Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection"

4 papers shown
Promoting Security and Trust on Social Networks: Explainable Cyberbullying Detection Using Large Language Models in a Stream-Based Machine Learning Framework
Silvia García-Méndez, Francisco de Arriba-Pérez
07 Apr 2025
InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller
09 Oct 2023
Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
I. Nejadgholi, Esma Balkir, Kathleen C. Fraser, S. Kiritchenko
19 Oct 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
08 Jun 2022