A Fine-grained Interpretability Evaluation Benchmark for Neural NLP

23 May 2022
Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua-Hong Wu, Haifeng Wang
Communities: ELM
Links: ArXiv · PDF · HTML

Papers citing "A Fine-grained Interpretability Evaluation Benchmark for Neural NLP"

8 / 8 papers shown

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
17 Jun 2024 · 45 · 0 · 0

Explainable Causal Analysis of Mental Health on Social Media Data
Chandni Saxena, Muskan Garg, G. Saxena
CML · 16 Oct 2022 · 29 · 8 · 0

Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
RALM · 06 Jan 2021 · 250 · 673 · 0

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
LRM · 04 Dec 2018 · 257 · 620 · 0

Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018 · 245 · 914 · 0

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018 · 297 · 6,959 · 0

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis, Tommi Jaakkola
CML · 06 Jul 2017 · 229 · 201 · 0

A Survey on Deep Learning in Medical Image Analysis
G. Litjens, Thijs Kooi, B. Bejnordi, A. Setio, F. Ciompi, Mohsen Ghafoorian, Jeroen van der Laak, Bram van Ginneken, C. I. Sánchez
OOD · 19 Feb 2017 · 295 · 10,618 · 0