

arXiv:2012.00893
Evaluating Explanations: How much do explanations from the teacher aid students?
Transactions of the Association for Computational Linguistics (TACL), 2020
1 December 2020
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen

Papers citing "Evaluating Explanations: How much do explanations from the teacher aid students?"

50 / 87 papers shown
Learning from Sufficient Rationales: Analysing the Relationship Between Explanation Faithfulness and Token-level Regularisation Strategies
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
20 Nov 2025

TeSent: A Benchmark Dataset for Fairness-aware Explainable Sentiment Classification in Telugu
Vallabhaneni Raj Kumar, Ashwin S, Supriya Manna, Niladri Sett, Cheedella V S N M S Hema Harshitha, ..., Anand Kumar Sharma, Basina Deepakraj, Tanuj Sarkar, Bondada Navaneeth Krishna, Samanthapudi Shakeer
02 Aug 2025

Can human clinical rationales improve the performance and explainability of clinical text classification models?
Christoph Metzner, Shang Gao, Drahomira Herrmannova, Heidi A. Hanson
28 Jul 2025

A Necessary Step toward Faithfulness: Measuring and Improving Consistency in Free-Text Explanations
Lingjun Zhao, Hal Daumé III
25 May 2025

On the reliability of feature attribution methods for speech classification
Gaofei Shen, Hosein Mohebbi, Arianna Bisazza, Afra Alishahi, Grzegorz Chrupała
22 May 2025

VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas
Georgios Makridis, Vasileios Koukos, G. Fatouros, D. Kyriazis
06 Mar 2025
Corrections Meet Explanations: A Unified Framework for Explainable Grammatical Error Correction
Jingheng Ye, Shang Qin, Hai-Tao Zheng, Shen Wang, Qingsong Wen
24 Feb 2025

GraphNarrator: Generating Textual Explanations for Graph Neural Networks
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Bo Pan, Zhen Xiong, Guanchen Wu, Zheng Zhang, Yifei Zhang, Liang Zhao
20 Oct 2024

Explanation Regularisation through the Lens of Attributions
Pedro Ferreira, Wilker Aziz, Ivan Titov
23 Jul 2024

Data-Centric Human Preference with Rationales for Direct Preference Alignment
H. Just, Ming Jin, Anit Kumar Sahu, Huy Phan, Ruoxi Jia
19 Jul 2024

Retrieved In-Context Principles from Previous Mistakes
Hao Sun, Yong Jiang, Bo Wang, Yingyan Hou, Yan Zhang, Pengjun Xie, Fei Huang
08 Jul 2024

CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren
24 Jun 2024
Evaluating Input Feature Explanations through a Unified Diagnostic Evaluation Framework
Jingyi Sun, Pepa Atanasova, Isabelle Augenstein
21 Jun 2024

Understanding Understanding: A Pragmatic Framework Motivated by Large Language Models
Kevin Leyton-Brown, Y. Shoham
16 Jun 2024

Evaluating Saliency Explanations in NLP by Crowdsourcing
International Conference on Language Resources and Evaluation (LREC), 2024
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
17 May 2024

Explanation based Bias Decoupling Regularization for Natural Language Inference
Jianxiang Zang, Hui Liu
20 Apr 2024

The Role of Syntactic Span Preferences in Post-Hoc Explanation Disagreement
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
28 Mar 2024

RORA: Robust Free-Text Rationale Evaluation
Zhengping Jiang, Yining Lu, Hanjie Chen, Daniel Khashabi, Benjamin Van Durme, Anqi Liu
28 Feb 2024

TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance
Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang
24 Jan 2024
Generating Zero-shot Abstractive Explanations for Rumour Verification
I. Bilal, Preslav Nakov, Rob Procter, Maria Liakata
23 Jan 2024

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
20 Dec 2023

Evaluating the Utility of Model Explanations for Model Development
Shawn Im, Jacob Andreas, Yilun Zhou
10 Dec 2023

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
03 Nov 2023

REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Conference on Computational Natural Language Learning (CoNLL), 2023
Mohammad Reza Ghasemi Madani, Pasquale Minervini
22 Oct 2023

Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models
International Conference on Learning Representations (ICLR), 2023
Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
09 Oct 2023

Measuring Information in Text Explanations
Zining Zhu, Frank Rudzicz
06 Oct 2023
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
International Conference on Learning Representations (ICLR), 2023
Y. Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
01 Oct 2023

Learning by Self-Explaining
Wolfgang Stammer, Felix Friedrich, David Steinmann, Manuel Brack, Hikaru Shindo, Kristian Kersting
15 Sep 2023

Goodhart's Law Applies to NLP's Explanation Benchmarks
Findings, 2023
Jennifer Hsia, Danish Pruthi, Aarti Singh, Zachary Chase Lipton
28 Aug 2023

Can Authorship Representation Learning Capture Stylistic Features?
Transactions of the Association for Computational Linguistics (TACL), 2023
Andrew Wang, Cristina Aggazzotti, R. Kotula, Rafael Rivera Soto, M. Bishop, Nicholas Andrews
22 Aug 2023

Exploring the Landscape of Natural Language Processing Research
Recent Advances in Natural Language Processing (RANLP), 2023
Tim Schopf, Karim Arabi, Florian Matthes
20 Jul 2023

Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
Swarnadeep Saha, Peter Hase, Mohit Bansal
15 Jun 2023
CREST: A Joint Framework for Rationalization and Counterfactual Text Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Marcos Vinícius Treviso, Alexis Ross, Nuno M. Guerreiro, André F.T. Martins
26 May 2023

Counterfactuals of Counterfactuals: a back-translation-inspired approach to analyse counterfactual editors
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Giorgos Filandrianos, Edmund Dervakos, Orfeas Menis Mastromichalakis, Chrysoula Zerva, Giorgos Stamou
26 May 2023

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
25 May 2023

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
03 May 2023

Answering Questions by Meta-Reasoning over Multiple Chains of Thought
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant
25 Apr 2023
Computational modeling of semantic change
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Nina Tahmasebi, Haim Dubossarsky
13 Apr 2023

Training Language Models with Language Feedback at Scale
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Dong Wang, Ethan Perez
28 Mar 2023

Improving Code Generation by Training with Natural Language Feedback
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez
28 Mar 2023

Quantifying Context Mixing in Transformers
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Hosein Mohebbi, Willem H. Zuidema, Grzegorz Chrupała, Afra Alishahi
30 Jan 2023

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
23 Nov 2022
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
International Conference on Learning Representations (ICLR), 2022
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
03 Nov 2022

Does Self-Rationalization Improve Robustness to Spurious Correlations?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Alexis Ross, Matthew E. Peters, Ana Marasović
24 Oct 2022

Large Language Models Can Self-Improve
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
20 Oct 2022

Challenges in Explanation Quality Evaluation
Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu
13 Oct 2022

Assessing Out-of-Domain Language Model Performance from Few Examples
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett
13 Oct 2022
REV: Information-Theoretic Evaluation of Free-Text Rationales
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta
10 Oct 2022

Towards Faithful Model Explanation in NLP: A Survey
Computational Linguistics (CL), 2022
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
22 Sep 2022

Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
07 Sep 2022
Page 1 of 2