Rationalization for Explainable NLP: A Survey

21 January 2023
Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura J. Freeman, Feras A. Batarseh
arXiv:2301.08912 · PDF · HTML

Papers citing "Rationalization for Explainable NLP: A Survey" (11 papers shown)
  1. DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models
     Bowen Wang, Jiuyang Chang, Yiming Qian, Guoxin Chen, Junhao Chen, Zhouqiang Jiang, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara
     LM&MA, ELM, LRM · 04 Aug 2024
  2. Evaluating the Reliability of Self-Explanations in Large Language Models
     Korbinian Randl, John Pavlopoulos, Aron Henriksson, Tony Lindgren
     LRM · 19 Jul 2024
  3. End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models
     Barry Menglong Yao, Aditya Shah, Lichao Sun, Jin-Hee Cho, Lifu Huang
     MLLM, LRM · 25 May 2022
  4. A Survey on AI Assurance
     Feras A. Batarseh, Laura J. Freeman
     15 Nov 2021
  5. Measuring Association Between Labels and Free-Text Rationales
     Sarah Wiegreffe, Ana Marasović, Noah A. Smith
     24 Oct 2020
  6. Invariant Rationalization
     Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
     22 Mar 2020
  7. Text Summarization with Pretrained Encoders
     Yang Liu, Mirella Lapata
     MILM · 22 Aug 2019
  8. e-SNLI: Natural Language Inference with Natural Language Explanations
     Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
     LRM · 04 Dec 2018
  9. A causal framework for explaining the predictions of black-box sequence-to-sequence models
     David Alvarez-Melis, Tommi Jaakkola
     CML · 06 Jul 2017
  10. Towards A Rigorous Science of Interpretable Machine Learning
      Finale Doshi-Velez, Been Kim
      XAI, FaML · 28 Feb 2017
  11. Learning Attitudes and Attributes from Multi-Aspect Reviews
      Julian McAuley, J. Leskovec, Dan Jurafsky
      15 Oct 2012