The Irrationality of Neural Rationale Models
arXiv:2110.07550 · 14 October 2021
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
Links: arXiv (abs) · PDF · HTML · GitHub (3★)

Papers citing "The Irrationality of Neural Rationale Models"

16 of 16 citing papers are shown. Each entry lists the title, the venue and year where available, the authors, community tags where assigned, view and citation counts, and the date the paper appeared.

Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets
Wen Liu, Zhongyu Niu, Lang Gao, Zhiying Deng, Jun Wang, Haobo Wang, Ruixuan Li
1.2K views · 4 citations · 04 May 2025

F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
International Conference on Learning Representations (ICLR), 2024
Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo
Tags: AAML · 289 views · 12 citations · 03 Oct 2024

Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions
Harrie Oosterhuis, Lijun Lyu, Avishek Anand
Tags: FAtt · 285 views · 3 citations · 16 Jul 2024

TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan O. Arik, Yan Liu, Tomas Pfister
265 views · 14 citations · 03 Dec 2023

Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin
Tags: LRM, MILM, ELM · 288 views · 104 citations · 17 Oct 2023

D-Separation for Causal Self-Explanation
Neural Information Processing Systems (NeurIPS), 2023
Wei Liu, Jun Wang, Yining Qi, Rui Li, Zhiying Deng, YuanKai Zhang, Yang Qiu
267 views · 25 citations · 23 Sep 2023

Unsupervised Selective Rationalization with Noise Injection
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Adam Storek, Melanie Subbiah, Kathleen McKeown
159 views · 5 citations · 27 May 2023

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint
Knowledge Discovery and Data Mining (KDD), 2023
Wei Liu, Jun Wang, Yining Qi, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
188 views · 17 citations · 23 May 2023

Towards Faithful Model Explanation in NLP: A Survey
Computational Linguistics (CL), 2022
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
Tags: XAI · 386 views · 160 citations · 22 Sep 2022

The Solvability of Interpretability Evaluation Metrics
Findings, 2022
Yilun Zhou, J. Shah
171 views · 9 citations · 18 May 2022

SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022
Zijian Zhang, Vinay Setty, Avishek Anand
125 views · 7 citations · 03 May 2022

ExSum: From Local Explanations to Model Understanding
North American Chapter of the Association for Computational Linguistics (NAACL), 2022
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
Tags: FAtt, LRM · 191 views · 26 citations · 30 Apr 2022

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
International Conference on Human Factors in Computing Systems (CHI), 2022
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
260 views · 153 citations · 25 Apr 2022

Understanding Interlocking Dynamics of Cooperative Rationalization
Mo Yu, Yang Zhang, Shiyu Chang, Tommi Jaakkola
259 views · 47 citations · 26 Oct 2021

Do Feature Attribution Methods Correctly Attribute Features?
AAAI Conference on Artificial Intelligence (AAAI), 2021
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
Tags: FAtt, XAI · 348 views · 151 citations · 27 Apr 2021

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.9K
19,183
0
16 Feb 2016