Exploring Distantly-Labeled Rationales in Neural Network Models (arXiv:2106.01809) — Cited By

Annual Meeting of the Association for Computational Linguistics (ACL), 2021
3 June 2021
Quzhe Huang
Shengqi Zhu
Yansong Feng
Dongyan Zhao
Links: arXiv (abs) · PDF · HTML · GitHub

Papers citing "Exploring Distantly-Labeled Rationales in Neural Network Models"

9 / 9 papers shown
Evaluating Human Alignment and Model Faithfulness of LLM Rationale
Mohsen Fayyaz
Fan Yin
Jiao Sun
Nanyun Peng
28 Jun 2024
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck
Marcos M. Raimundo
Jorge Poco
03 Apr 2024
Using Interpretation Methods for Model Enhancement
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Zhuo Chen
Chengyue Jiang
Kewei Tu
02 Apr 2024
Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts
Chenghao Yang
Tuhin Chakrabarty
K. Hochstatter
M. Slavin
N. El-Bassel
Smaranda Muresan
15 Nov 2023
REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Conference on Computational Natural Language Learning (CoNLL), 2023
Mohammad Reza Ghasemi Madani
Pasquale Minervini
22 Oct 2023
XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Dong-Ho Lee
Akshen Kadakia
Brihi Joshi
Aaron Chan
Ziyi Liu
...
Takashi Shibuya
Ryosuke Mitani
Toshiyuki Sekiya
Jay Pujara
Xiang Ren
30 Oct 2022
Investigating the Benefits of Free-Form Rationales
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Jiao Sun
Swabha Swayamdipta
Jonathan May
Xuezhe Ma
25 May 2022
ER-Test: Evaluating Explanation Regularization Methods for Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Brihi Joshi
Aaron Chan
Ziyi Liu
Shaoliang Nie
Maziar Sanjabi
Hamed Firooz
Xiang Ren
25 May 2022
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
16 Feb 2016