ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations (arXiv:2305.03117)

4 May 2023
Bingsheng Yao, Prithviraj Sen, Lucian Popa, James A. Hendler, Dakuo Wang
Topics: XAI, ELM, FAtt

Papers citing "Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations"

12 / 12 papers shown
Corrections Meet Explanations: A Unified Framework for Explainable Grammatical Error Correction
Jingheng Ye, Shang Qin, Yinghui Li, Hai-Tao Zheng, Shen Wang, Qingsong Wen
24 Feb 2025

Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing
Daking Rai, Rydia R. Weiland, Kayla Margaret Gabriella Herrera, Tyler H. Shaw, Ziyu Yao
05 Oct 2024

ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?
Fan Huang, Haewoon Kwak, Kunwoo Park, Jisun An
Topics: ALM, ELM, AI4MH
26 Mar 2024

More Samples or More Prompts? Exploring Effective In-Context Sampling for LLM Few-Shot Prompt Engineering
Bingsheng Yao, Guiming Hardy Chen, Ruishi Zou, Yuxuan Lu, Jiachen Li, Shao Zhang, Yisi Sang, Sijia Liu, James A. Hendler, Dakuo Wang
16 Nov 2023

Large Language Models are In-context Teachers for Knowledge Reasoning
Jiachen Zhao, Zonghai Yao, Zhichao Yang, Hong-ye Yu
Topics: ReLM, LRM
12 Nov 2023

Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture
Bingsheng Yao, Ishan Jindal, Lucian Popa, Yannis Katsis, Sayan Ghosh, ..., Yuxuan Lu, Shashank Srivastava, Yunyao Li, James A. Hendler, Dakuo Wang
22 May 2023

Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
Nitish Joshi, X. Pan, Hengxing He
Topics: CML
25 Oct 2022

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
24 Oct 2020

Human-AI Collaboration in Data Science: Exploring Data Scientists' Perceptions of Automated AI
Dakuo Wang, Justin D. Weisz, Michael J. Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Y. Tausczik, Horst Samulowitz, Alexander G. Gray
05 Sep 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
Topics: LRM
04 Dec 2018

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
20 Apr 2018

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
28 Feb 2017