Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations

International Conference on Machine Learning (ICML), 2021
25 June 2021
Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley

Papers citing "Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations"

22 / 22 papers shown
Corrections Meet Explanations: A Unified Framework for Explainable Grammatical Error Correction
Jingheng Ye, Shang Qin, Hai-Tao Zheng, Shen Wang, Qingsong Wen
214 / 1 / 0
24 Feb 2025
Beyond Metrics: Evaluating LLMs' Effectiveness in Culturally Nuanced, Low-Resource Real-World Scenarios
Millicent Ochieng, Varun Gumma, Sunayana Sitaram, Jindong Wang, Vishrav Chaudhary, K. Ronen, Kalika Bali, Jacki O'Neill
248 / 7 / 0
01 Jun 2024
How Interpretable are Reasoning Explanations from Prompting Large Language Models?
Yeo Wei Jie, Frank Xing, Rick Mong, Xiaoshi Zhong
ReLM, LRM
276 / 34 / 0
19 Feb 2024
FaithLM: Towards Faithful Explanations for Large Language Models
Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Shaochen Zhong, Fan Yang, Mengnan Du, Xuanting Cai, Helen Zhou, Xia Hu
LRM
212 / 4 / 0
07 Feb 2024
Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation
Xijia Zhang, Yue (Sophie) Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
LLMAG, LM&Ro
153 / 2 / 0
29 Nov 2023
Using Natural Language Explanations to Improve Robustness of In-context Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp
AAML
183 / 1 / 0
13 Nov 2023
To Tell The Truth: Language of Deception and Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Sanchaita Hazra, Bodhisattwa Prasad Majumder
HILM
155 / 7 / 0
13 Nov 2023
Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks
Aditi Mishra, Sajjadur Rahman, H. Kim, Kushan Mitra, Estevam R. Hruschka
271 / 10 / 0
09 Nov 2023
Learning to Follow Object-Centric Image Editing Instructions Faithfully
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Tuhin Chakrabarty, Kanishk Singh, Arkadiy Saakyan, Smaranda Muresan
DiffM
153 / 10 / 0
29 Oct 2023
Explaining Agent Behavior with Large Language Models
Xijia Zhang, Yue (Sophie) Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
LM&Ro, LLMAG
169 / 7 / 0
19 Sep 2023
ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Arkadiy Saakyan, Smaranda Muresan
208 / 4 / 0
15 Sep 2023
Learning by Self-Explaining
Wolfgang Stammer, Felix Friedrich, David Steinmann, Manuel Brack, Hikaru Shindo, Kristian Kersting
351 / 15 / 0
15 Sep 2023
DeViL: Decoding Vision features into Language
Meghal Dani, Isabel Rio-Torto, Stephan Alaniz, Zeynep Akata
VLM
143 / 11 / 0
04 Sep 2023
Faithfulness Tests for Natural Language Explanations
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, J. Simonsen, Isabelle Augenstein
FAtt
297 / 83 / 0
29 May 2023
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Jesus Solano, Oana-Maria Camburu, Pasquale Minervini
164 / 4 / 0
22 May 2023
Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
European Conference on Artificial Intelligence (ECAI), 2023
Bálint Gyevnár, Nick Ferguson, Burkhard Schafer
211 / 27 / 0
21 Feb 2023
KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren
ReLM, LRM
159 / 1 / 0
19 Dec 2022
Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations
Björn Plüster, Jakob Ambsdorf, Lukas Braach, Jae Hee Lee, S. Wermter
178 / 6 / 0
08 Dec 2022
Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
FAtt
159 / 22 / 0
13 Oct 2022
Explaining Chest X-ray Pathologies in Natural Language
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022
Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, B. Papież, Thomas Lukasiewicz
MedIm
116 / 32 / 0
09 Jul 2022
Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
190 / 20 / 0
12 Dec 2021
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.8K
19,183
0
16 Feb 2016