Explanation-based Finetuning Makes Models More Robust to Spurious Cues
8 May 2023
Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
AAML, LRM
arXiv:2305.04990

Papers citing "Explanation-based Finetuning Makes Models More Robust to Spurious Cues" (7 papers):

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
20 Dec 2023

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022

Toward Annotator Group Bias in Crowdsourcing
Haochen Liu, J. Thekinen, Sinem Mollaoglu, Da Tang, Ji Yang, Youlong Cheng, Hui Liu, Jiliang Tang
8 Oct 2021

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
24 Oct 2020

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
21 Aug 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
LRM
4 Dec 2018

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
2 May 2018