Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong
19 October 2023
Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé, Jordan L. Boyd-Graber
arXiv: 2310.12558

Papers citing "Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong" (7 papers)

Improving LLM Personas via Rationalization with Psychological Scaffolds
Brihi Joshi, Xiang Ren, Swabha Swayamdipta, Rik Koncel-Kedziorski, Tim Paek
25 Apr 2025

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos
05 Oct 2024

STORYSUMM: Evaluating Faithfulness in Story Summarization
Melanie Subbiah, Faisal Ladhak, Akankshya Mishra, Griffin Adams, Lydia B. Chilton, Kathleen McKeown
09 Jul 2024

More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
Wichayaporn Wongkamjan, Feng Gu, Yanze Wang, Ulf Hermjakob, Jonathan May, Brandon M. Stewart, Jonathan K. Kummerfeld, Denis Peskoff, Jordan L. Boyd-Graber
07 Jun 2024

On the Risk of Misinformation Pollution with Large Language Models
Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, W. Wang
23 May 2023

Re-Examining Calibration: The Case of Question Answering
Chenglei Si, Chen Zhao, Sewon Min, Jordan L. Boyd-Graber
25 May 2022

Human Interpretation of Saliency-based Explanation Over Text
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
27 Jan 2022