Evaluating Human Alignment and Model Faithfulness of LLM Rationale
arXiv 2407.00219 · 28 June 2024
Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng
Papers citing "Evaluating Human Alignment and Model Faithfulness of LLM Rationale" (5 papers shown):

1. Multimodal LLM Augmented Reasoning for Interpretable Visual Perception Analysis. Shravan Chaudhari, Trilokya Akula, Yoon Kim, Tom Blake. 16 Apr 2025.
2. A Multimodal Symphony: Integrating Taste and Sound through Generative AI. Matteo Spanio, Massimiliano Zampini, Antonio Rodà, Franco Pierucci. 04 Mar 2025.
3. Unearthing Skill-Level Insights for Understanding Trade-Offs of Foundation Models. Mazda Moayeri, Vidhisha Balachandran, Varun Chandrasekaran, Safoora Yousefi, Thomas Fel, S. Feizi, Besmira Nushi, Neel Joshi, Vibhav Vineet. 17 Oct 2024.
4. Incorporating Residual and Normalization Layers into Analysis of Masked Language Models. Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. 15 Sep 2021.
5. e-SNLI: Natural Language Inference with Natural Language Explanations. Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom. 04 Dec 2018.