ResearchTrend.AI

Incorporating Attribution Importance for Improving Faithfulness Metrics

Zhixue Zhao, Nikolaos Aletras
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
arXiv: 2305.10496, 17 May 2023
Papers citing "Incorporating Attribution Importance for Improving Faithfulness Metrics" (12 papers):

  1. Framework for Machine Evaluation of Reasoning Completeness in Large Language Models for Classification Tasks — Avinash Patil. 23 Oct 2025.
  2. BF-Max: An Efficient Bit Flipping Decoder with Predictable Decoding Failure Rate — Alessio Baldelli, Marco Baldi, F. Chiaraluce, Paolo Santini. International Symposium on Information Theory (ISIT), 2025. 11 Jun 2025.
  3. Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods — Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci. Conference on Fairness, Accountability and Transparency (FAccT), 2025. 02 May 2025.
  4. Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations — Yiyou Sun, Y. Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, Basel Alomair. 17 Apr 2025.
  5. Noiser: Bounded Input Perturbations for Attributing Large Language Models — Mohammad Reza Ghasemi Madani, Aryo Pradipta Gema, Gabriele Sarti, Yu Zhao, Pasquale Minervini, Baptiste Caramiaux. 03 Apr 2025.
  6. Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability — Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, Maria Maistro. Annual Meeting of the Association for Computational Linguistics (ACL), 2024. 15 Aug 2024.
  7. ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread — Jake Vasilakes, Zhixue Zhao, Ivan Vykopal, Michal Gregor, Martin Hyben, Carolina Scarton. 30 May 2024.
  8. Latent Concept-based Explanation of NLP Models — Xuemin Yu, Fahim Dalvi, Nadir Durrani, Marzia Nouri, Hassan Sajjad. 18 Apr 2024.
  9. Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models — Zhixue Zhao, Nikolaos Aletras. 19 Mar 2024.
  10. ReAGent: A Model-agnostic Feature Attribution Method for Generative Language Models — Zhixue Zhao, Boxuan Shan. 01 Feb 2024.
  11. Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization — G. Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras. Transactions of the Association for Computational Linguistics (TACL), 2023. 15 Nov 2023.
  12. "Why Should I Trust You?": Explaining the Predictions of Any Classifier — Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. 16 Feb 2016.