Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments

AAAI Conference on Artificial Intelligence (AAAI), 2024
28 October 2024
M. Domnich
Julius Valja
Rasmus Moorits Veski
Giacomo Magnifico
Kadi Tulver
Eduard Barbu
Raul Vicente

Papers citing "Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments"

27 citing papers shown

Through a Compressed Lens: Investigating The Impact of Quantization on Factual Knowledge Recall
Qianli Wang
Mingyang Wang
Nils Feldhus
Simon Ostermann
Yuan Cao
Hinrich Schütze
Sebastian Möller
Vera Schmitt
20 May 2025

Truth or Twist? Optimal Model Selection for Reliable Label Flipping Evaluation in LLM-based Counterfactuals
Qianli Wang
Van Bach Nguyen
Nils Feldhus
Luis Felipe Villa-Arenas
Christin Seifert
Sebastian Möller
Vera Schmitt
20 May 2025

Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
M. Domnich
Rasmus Moorits Veski
Julius Valja
Kadi Tulver
Raul Vicente
07 Apr 2025

FitCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Qianli Wang
Nils Feldhus
Simon Ostermann
Luis Felipe Villa-Arenas
Sebastian Möller
Vera Schmitt
01 Jan 2025

Lusifer: LLM-based User SImulated Feedback Environment for online Recommender systems
Danial Ebrat
Luis Rueda
22 May 2024

Aligning LLM Agents by Learning Latent Preference from User Edits
Ge Gao
Alexey Taymanov
Eduardo Salinas
Paul Mineiro
Dipendra Kumar Misra
23 Apr 2024

Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
M. Domnich
Raul Vicente
19 Apr 2024

CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests
Susanne Dandl
Kristin Blesch
Timo Freiesleben
Gunnar Konig
Jan Kapar
J. Herbinger
Marvin N. Wright
04 Apr 2024

Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Information Fusion (Inf. Fusion), 2023
Luca Longo
Mario Brcic
Federico Cabitza
Jaesik Choi
Roberto Confalonieri
...
Andrés Páez
Wojciech Samek
Johannes Schneider
Timo Speith
Simone Stumpf
30 Oct 2023

Towards LLM-guided Causal Explainability for Black-box Text Classifiers
Amrita Bhattacharjee
Raha Moraffah
Joshua Garland
Huan Liu
23 Sep 2023

A Survey on Large Language Model based Autonomous Agents
Lei Wang
Chengbang Ma
Xueyang Feng
Zeyu Zhang
Hao-ran Yang
...
Xu Chen
Yankai Lin
Wayne Xin Zhao
Zhewei Wei
Ji-Rong Wen
LLMAGAI4CELM&Ro
650
2,043
0
22 Aug 2023

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment
Yang Liu
Yuanshun Yao
Jean-François Ton
Xiaoying Zhang
Ruocheng Guo
Hao Cheng
Yegor Klochkov
Muhammad Faaiz Taufiq
Hanguang Li
10 Aug 2023

For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI
Ulrike Kuhl
André Artelt
Barbara Hammer
13 Jun 2023

QLoRA: Efficient Finetuning of Quantized LLMs
Neural Information Processing Systems (NeurIPS), 2023
Tim Dettmers
Artidoro Pagnoni
Ari Holtzman
Luke Zettlemoyer
23 May 2023

TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Nature Machine Intelligence (Nat. Mach. Intell.), 2022
Dylan Slack
Satyapriya Krishna
Himabindu Lakkaraju
Sameer Singh
08 Jul 2022

Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Greta Warren
Mark T. Keane
R. Byrne
21 Apr 2022

DNSMOS P.835: A Non-Intrusive Perceptual Objective Speech Quality Metric to Evaluate Noise Suppressors
Chandan K. A. Reddy
Vishak Gopal
Ross Cutler
05 Oct 2021

CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models
Arjun Reddy Akula
Keze Wang
Changsong Liu
Sari Saba-Sadiya
Hongjing Lu
S. Todorovic
J. Chai
Song-Chun Zhu
03 Sep 2021

CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations
P. Rasouli
Ingrid Chieh Yu
18 Aug 2021

Understanding Consumer Preferences for Explanations Generated by XAI Algorithms
Yanou Ramon
T. Vermeire
Olivier Toubia
David Martens
Theodoros Evgeniou
06 Jul 2021

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Mark T. Keane
Eoin M. Kenny
Eoin Delaney
Barry Smyth
26 Feb 2021

PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems
Web Search and Data Mining (WSDM), 2019
Azin Ghazimatin
Oana Balalau
Rishiraj Saha Roy
Gerhard Weikum
19 Nov 2019

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Shane T. Mueller
R. Hoffman
W. Clancey
Abigail Emrey
Gary Klein
05 Feb 2019

Metrics for Explainable AI: Challenges and Prospects
R. Hoffman
Shane T. Mueller
Gary Klein
Jordan Litman
11 Dec 2018

A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models
Tomáš Kliegr
Š. Bahník
Johannes Furnkranz
09 Apr 2018

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter
Brent Mittelstadt
Chris Russell
01 Nov 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
22 Jun 2017