Can LLMs Explain Themselves Counterfactually?

25 February 2025
Zahra Dehghanighobadi
Asja Fischer
Muhammad Bilal Zafar
    LRM
Abstract

Explanations are an important tool for gaining insight into the behavior of ML models, calibrating user trust, and ensuring regulatory compliance. The past few years have seen a flurry of post-hoc methods for generating model explanations, many of which involve computing model gradients or solving specially designed optimization problems. However, owing to the remarkable reasoning abilities of Large Language Models (LLMs), self-explanation, that is, prompting the model to explain its own outputs, has recently emerged as a new paradigm. In this work, we study a specific type of self-explanation: self-generated counterfactual explanations (SCEs). We design tests for measuring the efficacy of LLMs in generating SCEs. Analysis over various LLM families, model sizes, temperature settings, and datasets reveals that LLMs sometimes struggle to generate SCEs. Even when they do, their predictions often disagree with their own counterfactual reasoning.
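
As a rough illustration of the kind of consistency test the abstract describes, the sketch below prompts a model for a prediction, asks it to produce a minimal counterfactual edit intended to flip that prediction, and then checks whether the model's prediction on its own counterfactual matches the intended target. The prompts, function names, and the query_model callable are illustrative assumptions, not the authors' actual protocol or code.

# Minimal sketch of an SCE consistency check (assumptions, not the paper's implementation).
from typing import Callable

def predict_label(query_model: Callable[[str], str], text: str, labels: list[str]) -> str:
    """Ask the model to classify `text` into one of `labels`."""
    prompt = (
        f"Classify the following text as one of {labels}.\n"
        f"Text: {text}\n"
        "Answer with the label only."
    )
    return query_model(prompt).strip()

def generate_sce(query_model: Callable[[str], str], text: str,
                 predicted: str, target: str) -> str:
    """Ask the model to minimally edit `text` so that it would classify it as `target`."""
    prompt = (
        f"You classified the text below as '{predicted}'.\n"
        f"Rewrite it with a minimal edit so that you would classify it as '{target}'.\n"
        f"Text: {text}\n"
        "Return only the rewritten text."
    )
    return query_model(prompt).strip()

def sce_is_consistent(query_model: Callable[[str], str], text: str,
                      labels: list[str], target: str) -> bool:
    """True if the model's prediction on its own counterfactual equals the intended target."""
    original_pred = predict_label(query_model, text, labels)
    counterfactual = generate_sce(query_model, text, original_pred, target)
    new_pred = predict_label(query_model, counterfactual, labels)
    return new_pred == target

Here query_model stands in for any call to an LLM (e.g., a chat completion wrapper); the test can be repeated across model families, sizes, temperatures, and datasets, as the abstract indicates.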

@article{dehghanighobadi2025_2502.18156,
  title={Can LLMs Explain Themselves Counterfactually?},
  author={Zahra Dehghanighobadi and Asja Fischer and Muhammad Bilal Zafar},
  journal={arXiv preprint arXiv:2502.18156},
  year={2025}
}