ResearchTrend.AI
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations

2 March 2021
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
    FAtt

Papers citing "Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations"

2 citing papers shown
A New Approach to Backtracking Counterfactual Explanations: A Causal Framework for Efficient Model Interpretability
Pouria Fatemi, Ehsan Sharifian, Mohammad Hossein Yassaee
05 May 2025
On the Challenges and Opportunities in Generative AI
Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Daubener, ..., F. Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
28 Feb 2024