
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf
arXiv:2204.11642 · FAtt · 25 April 2022

Papers citing "Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset"

10 / 10 papers shown
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
14 Mar 2024

Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez
FAtt · 15 Feb 2024

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
FAtt · 11 Jun 2023

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
FAtt · 11 Jun 2023

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
ELM · 20 Oct 2022

Learning Unsupervised Hierarchies of Audio Concepts
Darius Afchar, Romain Hennequin, Vincent Guigue
21 Jul 2022

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
06 Dec 2021

Sanity Simulations for Saliency Methods
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
FAtt · 13 May 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI · FaML · 28 Feb 2017