SCOUT: Self-aware Discriminant Counterfactual Explanations

Pei Wang, Nuno Vasconcelos
arXiv:2004.07769 · 16 April 2020 · Topic: FAtt

Papers citing "SCOUT: Self-aware Discriminant Counterfactual Explanations" (9 papers shown)
  1. Attention IoU: Examining Biases in CelebA using Attention Maps — Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy (25 Mar 2025)
  2. Treble Counterfactual VLMs: A Causal Approach to Hallucination — Li Li, Jiashu Qu, Yuxiao Zhou, Yuehan Qin, Tiankai Yang, Yue Zhao (08 Mar 2025)
  3. GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers — Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle (23 Nov 2024)
  4. ML-Based Teaching Systems: A Conceptual Framework — Philipp Spitzer, Niklas Kühl, Daniel Heinz, G. Satzger (12 May 2023)
  5. ICICLE: Interpretable Class Incremental Continual Learning — Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski (14 Mar 2023) [CLL]
  6. ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts — Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński (28 Jan 2023) [VLM]
  7. Diffusion Models for Counterfactual Explanations — Guillaume Jeanneret, Loïc Simon, F. Jurie (29 Mar 2022) [DiffM]
  8. HIVE: Evaluating the Human Interpretability of Visual Explanations — Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky (06 Dec 2021)
  9. Fast Real-time Counterfactual Explanations — Yunxia Zhao (11 Jul 2020)