OCTET: Object-aware Counterfactual Explanations

22 November 2022
Authors: Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord
Topics: AAML

Papers citing "OCTET: Object-aware Counterfactual Explanations"

9 / 9 papers shown

Disentangling Visual Transformers: Patch-level Interpretability for Image Classification
Guillaume Jeanneret, Loïc Simon, F. Jurie
Topics: ViT
24 Feb 2025

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
23 Nov 2024

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc
Topics: SSL
01 Jul 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024

COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
Dmytro Shvetsov, Joonas Ariva, M. Domnich, Raul Vicente, Dmytro Fishman
Topics: MedIm
19 Apr 2024

Safety Implications of Explainable Artificial Intelligence in End-to-End Autonomous Driving
Shahin Atakishiyev, Mohammad Salameh, Randy Goebel
18 Mar 2024

Designing an Encoder for StyleGAN Image Manipulation
Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or
04 Feb 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
28 Feb 2017

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
Topics: PINN, 3DV
25 Aug 2016