This Probably Looks Exactly Like That: An Invertible Prototypical Network

arXiv:2407.12200 · 16 July 2024
Zachariah Carmichael, Timothy Redgrave, Daniel Gonzalez Cedre, Walter J. Scheirer
Tags: BDL

Papers citing "This Probably Looks Exactly Like That: An Invertible Prototypical Network" (8 of 8 shown)

This looks like what? Challenges and Future Research Directions for Part-Prototype Models
Khawla Elhadri, Tomasz Michalski, Adam Wróbel, Jörg Schlötterer, Bartosz Zieliński, C. Seifert
13 Feb 2025 · 0 citations

Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann, Sebastian Pölsterl, Christian Wachinger
Tags: AAML, FAtt
15 Dec 2023 · 6 citations

Human-Centered Evaluation of XAI Methods
Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse
11 Oct 2023 · 4 citations

DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification
Yijun Yang, H. Fu, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Lei Zhu
Tags: MedIm
19 Mar 2023 · 47 citations

Diffusion Models: A Comprehensive Survey of Methods and Applications
Ling Yang, Zhilong Zhang, Yingxia Shao, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, Ming-Hsuan Yang
Tags: DiffM, MedIm
02 Sep 2022 · 1,277 citations

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021 · 112 citations

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
Tags: ViT, TPM
11 Nov 2021 · 7,337 citations

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
Tags: SyDa, FaML
23 Aug 2019 · 4,143 citations