B-cos Networks: Alignment is All We Need for Interpretability

20 May 2022
Moritz D Boehle, Mario Fritz, Bernt Schiele
ArXiv · PDF · HTML
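For context on what the citing papers below build on: the B-cos paper replaces the linear transform w·x used throughout standard networks with a B-cos transform, |cos(x, w)|^(B−1) · (ŵ·x) with ŵ unit-norm, so a unit passes strong signal only when its input aligns with its weights; for B = 1 it reduces to an ordinary linear layer. The sketch below is an illustrative PyTorch rendering of that transform, not the authors' reference implementation; the class name, the default B = 2, and the stabilizing epsilon are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BcosLinear(nn.Module):
    """Illustrative B-cos unit (class name and defaults are assumptions).

    Scales the linear response by the input-weight alignment,
    |cos(x, w)| ** (B - 1); for B = 1 this is an ordinary linear layer.
    """

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)             # unit-norm rows of W
        lin = F.linear(x, w_hat)                            # w_hat . x
        cos = lin / (x.norm(dim=-1, keepdim=True) + 1e-12)  # cos(x, w_hat)
        return cos.abs().pow(self.b - 1) * lin              # B-cos transform


x = torch.randn(4, 16)
y = BcosLinear(16, 8)(x)  # shape (4, 8)
```

Because each output is the input times an alignment-dependent effective weight, the contribution of every input dimension can be read off the model itself rather than estimated post hoc, which is the property the citing work on inherently interpretable models picks up on.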

Papers citing "B-cos Networks: Alignment is All We Need for Interpretability"

Showing 12 of 62 citing papers (bracketed codes are the site's topic tags).
 1. False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers
    Arthur Drichel, Ulrike Meyer (10 Jul 2023)

 2. B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
    Moritz D Boehle, Navdeeppal Singh, Mario Fritz, Bernt Schiele (19 Jun 2023)

 3. Towards credible visual model interpretation with path attribution
    Naveed Akhtar, Muhammad A. A. K. Jalwana (23 May 2023) [FAtt]

 4. Take 5: Interpretable Image Classification with a Handful of Features
    Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn (23 Mar 2023) [FAtt]

 5. Adversarial Counterfactual Visual Explanations
    Guillaume Jeanneret, Loïc Simon, F. Jurie (17 Mar 2023) [DiffM]

 6. ICICLE: Interpretable Class Incremental Continual Learning
    Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski (14 Mar 2023) [CLL]

 7. Learning Support and Trivial Prototypes for Interpretable Image Classification
    Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, G. Carneiro (08 Jan 2023)

 8. Implicit Mixture of Interpretable Experts for Global and Local Interpretability
    N. Elazar, Kerry Taylor (01 Dec 2022) [MoE]

 9. "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
    Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, A. Monroy-Hernández (02 Oct 2022)

10. ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
    Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Jie Song, Ming-hui Wu, Mingli Song (22 Aug 2022) [ViT]

11. A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts
    Ying-Shuai Wang, Yunxia Liu, Licong Dong, Xuzhou Wu, Huabin Zhang, Qiongyu Ye, Desheng Sun, Xiaobo Zhou, Kehong Yuan (19 Jan 2022)

12. HIVE: Evaluating the Human Interpretability of Visual Explanations
    Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky (06 Dec 2021)