B-cos Networks: Alignment is All We Need for Interpretability

Moritz Böhle, Mario Fritz, Bernt Schiele · 20 May 2022 · arXiv:2205.10268

Papers citing "B-cos Networks: Alignment is All We Need for Interpretability" (50 of 62 shown)

From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection
Moritz Vandenhirtz, Julia E. Vogt · 19 May 2025

Towards Spatially-Aware and Optimally Faithful Concept-Based Explanations
Shubham Kumar, Dwip Dalal, Narendra Ahuja · 15 Apr 2025

VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow
Ada Gorgun, Bernt Schiele, Jonas Fischer · 28 Mar 2025

Beyond Accuracy: What Matters in Designing Well-Behaved Models?
Robin Hesse, Doğukan Bağcı, Bernt Schiele, Simone Schaub-Meyer, Stefan Roth · VLM · 21 Mar 2025

Not Only Text: Exploring Compositionality of Visual Representations in Vision-Language Models
Davide Berasi, Matteo Farina, Massimiliano Mancini, Elisa Ricci, Nicola Strisciuglio · CoGe · 21 Mar 2025

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer · 17 Mar 2025

Now you see me! A framework for obtaining class-relevant saliency maps
Nils Philipp Walter, Jilles Vreeken, Jonas Fischer · FAtt · 10 Mar 2025

Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time
J. Donnelly, Zhicheng Guo, A. Barnett, Hayden McTavish, Chaofan Chen, Cynthia Rudin · 03 Mar 2025

Invariance Pair-Guided Learning: Enhancing Robustness in Neural Networks
Martin Surner, Abdelmajid Khelil, Ludwig Bothmann · OOD · 26 Feb 2025

Disentangling Visual Transformers: Patch-level Interpretability for Image Classification
Guillaume Jeanneret, Loïc Simon, F. Jurie · ViT · 24 Feb 2025

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya, Sukrut Rao, Moritz Böhle, Bernt Schiele · 28 Jan 2025

COMIX: Compositional Explanations using Prototypes
S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz · 10 Jan 2025

Label-free Concept Based Multiple Instance Learning for Gigapixel Histopathology
Susu Sun, Leslie Tessier, Frédérique Meeuwsen, Clément Grisi, Dominique van Midden, G. Litjens, Christian F. Baumgartner · 06 Jan 2025

LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model
Xi Wang, H. Li, Heng Fang, Yichen Peng, H. Xie, Xi Yang, Chuntao Li · DiffM · 16 Dec 2024

OMENN: One Matrix to Explain Neural Networks
Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk · FAtt, AAML · 03 Dec 2024

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie · FAtt · 10 Oct 2024

InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski, Dawid Rymarczyk, Jacek Tabor · 16 Sep 2024

Revisiting FunnyBirds evaluation framework for prototypical parts networks
Szymon Opłatek, Dawid Rymarczyk, Bartosz Zieliński · 21 Aug 2024

Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector
Xianren Zhang, Dongwon Lee, Suhang Wang · VLM, FAtt · 27 Jul 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · FAtt · 16 Jul 2024

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc · SSL · 01 Jul 2024

Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
Maor Dikter, Tsachi Blau, Chaim Baskin · 13 Jun 2024

How Video Meetings Change Your Expression
Sumit Sarin, Utkarsh Mall, Purva Tendulkar, Carl Vondrick · CVBM · 03 Jun 2024

Can Implicit Bias Imply Adversarial Robustness?
Hancheng Min, René Vidal · 24 May 2024

LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision
Mateusz Pach, Dawid Rymarczyk, K. Lewandowska, Jacek Tabor, Bartosz Zieliński · 23 May 2024

IMAFD: An Interpretable Multi-stage Approach to Flood Detection from time series Multispectral Data
Ziyang Zhang, Plamen Angelov, D. Kangin, Nicolas Longépé · AI4CE · 13 May 2024

Enhanced Online Test-time Adaptation with Feature-Weight Cosine Alignment
Weiqin Chuah, Ruwan Tennakoon, A. Bab-Hadiashar · 12 May 2024

Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
M. Kowal, Richard P. Wildes, Konstantinos G. Derpanis · GNN · 02 Apr 2024

Towards Explaining Hypercomplex Neural Networks
Eleonora Lopez, Eleonora Grassucci, D. Capriotti, Danilo Comminiello · 26 Mar 2024

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song · 14 Mar 2024

Unsupervised Domain Adaptation within Deep Foundation Latent Spaces
D. Kangin, Plamen Angelov · 22 Feb 2024

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami-Araghi, Moritz Böhle, Sukrut Rao, Bernt Schiele · FAtt · 05 Feb 2024

InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Kaser · 05 Feb 2024

B-Cos Aligned Transformers Learn Human-Interpretable Features
Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Melanie Boxberg, Peter Lienemann, C. Matek, S. J. Wagner, Fabian J. Theis, Eldad Klaiman, Tingying Peng · MedIm, ViT · 16 Jan 2024

Q-SENN: Quantized Self-Explaining Neural Networks
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn · FAtt, AAML, MILM · 21 Dec 2023

Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition
Chong Wang, Yuanhong Chen, Fengbei Liu, Yuyuan Liu, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro · 30 Nov 2023

Towards interpretable-by-design deep learning algorithms
Plamen Angelov, D. Kangin, Ziyang Zhang · 19 Nov 2023

Greedy PIG: Adaptive Integrated Gradients
Kyriakos Axiotis, Sami Abu-El-Haija, Lin Chen, Matthew Fahrbach, Gang Fu · FAtt · 10 Nov 2023

Frozen Transformers in Language Models Are Effective Visual Encoder Layers
Ziqi Pang, Ziyang Xie, Yunze Man, Yu-xiong Wang · 19 Oct 2023

Latent Diffusion Counterfactual Explanations
Karim Farid, Simon Schrodi, Max Argus, Thomas Brox · DiffM · 10 Oct 2023

Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly Detection
S. Sivaprasad, Mario Fritz · AAML · 01 Oct 2023

From Classification to Segmentation with Explainable AI: A Study on Crack Detection and Growth Monitoring
Florent Forest, Hugo Porta, D. Tuia, Olga Fink · 20 Sep 2023

Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
Guillaume Jeanneret, Loïc Simon, Frédéric Jurie · DiffM · 14 Sep 2023

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu · ViT · 14 Sep 2023

PDiscoNet: Semantically consistent part discovery for fine-grained recognition
Robert van der Klis, Stephan Alaniz, Massimiliano Mancini, C. Dantas, Dino Ienco, Zeynep Akata, Diego Marcos · 06 Sep 2023

DeViL: Decoding Vision features into Language
Meghal Dani, Isabel Rio-Torto, Stephan Alaniz, Zeynep Akata · VLM · 04 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · AAML · 11 Aug 2023

Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Usha Bhalla, Suraj Srinivas, Himabindu Lakkaraju · FAtt, CML · 27 Jul 2023

What's meant by explainable model: A Scoping Review
Mallika Mainali, Rosina O. Weber · XAI · 18 Jul 2023

Single Domain Generalization via Normalised Cross-correlation Based Convolutions
Weiqin Chuah, Ruwan Tennakoon, R. Hoseinnezhad, David Suter, A. Bab-Hadiashar · OOD · 12 Jul 2023