ResearchTrend.AI
ProtoS-ViT: Visual foundation models for sparse self-explainable classifications

14 June 2024
Hugues Turbé
Mina Bjelogrlic
G. Mengaldo
Christian Lovis
Tags: ViT
Links: ArXiv (abs) · PDF · HTML · GitHub (1★)

Papers citing "ProtoS-ViT: Visual foundation models for sparse self-explainable classifications"

8 / 8 papers shown
This EEG Looks Like These EEGs: Interpretable Interictal Epileptiform Discharge Detection With ProtoEEG-kNN
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2025
Dennis Tang, Jon Donnelly, A. Barnett, Lesia Semenova, J. Jing, ..., Ioannis Karakis, Olga Selioutski, Kehan Zhao, M. Brandon Westover, Cynthia Rudin
73 · 0 · 0
21 Oct 2025
Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts
Zhong Ji, Rongshuai Wei, Jingren Liu, Yanwei Pang, Jungong Han
236 · 0 · 0
05 Jun 2025
Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning
Hubert Baniecki, P. Biecek
Tags: AAML
388 · 1 · 0
11 Mar 2025
XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change
Jiawen Wei, Aniruddha Bora, Vivek Oommen, Chenyu Dong, Juntao Yang, Jeff Adie, Chen Chen, Simon See, George Karniadakis, G. Mengaldo
Tags: AI4Cl
321 · 0 · 0
11 Mar 2025
BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs
Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, J. Bagga, ..., Carlo Bifulco, M. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon
Tags: LM&MA, MedIm
729 · 413 · 0
10 Jan 2025
Revisiting the robustness of post-hoc interpretability methods
Jiawen Wei, Hugues Turbé, G. Mengaldo
Tags: AAML
336 · 7 · 0
29 Jul 2024
Explainable Natural Language Processing for Corporate Sustainability Analysis
Keane Ong, Rui Mao, Frank Xing, Ricardo Shirota Filho, Erik Cambria, Johan Sulaeman, G. Mengaldo
261 · 16 · 0
03 Jul 2024
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
Tags: FAtt
2.9K · 28,650 · 0
22 May 2017