
Now You See Me (CME): Concept-based Model Extraction
arXiv:2010.13233 · 25 October 2020
Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lió, Adrian Weller
Papers citing "Now You See Me (CME): Concept-based Model Extraction" (15 papers shown)
  1. Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts. M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik. 24 Apr 2025. [KELM]
  2. Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations. Xin-Chao Xu, Yi Qin, Lu Mi, Hao Wang, X. Li. 03 Jan 2025.
  3. Understanding Multimodal Deep Neural Networks: A Concept Selection View. Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang. 13 Apr 2024.
  4. Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement. Avani Gupta, Saurabh Saini, P. J. Narayanan. 26 Nov 2023.
  5. LR-XFL: Logical Reasoning-based Explainable Federated Learning. Yanci Zhang, Hanyou Yu. 24 Aug 2023. [LRM]
  6. Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis. Cristiano Patrício, João C. Neves, Luís F. Teixeira. 10 Apr 2023. [MedIm, FAtt]
  7. Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models. Poulami Sinhamahapatra, Lena Heidemann, Maureen Monnet, Karsten Roscher. 22 Nov 2022.
  8. The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability? Matija Franklin. 05 Oct 2022. [TDI]
  9. When are Post-hoc Conceptual Explanations Identifiable? Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci. 28 Jun 2022.
  10. Concept Embedding Analysis: A Review. Gesina Schwalbe. 25 Mar 2022.
  11. A Framework for Learning Ante-hoc Explainable Models via Concepts. Anirban Sarkar, Deepak Vijaykeerthy, Anindya Sarkar, V. Balasubramanian. 25 Aug 2021. [LRM, BDL]
  12. Logic Explained Networks. Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, S. Melacci. 11 Aug 2021.
  13. Entropy-based Logic Explanations of Neural Networks. Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, S. Melacci. 12 Jun 2021. [FAtt, XAI]
  14. Failing Conceptually: Concept-Based Explanations of Dataset Shift. Maleakhi A. Wijaya, Dmitry Kazhdan, B. Dimanov, M. Jamnik. 18 Apr 2021.
  15. On Completeness-aware Concept-Based Explanations in Deep Neural Networks. Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar. 17 Oct 2019. [FAtt]