VICE: Variational Interpretable Concept Embeddings (arXiv:2205.00756)
2 May 2022
Lukas Muttenthaler, C. Zheng, Patrick McClure, Robert A. Vandermeulen, M. Hebart, Francisco Câmara Pereira

Papers citing "VICE: Variational Interpretable Concept Embeddings"

10 papers shown
Dimensions underlying the representational alignment of deep neural networks with humans
F. Mahner, Lukas Muttenthaler, Umut Güçlü, M. Hebart (28 Jan 2025)
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
Teresa Dorszewski, Lenka Tětková, Lorenz Linhardt, Lars Kai Hansen (10 Sep 2024)
CoCoG: Controllable Visual Stimuli Generation based on Human Concept Representations
Chen Wei, Jiachen Zou, Dietmar Heinke, Quanying Liu (25 Apr 2024)
An Analysis of Human Alignment of Latent Diffusion Models
Lorenz Linhardt, Marco Morik, Sidney Bender, Naima Elosegui Borras (13 Mar 2024)
Getting aligned on representational alignment
Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, ..., Thomas Unterthiner, Andrew Kyle Lampinen, Klaus-Robert Müller, M. Toneva, Thomas L. Griffiths (18 Oct 2023)
Set Learning for Accurate and Calibrated Models
Lukas Muttenthaler, Robert A. Vandermeulen, Qiuyi Zhang, Thomas Unterthiner, Klaus-Robert Müller (05 Jul 2023)
Improving neural network representations using human similarity judgments
Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine L. Hermann, Andrew Kyle Lampinen, Simon Kornblith (07 Jun 2023)
Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Müller, G. Montavon (12 Apr 2023)
When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci (28 Jun 2022)
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani (06 Jun 2015)