
Dimensions underlying the representational alignment of deep neural networks with humans

28 January 2025. F. Mahner, Lukas Muttenthaler, Umut Güçlü, M. Hebart.

Papers citing "Dimensions underlying the representational alignment of deep neural networks with humans"

4 / 4 papers shown

  • Human-like object concept representations emerge naturally in multimodal large language models. Changde Du, Kaicheng Fu, Bincheng Wen, Yi Sun, Jie Peng, ..., Chuncheng Zhang, Jinpeng Li, Shuang Qiu, Le Chang, Huiguang He. 01 Jul 2024.
  • StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. Axel Sauer, Katja Schwarz, Andreas Geiger. 01 Feb 2022.
  • Revisiting the Importance of Individual Units in CNNs via Ablation. Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba. 07 Jun 2018. (FAtt)
  • Methods for Interpreting and Understanding Deep Neural Networks. G. Montavon, Wojciech Samek, K. Müller. 24 Jun 2017. (FaML)