Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations

2 April 2020
Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen
AI4CE

Papers citing "Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations"

4 / 4 papers shown
  • Studying Small Language Models with Susceptibilities. Garrett Baker, George Wang, Jesse Hoogland, Daniel Murfet. AAML. 25 Apr 2025.
  • Neuron-level Interpretation of Deep NLP Models: A Survey. Hassan Sajjad, Nadir Durrani, Fahim Dalvi. MILM, AI4CE. 30 Aug 2021.
  • Contrastive Explanations for Model Interpretability. Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg. 02 Mar 2021.
  • Explainable deep learning models in medical image analysis. Amitojdeep Singh, S. Sengupta, Vasudevan Lakshminarayanan. XAI. 28 May 2020.