A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i

1 May 2025
Kola Ayonrinde
Louis Jaburi
Abstract

Mechanistic Interpretability aims to understand neural networks through causal explanations. We argue for the Explanatory View Hypothesis: that Mechanistic Interpretability research is a principled approach to understanding models because neural networks contain implicit explanations which can be extracted and understood. We hence show that Explanatory Faithfulness, an assessment of how well an explanation fits a model, is well-defined. We propose a definition of Mechanistic Interpretability (MI) as the practice of producing Model-level, Ontic, Causal-Mechanistic, and Falsifiable explanations of neural networks, allowing us to distinguish MI from other interpretability paradigms and detail MI's inherent limits. We formulate the Principle of Explanatory Optimism, a conjecture which we argue is a necessary precondition for the success of Mechanistic Interpretability.

@article{ayonrinde2025_2505.00808,
  title={A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i},
  author={Kola Ayonrinde and Louis Jaburi},
  journal={arXiv preprint arXiv:2505.00808},
  year={2025}
}