Composition of Relational Features with an Application to Explaining Black-Box Predictors

1 June 2022
A. Srinivasan
A. Baskar
T. Dash
Devanshu Shah
Abstract

Relational machine learning programs like those developed in Inductive Logic Programming (ILP) offer several advantages: (1) the ability to model complex relationships amongst data instances; (2) the use of domain-specific relations during model construction; and (3) the models constructed are human-readable, which is often one step closer to being human-understandable. However, these ILP-like methods have not been able to capitalise fully on the rapid hardware, software and algorithmic developments fuelling current progress in deep neural networks. In this paper, we treat relational features as functions and use the notion of generalised composition of functions to derive complex functions from simpler ones. We formulate the notion of a set of M-simple features in a mode language M and identify two composition operators (ρ1 and ρ2) from which all possible complex features can be derived. We use these results to implement a form of "explainable neural network" called Compositional Relational Machines, or CRMs, which are labelled directed acyclic graphs. The vertex-label for any vertex j in the CRM contains a feature-function f_j and a continuous activation function g_j. If j is a "non-input" vertex, then f_j is the composition of the features associated with the direct predecessors of j. Our focus is on CRMs in which the input vertices (those without any direct predecessors) all have M-simple features in their vertex-labels. We provide a randomised procedure for constructing and learning such CRMs. Using a notion of explanations based on the compositional structure of features in a CRM, we provide empirical evidence on synthetic data of the ability to identify appropriate explanations; and demonstrate the use of CRMs as 'explanation machines' for black-box models that do not provide explanations for their predictions.
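
To make the CRM structure described above concrete, the following is a minimal sketch, not the paper's implementation: a labelled DAG in which input vertices carry simple relational features, a non-input vertex's feature is a composition of its direct predecessors' features, and each vertex applies a continuous activation to its feature value. All names (Vertex, compose_and, the toy "train" relations) are illustrative assumptions, and conjunction is used only as one possible stand-in for the paper's composition operators.

```python
# Illustrative CRM-like labelled DAG (a sketch under stated assumptions,
# not the paper's implementation).
import math
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Example = Dict[str, list]  # a toy relational instance, e.g. {"cars": [...]}


@dataclass
class Vertex:
    name: str
    feature: Callable[[Example], float]       # f_j: feature-function
    activation: Callable[[float], float]      # g_j: continuous activation
    predecessors: List["Vertex"] = field(default_factory=list)

    def value(self, x: Example) -> float:
        # Each vertex outputs its activation applied to its feature value.
        return self.activation(self.feature(x))


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


# Two "simple" relational features on a toy train-like example (hypothetical).
def has_closed_car(x: Example) -> float:
    return float(any(c.get("closed") for c in x["cars"]))


def has_short_car(x: Example) -> float:
    return float(any(c.get("length") == "short" for c in x["cars"]))


def compose_and(preds: List[Vertex]) -> Callable[[Example], float]:
    """Illustrative composition: conjunction of predecessor vertex outputs."""
    def f(x: Example) -> float:
        return float(all(p.value(x) > 0.5 for p in preds))
    return f


# Input vertices hold simple features; the non-input vertex composes them.
v1 = Vertex("closed", has_closed_car, sigmoid)
v2 = Vertex("short", has_short_car, sigmoid)
v3 = Vertex("closed_and_short", lambda x: 0.0, sigmoid)
v3.predecessors = [v1, v2]
v3.feature = compose_and(v3.predecessors)

example = {"cars": [{"closed": True, "length": "short"}]}
print(v3.value(example))  # activation applied to the composed feature value
```

The compositional structure itself is what such a sketch exposes: an explanation for a prediction can, in principle, be read off the chain of vertex features leading to the output, which is the kind of structure-based explanation the abstract refers to.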
