Independence Is Not an Issue in Neurosymbolic AI

10 April 2025
Håkan Karlsson Faronius
Pedro Zuidberg Dos Martires
Abstract

A popular approach to neurosymbolic AI is to take the output of the last layer of a neural network, e.g. a softmax activation, and pass it through a sparse computation graph encoding certain logical constraints one wishes to enforce. This induces a probability distribution over a set of random variables, which happen to be conditionally independent of each other in many commonly used neurosymbolic AI models. Such conditionally independent random variables have been deemed harmful as their presence has been observed to co-occur with a phenomenon dubbed deterministic bias, where systems learn to deterministically prefer one of the valid solutions from the solution space over the others. We provide evidence contesting this conclusion and show that the phenomenon of deterministic bias is an artifact of improperly applying neurosymbolic AI.
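To make the setup described above concrete, the following is a minimal sketch (not the authors' code) of the standard pipeline: neural outputs are treated as probabilities of independent Bernoulli variables and fed into a small computation graph that evaluates the probability that a logical constraint holds. The toy network, the XOR constraint, and the tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn

# Toy classifier: maps an input to the probability that its bit is 1.
# These probabilities are treated as independent Bernoulli variables,
# as in many commonly used neurosymbolic models.
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

def constraint_probability(p1, p2):
    # Sparse computation graph encoding the constraint X1 XOR X2 = 1.
    # Under independence, the probability that the constraint holds is the
    # weighted model count over its two satisfying assignments.
    return p1 * (1 - p2) + (1 - p1) * p2

# Train by minimizing the negative log-probability of the constraint.
x1, x2 = torch.randn(8, 4), torch.randn(8, 4)
p1, p2 = net(x1).squeeze(-1), net(x2).squeeze(-1)
loss = -torch.log(constraint_probability(p1, p2) + 1e-9).mean()
loss.backward()

Both assignments (1, 0) and (0, 1) satisfy the XOR constraint; the deterministic bias discussed in the paper is the observed tendency of such a trained system to collapse onto just one of these valid solutions.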

@article{faronius2025_2504.07851,
  title={Independence Is Not an Issue in Neurosymbolic AI},
  author={Håkan Karlsson Faronius and Pedro Zuidberg Dos Martires},
  journal={arXiv preprint arXiv:2504.07851},
  year={2025}
}