Evolving Form and Function: Dual-Objective Optimization in Neural Symbolic Regression Networks

24 February 2025
Amanda Bertschinger
James P. Bagrow
Joshua Bongard
Abstract

Data increasingly abounds, but distilling its underlying relationships into something interpretable remains challenging. One approach is genetic programming, which "symbolically regresses" a dataset into an equation. However, symbolic regression (SR) faces the issue of requiring training from scratch for each new dataset. To generalize across datasets, deep learning techniques have been applied to SR. These networks, however, can only be trained with a symbolic objective: NN-generated and target equations are compared symbolically. This ignores the predictive power of the generated equations, which could instead be measured by a behavioral objective that compares an equation's predictions to the actual data. Here we introduce a method that combines gradient descent and evolutionary computation to yield neural networks that minimize both the symbolic and behavioral errors of the equations they generate from data. The resulting evolved networks generate more symbolically and behaviorally accurate equations than networks trained by state-of-the-art gradient-based neural symbolic regression methods. We hope this method demonstrates that evolutionary algorithms, combined with gradient descent, can improve SR results by yielding equations with more accurate form and function.
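The abstract distinguishes two ways to score a candidate equation: a symbolic objective (how closely its form matches the target equation) and a behavioral objective (how well its predictions fit the data). A minimal sketch of that distinction, assuming token-level edit distance as the symbolic error and mean squared error as the behavioral error (both are illustrative choices, not the paper's exact metrics):

```python
def symbolic_error(pred_tokens, target_tokens):
    """Normalized Levenshtein distance between two equations' token sequences."""
    m, n = len(pred_tokens), len(target_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred_tokens[i - 1] == target_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, n)

def behavioral_error(pred_fn, xs, ys):
    """Mean squared error between the candidate equation's predictions and the data."""
    return sum((pred_fn(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical example: target is y = x^2 + x, candidate misses the "+ x" term.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 + x for x in xs]
candidate = lambda x: x ** 2
sym = symbolic_error(["x", "^", "2"], ["x", "^", "2", "+", "x"])  # form mismatch
beh = behavioral_error(candidate, xs, ys)                         # prediction mismatch
```

A dual-objective optimizer would try to drive both `sym` and `beh` toward zero, since a low behavioral error alone does not guarantee the recovered equation has the right form, and vice versa.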

@article{bertschinger2025_2502.17393,
  title={Evolving Form and Function: Dual-Objective Optimization in Neural Symbolic Regression Networks},
  author={Amanda Bertschinger and James Bagrow and Joshua Bongard},
  journal={arXiv preprint arXiv:2502.17393},
  year={2025}
}