An Empirical Comparison of Cost Functions in Inductive Logic Programming

10 March 2025
Céline Hocquette
Andrew Cropper
Abstract

Recent inductive logic programming (ILP) approaches learn optimal hypotheses. An optimal hypothesis minimises a given cost function on the training data. There are many cost functions, such as training error, textual complexity, or the description length of hypotheses. However, selecting an appropriate cost function remains a key open question. To address this question, we extend a constraint-based ILP system to learn optimal hypotheses for seven standard cost functions. We then empirically compare the generalisation error of optimal hypotheses induced under these standard cost functions. Our results on over 20 domains and 1000 tasks, including game playing, program synthesis, and image reasoning, show that, while no cost function consistently outperforms the others, minimising training error or description length has the best overall performance. Notably, our results indicate that minimising the size of hypotheses does not always reduce generalisation error.
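To make the comparison concrete, here is a minimal sketch (not the paper's code) of three of the cost-function families the abstract names, applied to candidate hypotheses. The representation is an assumption: we summarise a hypothesis by its `size` (e.g. literal count) and its false positives `fp` and false negatives `fn` on the training data, and the MDL variant uses a common simplification (size plus error count).

```python
def training_error(size: int, fp: int, fn: int) -> int:
    """Cost = number of misclassified training examples."""
    return fp + fn

def textual_complexity(size: int, fp: int, fn: int) -> int:
    """Cost = size of the hypothesis (e.g. number of literals)."""
    return size

def description_length(size: int, fp: int, fn: int) -> int:
    """MDL-style cost: hypothesis size plus the cost of encoding
    its training errors (a simplified encoding, assumed here)."""
    return size + fp + fn

# An optimal hypothesis minimises the chosen cost over all candidates.
candidates = [
    {"size": 3, "fp": 2, "fn": 0},  # small but makes training errors
    {"size": 7, "fp": 0, "fn": 0},  # larger but consistent with the data
]

best_mdl = min(candidates, key=lambda h: description_length(**h))
best_err = min(candidates, key=lambda h: training_error(**h))
```

Note that the two criteria already disagree on this toy pair: description length prefers the small-but-imperfect hypothesis (cost 5 vs 7), while training error prefers the larger consistent one, which is the kind of divergence the paper measures at scale.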

@article{hocquette2025_2503.07554,
  title={An Empirical Comparison of Cost Functions in Inductive Logic Programming},
  author={C\'eline Hocquette and Andrew Cropper},
  journal={arXiv preprint arXiv:2503.07554},
  year={2025}
}