Decomposed Inductive Procedure Learning: Learning Academic Tasks with Human-Like Data Efficiency

15 May 2025
Daniel Weitekamp
Christopher James Maclellan
Erik Harpstead
Kenneth R. Koedinger
arXiv | PDF | HTML
Abstract

Human learning relies on specialization -- distinct cognitive mechanisms working together to enable rapid learning. In contrast, most modern neural networks rely on a single mechanism: gradient descent over an objective function. This raises the question: might human learners' ability to learn from just tens of examples, rather than the tens of thousands required by data-driven deep learning, arise from our use of multiple specialized learning mechanisms in combination? We investigate this question through an ablation analysis of inductive human learning simulations in online tutoring environments. Comparing reinforcement learning to a more data-efficient 3-mechanism symbolic rule induction approach, we find that decomposing learning into multiple distinct mechanisms significantly improves data efficiency, bringing it in line with human learning. Furthermore, we show that this decomposition has a greater impact on efficiency than the distinction between symbolic and subsymbolic learning alone. Efforts to align data-driven machine learning with human learning often overlook the stark difference in learning efficiency. Our findings suggest that integrating multiple specialized learning mechanisms may be key to bridging this gap.

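The abstract contrasts a single gradient-driven learner with a 3-mechanism symbolic rule-induction approach. The sketch below illustrates one plausible reading of such a decomposition, assuming a where/when/how split in the spirit of the authors' prior Apprentice Learner line of work; every class name, the toy operator space, and the single-column addition task are hypothetical illustrations, not the paper's implementation.

# Illustrative sketch: three cooperating learning mechanisms induce a skill
# from a handful of demonstrations. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Demonstration:
    """One worked example: interface state, the field acted on, and the value entered."""
    state: dict          # e.g. {"a_ones": 7, "b_ones": 5, "carry": 0}
    target_field: str    # e.g. "out_ones"
    value: int           # e.g. 2


class WhereLearner:
    """Learns which interface fields the skill writes to."""
    def __init__(self):
        self.targets = set()

    def update(self, demo):
        self.targets.add(demo.target_field)


class WhenLearner:
    """Learns preconditions as the conjunction of features shared by all demonstrations."""
    def __init__(self):
        self.preconditions = None

    def update(self, demo):
        present = {k for k, v in demo.state.items() if v is not None}
        self.preconditions = present if self.preconditions is None else self.preconditions & present

    def applies(self, state):
        available = {k for k, v in state.items() if v is not None}
        return self.preconditions is not None and self.preconditions <= available


class HowLearner:
    """Searches a tiny operator space for a formula consistent with every demonstration."""
    OPERATORS = {
        "sum_mod_10": lambda s: (s["a_ones"] + s["b_ones"] + s["carry"]) % 10,
        "sum":        lambda s: s["a_ones"] + s["b_ones"] + s["carry"],
        "difference": lambda s: s["a_ones"] - s["b_ones"],
    }

    def __init__(self):
        self.candidates = dict(self.OPERATORS)

    def update(self, demo):
        # Keep only operators that reproduce the demonstrated value.
        self.candidates = {name: f for name, f in self.candidates.items()
                           if f(demo.state) == demo.value}

    def predict(self, state):
        if not self.candidates:
            return None
        return next(iter(self.candidates.values()))(state)


class DecomposedLearner:
    """Ties the three mechanisms together into one skill."""
    def __init__(self):
        self.where, self.when, self.how = WhereLearner(), WhenLearner(), HowLearner()

    def train(self, demos):
        for d in demos:
            self.where.update(d)
            self.when.update(d)
            self.how.update(d)

    def act(self, state):
        if self.when.applies(state):
            return {f: self.how.predict(state) for f in self.where.targets}
        return {}


if __name__ == "__main__":
    demos = [
        Demonstration({"a_ones": 7, "b_ones": 5, "carry": 0}, "out_ones", 2),
        Demonstration({"a_ones": 4, "b_ones": 9, "carry": 1}, "out_ones", 4),
    ]
    learner = DecomposedLearner()
    learner.train(demos)
    # Two demonstrations already eliminate the inconsistent operators.
    print(learner.act({"a_ones": 8, "b_ones": 6, "carry": 0}))  # {'out_ones': 4}

The point of the sketch is the division of labor: because each mechanism induces only one facet of the skill (where to act, when to act, how to compute the value), a few demonstrations suffice, whereas a monolithic reward-driven learner would need many more trials to acquire the same behavior.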
View on arXiv
@article{weitekamp2025_2505.10422,
  title={Decomposed Inductive Procedure Learning: Learning Academic Tasks with Human-Like Data Efficiency},
  author={Daniel Weitekamp and Christopher MacLellan and Erik Harpstead and Kenneth Koedinger},
  journal={arXiv preprint arXiv:2505.10422},
  year={2025}
}