The Philosophical Foundations of Growing AI Like A Child

Despite excelling at high-level reasoning, current language models lack robustness in real-world scenarios and perform poorly on fundamental problem-solving tasks that are intuitive to humans. This paper argues that both challenges stem from a core discrepancy between human and machine cognitive development. While both systems rely on increasing representational power, language models lack core knowledge, the foundational cognitive structures present in humans, and this absence prevents them from developing robust, generalizable abilities in which complex skills are grounded in simpler ones within their respective domains. The paper examines empirical evidence of core knowledge in humans, analyzes why language models fail to acquire it, and argues that this limitation is not an inherent architectural constraint. Finally, it outlines a workable proposal for systematically integrating core knowledge into future multi-modal language models through the large-scale generation of synthetic training data using a cognitive prototyping strategy.
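
To make the closing proposal concrete, the following Python sketch shows one way a "cognitive prototyping" pipeline could work: a core-knowledge domain (here, object permanence) is reduced to a minimal scene template, and synthetic training examples are sampled from it at scale. This is purely illustrative and not drawn from the paper; all names, templates, and the example schema are assumptions.

import random

# Illustrative sketch only: each core-knowledge domain is encoded as a
# minimal prototype template, and many synthetic training examples are
# sampled from it. The domain shown (object permanence) and all names
# below are hypothetical, not the authors' implementation.

OBJECTS = ["ball", "cup", "block"]
CONTAINERS = ["box", "drawer", "basket"]

def object_permanence_episode() -> dict:
    """Sample one synthetic episode probing object permanence."""
    obj = random.choice(OBJECTS)
    container = random.choice(CONTAINERS)
    scene = (
        f"A {obj} is placed inside a {container}. "
        f"The {container} is closed and moved to another room."
    )
    question = f"Does the {obj} still exist?"
    return {"scene": scene, "question": question, "answer": "yes"}

def generate_dataset(n: int) -> list[dict]:
    """Generate n episodes for a single core-knowledge prototype."""
    return [object_permanence_episode() for _ in range(n)]

if __name__ == "__main__":
    for example in generate_dataset(3):
        print(example)

In this framing, scaling to other core-knowledge domains (e.g., numerosity or agency) would mean adding further prototype templates, so that simple skills are explicitly represented before more complex ones are trained on top of them.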
@article{luo2025_2502.10742,
  title={The Philosophical Foundations of Growing AI Like A Child},
  author={Dezhi Luo and Yijiang Li and Hokin Deng},
  journal={arXiv preprint arXiv:2502.10742},
  year={2025}
}