Implicit In-Context Learning: Evidence from Artificial Language Experiments

Abstract
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While LLMs demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at inference time. We adapted three classic artificial language learning experiments spanning morphology, morphosyntax, and syntax to systematically evaluate inference-time implicit learning in two state-of-the-art OpenAI models: gpt-4o and o3-mini. Our results reveal domain-specific alignment between models and human behavior: o3-mini aligns more closely with humans in morphology, while both models align with humans in syntax.
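
A minimal sketch of how such an inference-time evaluation might be set up with the OpenAI Python SDK, assuming a single prompt that contains an exposure phase followed by a forced-choice test item. The exposure strings, the test item, and the prompt wording below are invented placeholders for illustration, not the paper's actual stimuli.

```python
# Hypothetical sketch: present artificial-language exposure pairs and a
# two-alternative test item in one prompt, so any learning happens purely
# in context at inference time. Stimuli here are invented placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented exposure set illustrating a made-up plural suffix "-ka".
exposure = [
    "one blicket / two blicketka",
    "one dax / two daxka",
    "one wug / two wugka",
]

# Invented test item probing generalization of the pattern.
test_item = "one tupa / two ___   (A) tupaka   (B) tupas"

prompt = (
    "You will see word pairs from a made-up language.\n"
    + "\n".join(exposure)
    + f"\n\nComplete the next pair by choosing A or B:\n{test_item}\n"
    "Answer with a single letter."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the paper also evaluates o3-mini via the same interface
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice one would loop such calls over many exposure sets and test items per linguistic domain (morphology, morphosyntax, syntax) and compare the models' choice patterns against human response distributions.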
@article{ma2025_2503.24190,
  title   = {Implicit In-Context Learning: Evidence from Artificial Language Experiments},
  author  = {Xiaomeng Ma and Qihui Xu},
  journal = {arXiv preprint arXiv:2503.24190},
  year    = {2025}
}