Learning to acquire novel cognitive tasks with evolution, plasticity and
meta-meta-learning
A hallmark of intelligence is the ability to learn new, flexible cognitive behaviors; that is, behaviors that require memorizing and exploiting a specific item of information for each new instance of the task. In meta-learning, agents are trained with an external, human-designed reinforcement learning algorithm to learn a specific cognitive task. However, animals pick up such cognitive tasks automatically, from stimuli and rewards alone, as a result of their evolved neural architectures and synaptic plasticity mechanisms: evolution has designed animal brains as self-contained reinforcement (meta-)learning systems, capable not just of performing specific cognitive tasks, but of acquiring novel cognitive tasks, including tasks never seen during evolution. Can we harness this process to generate artificial agents with such abilities? Here we evolve neural networks, endowed with plastic connections and neuromodulation, over a sizable set of simple meta-learning tasks based on a framework from computational neuroscience. The resulting evolved networks can automatically acquire a novel simple cognitive task, never seen during evolution, through the spontaneous operation of their evolved neural organization and plasticity system. We suggest that attending to the multiplicity of loops involved in natural learning may provide useful insight into the emergence of intelligent behavior.
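As a rough illustration of the kind of system the abstract describes (evolving both fixed weights and plasticity coefficients of a network whose Hebbian updates are gated by a reward signal), here is a minimal toy sketch. All function names, the dummy memorization task, and every hyperparameter are assumptions made for illustration; the paper's actual architecture, neuromodulation scheme, and task set differ.

```python
import numpy as np

# Toy sketch (assumed, not the paper's setup): a small recurrent network with
# evolved baseline weights W and evolved plasticity coefficients A. Within an
# episode, a Hebbian trace H adapts under a reward-modulated update, so the
# effective weights are W + A * H. The outer loop is a simple evolution
# strategy over the concatenated parameters of W and A.

N = 4  # number of units (illustrative)

def run_episode(params, rng, steps=20):
    """Play one episode of a dummy 'memorize the cue' task; return mean reward."""
    W = params[:N * N].reshape(N, N)          # evolved baseline weights
    A = params[N * N:2 * N * N].reshape(N, N) # evolved plasticity coefficients
    eta = 0.1                                 # within-episode plasticity rate
    H = np.zeros((N, N))                      # Hebbian trace, reset per episode
    target = rng.standard_normal(N)           # task instance to be memorized
    x = target.copy()                         # cue presented at the start
    total_reward = 0.0
    for _ in range(steps):
        y = np.tanh((W + A * H) @ x)          # fixed + plastic weights
        reward = -np.mean((y - target) ** 2)  # closer to the cue = higher reward
        # Reward acts as a neuromodulatory signal gating the Hebbian update.
        H += eta * reward * np.outer(y, x)
        total_reward += reward
        x = y
    return total_reward / steps

def evolve(generations=30, pop=32, sigma=0.1, lr=0.05, seed=0):
    """Evolution-strategy outer loop over the flat parameter vector."""
    rng = np.random.default_rng(seed)
    dim = 2 * N * N
    theta = np.zeros(dim)
    for _ in range(generations):
        noise = rng.standard_normal((pop, dim))
        fitness = np.array(
            [run_episode(theta + sigma * eps, rng) for eps in noise]
        )
        # Normalize fitness and take a standard ES gradient step.
        ranks = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        theta += lr / (pop * sigma) * noise.T @ ranks
    return theta, run_episode(theta, np.random.default_rng(1))
```

The key structural point, mirrored from the abstract, is the separation of loops: evolution shapes `W` and `A` across generations, while the Hebbian trace `H` does all within-episode learning, driven only by stimuli and rewards.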