Cognitive systems evolve complex representations for adaptive behavior
Representations are internal models of the world that provide context for a sensory stream; they are formed over evolutionary time and learned during an agent's lifetime. We argue here that representations are the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it using our new measure R. To measure how R changes over time, we evolve two types of networks---a recurrent artificial neural network and a network of hidden Markov gates---to solve a categorization task using a Genetic Algorithm. We find that the capacity to represent increases during evolutionary adaptation and that representations build up during the lifetime of these agents. We examine the concepts that are being represented, how they are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that any successful cognitive system that represents its environment within internal states should have a positive R.
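The abstract does not spell out how R is computed, but an information-theoretic measure of representation is commonly estimated as the mutual information between world states and internal (brain) states conditioned on the sensor states, I(W; B | S): information the internal states carry about the world beyond what is currently sensed. A minimal sketch under that assumption, with a plug-in estimator over discrete observations (the function name and the three-sequence interface are illustrative, not the paper's actual code):

```python
from collections import Counter
from math import log2

def representation_r(w, b, s):
    """Plug-in estimate of I(W; B | S) in bits from three aligned
    sequences of discrete observations: world state w, internal
    (brain) state b, and sensor state s at each time step.

    This is an assumed operationalization of the measure R, not the
    paper's published definition.
    """
    n = len(w)
    counts_wbs = Counter(zip(w, b, s))  # joint over (world, brain, sensor)
    counts_ws = Counter(zip(w, s))      # marginal over (world, sensor)
    counts_bs = Counter(zip(b, s))      # marginal over (brain, sensor)
    counts_s = Counter(s)               # marginal over sensor alone
    r = 0.0
    for (wi, bi, si), c in counts_wbs.items():
        p_wbs = c / n
        # I(W;B|S) term: p(w,b,s) * log2( p(w,b,s) p(s) / (p(w,s) p(b,s)) )
        r += p_wbs * log2(
            p_wbs * (counts_s[si] / n)
            / ((counts_ws[(wi, si)] / n) * (counts_bs[(bi, si)] / n))
        )
    return r
```

Under this estimator, an internal state that mirrors the world while the sensors are uninformative yields a high R, whereas an internal state that merely copies the current sensor reading yields R = 0, matching the intuition that representation is world knowledge not already present in the sensory stream.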