Evolution of Goal-Directed Behavior Using Limited Information in a Complex Environment - Robotics Institute Carnegie Mellon University


Matthew Glickman and Katia Sycara
Conference Paper, Proceedings of 1st Annual Genetic and Evolutionary Computation Conference (GECCO '99), pp. 1281–1288, July 1999

Abstract

In this paper, we apply an evolutionary algorithm to learn behavior on a novel task, exploring the general issue of learning effective behavior in a complex environment that provides only limited perception and goal feedback. Our approach evolves behavior in the form of artificial neural networks with recurrent connections. We apply it to a non-standard maze-navigation problem characterized by aspects that are difficult to handle with other methods, including the inability to sense all task-relevant state at any given time (the problem of "hidden state") and limited feedback about success or failure. The evolved networks perform very well on the target problem. Further findings include adaptation to noise in action selection, performance proportional to memory capacity, and improved performance when network weights are transferred from training on one maze to another.
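To make the setup concrete, the following is a minimal sketch of the kind of system the abstract describes: a recurrent network whose hidden state serves as memory for an agent with limited perception, with its weights evolved by an elitist hill climber. The maze, network size, sensor encoding, mutation scale, and fitness function are all illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import math
import random

# Illustrative maze: agent starts at S, seeks G, senses only adjacent walls
# (limited perception -- it cannot observe its own position: "hidden state").
MAZE = ["#####",
        "#S..#",
        "#.#.#",
        "#..G#",
        "#####"]
START, GOAL = (1, 1), (3, 3)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

N_IN, N_HID = 4, 6  # 4 wall sensors; 6 recurrent hidden units (assumed sizes)
N_W = N_HID * (N_IN + N_HID + 1) + 4 * N_HID  # total weights, flattened

def sense(pos):
    """Limited perception: 1.0 for each of the four adjacent cells that is a wall."""
    return [1.0 if MAZE[pos[0] + dr][pos[1] + dc] == '#' else 0.0
            for dr, dc in MOVES]

def run(weights, steps=20):
    """Run one episode; return negative Manhattan distance to the goal at the end
    (limited feedback: a single scalar, no per-step reward)."""
    i = 0
    def take(n):
        nonlocal i
        w = weights[i:i + n]; i += n
        return w
    w_in  = [take(N_IN)  for _ in range(N_HID)]   # input -> hidden
    w_rec = [take(N_HID) for _ in range(N_HID)]   # hidden -> hidden (recurrence)
    bias  = take(N_HID)
    w_out = [take(N_HID) for _ in range(4)]       # hidden -> action scores
    hid = [0.0] * N_HID  # recurrent state acts as the agent's memory
    pos = START
    for _ in range(steps):
        x = sense(pos)
        hid = [math.tanh(sum(wi * xi for wi, xi in zip(w_in[h], x))
                         + sum(wr * hh for wr, hh in zip(w_rec[h], hid))
                         + bias[h]) for h in range(N_HID)]
        out = [sum(w * hh for w, hh in zip(w_out[a], hid)) for a in range(4)]
        dr, dc = MOVES[max(range(4), key=lambda a: out[a])]
        nxt = (pos[0] + dr, pos[1] + dc)
        if MAZE[nxt[0]][nxt[1]] != '#':  # walls block movement
            pos = nxt
        if pos == GOAL:
            break
    return -(abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1]))

def evolve(generations=200, sigma=0.5, seed=0):
    """(1+1)-style elitist evolution: mutate the best weight vector and keep
    the child only if it does at least as well."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(N_W)]
    best_fit = run(best)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in best]
        fit = run(child)
        if fit >= best_fit:  # elitism: the best network is never lost
            best, best_fit = child, fit
    return best, best_fit
```

The recurrent connections are what let fitness depend on memory capacity: with only wall sensors as input, distinguishing corridor cells that look identical requires internal state carried across steps.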

BibTeX

@conference{Glickman-1999-14951,
author = {Matthew Glickman and Katia Sycara},
title = {Evolution of Goal-Directed Behavior Using Limited Information in a Complex Environment},
booktitle = {Proceedings of 1st Annual Genetic and Evolutionary Computation Conference (GECCO '99)},
year = {1999},
month = {July},
pages = {1281--1288},
}