Carnegie Mellon Robotics Institute
Cognitive models are models of human performance that represent human knowledge and internal information-management processes. They provide integrated representations of the knowledge, procedures, strategies, and problem-solving skills that humans use in domain or task situations. Execution of the model takes into account the cognitive capabilities and limitations of humans. The key problem in developing models of human behavior is that there is no 'standard' cognitive method appropriate for all situations. Usually, a labor-intensive task analysis is undertaken to map in detail what humans do to complete a set of tasks. Every model is coded as a new set of production rules and executable procedures (e.g., operators and methods). This approach is time-consuming, effort-intensive, and does not scale well to the development of large models.
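The production-rule encoding mentioned above can be illustrated with a minimal sketch. The rule names, working-memory facts, and interpreter here are hypothetical illustrations of the general technique, not an actual cognitive-model format:

```python
# Minimal production-rule sketch: each rule pairs a condition on working
# memory with an action that updates it. All rule and fact names below
# are hypothetical, chosen only to illustrate the match-fire cycle.

def make_rule(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

def run(rules, memory, max_cycles=10):
    """Fire the first matching rule each cycle until none match."""
    trace = []
    for _ in range(max_cycles):
        fired = False
        for rule in rules:
            if rule["condition"](memory):
                rule["action"](memory)
                trace.append(rule["name"])
                fired = True
                break
        if not fired:
            break
    return trace

# Example: a two-step task (perceive a goal, then act on it).
rules = [
    make_rule(
        "perceive-goal",
        lambda m: "goal" in m and "plan" not in m,
        lambda m: m.update(plan="reach-" + m["goal"]),
    ),
    make_rule(
        "execute-plan",
        lambda m: "plan" in m and "done" not in m,
        lambda m: m.update(done=True),
    ),
]

memory = {"goal": "target"}
trace = run(rules, memory)
# trace == ["perceive-goal", "execute-plan"]
```

Hand-coding such rule sets for each new task is exactly the labor-intensive step the text describes, since every task analysis yields a fresh set of conditions and actions.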
The objective of our research is to provide (1) a methodology, grounded in computational theory and meeting the cognitive requirements of human performance, for decomposing behavior functionality and mapping it to a set of software agents, and (2) robust computational methods and software tools that enable increased automation in the reuse and composition of those agents, so that the composite system reflects the observable behavior of the modeled humans. A key feature of agent-based development is the encapsulation of knowledge and processing in autonomous units of behavior/processing. This distinguishes agents from traditional software components, which are typically passive pieces of code. Current cognitive models are "monolithic" rather than "compositional" and do not take advantage of the possibilities of model combination and reuse.
Our overall research hypothesis is that cognitive and behavioral functionalities can be decomposed in appropriate ways, and that these fragments of behavior can then be composed in a semi-automated fashion, much as software components are composed together. Certain behaviors can be treated as modules and composed to simulate more complex behavior. In particular, our hypothesis is that such reuse and composition will be facilitated by agent-based software development methods. We propose to enable cognitive modules to be automatically invoked and composed in accordance with the task and situation at hand. We have taken important steps in this direction with the development of the RETSINA multi-agent infrastructure.
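The composition idea can be sketched as follows. Each behavior module declares what it requires and what it provides, and a simple planner chains modules whose requirements are met by the current situation. The module names, capability labels, and greedy chaining strategy are assumptions made for illustration; this is not the RETSINA API:

```python
# Sketch of semi-automated composition of behavior modules: a module
# declares the facts it requires and the facts it establishes, and a
# greedy forward-chaining composer selects modules until the goal holds.
# All names and labels here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class BehaviorModule:
    name: str
    requires: frozenset  # facts that must hold before the module runs
    provides: frozenset  # facts the module establishes

def compose(modules, initial, goal):
    """Chain any runnable module that adds new facts until the goal
    facts are all established; return the module sequence, or None."""
    state = set(initial)
    plan = []
    progress = True
    while not goal <= state and progress:
        progress = False
        for m in modules:
            if m.requires <= state and not m.provides <= state:
                state |= m.provides
                plan.append(m.name)
                progress = True
                break
    return plan if goal <= state else None

modules = [
    BehaviorModule("perceive", frozenset({"sensors"}), frozenset({"world-model"})),
    BehaviorModule("plan-route", frozenset({"world-model"}), frozenset({"route"})),
    BehaviorModule("navigate", frozenset({"route"}), frozenset({"at-goal"})),
]

plan = compose(modules, {"sensors"}, {"at-goal"})
# plan == ["perceive", "plan-route", "navigate"]
```

The point of the sketch is the interface discipline: because each module exposes only its requirements and effects, modules can be reused across tasks and recombined automatically, rather than rewritten as monolithic rule sets.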
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.