In this thesis, we apply machine learning techniques and statistical analysis to the learning and validation of human control strategy (HCS) models. This work has potential impact in a number of applications ranging from space telerobotics and real-time training to the Intelligent Vehicle Highway System (IVHS) and human-machine interfacing. We focus on two important questions: (1) how to efficiently and effectively model human control strategies from real-time human control data, and (2) how to validate the performance of the learned HCS models in the feedback control loop. To these ends, we propose two discrete-time modeling frameworks, one for continuous and another for discontinuous human control strategies. For the continuous case, we propose and develop an efficient neural-network learning architecture that combines flexible cascade neural networks with extended Kalman filtering. This learning architecture converges to better local minima in many fewer epochs than competing neural-network learning regimes. For the discontinuous case, we propose and develop a statistical framework that models each control action with its own statistical model. A stochastic selection criterion, based on the posterior probability of each model, then selects a particular control action at each time step.
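The posterior-based stochastic selection step described above can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the function names and the per-model likelihood inputs are hypothetical, and it simply applies Bayes' rule over a set of candidate action models and samples an action index in proportion to the resulting posteriors.

```python
import random

def posteriors(likelihoods, priors):
    """Bayes' rule: p(model_i | data) is proportional to
    p(data | model_i) * p(model_i), normalized over all models."""
    joint = [lik * pri for lik, pri in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

def select_action(likelihoods, priors, rng=random.random):
    """Stochastically select a control-action index, where each
    action is chosen with probability equal to its model's posterior."""
    post = posteriors(likelihoods, priors)
    r, cum = rng(), 0.0
    for i, p in enumerate(post):
        cum += p
        if r < cum:
            return i
    return len(post) - 1  # guard against floating-point round-off
```

With equal priors, the posteriors reduce to normalized likelihoods, so an action whose model explains the current data twice as well is chosen twice as often.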
Next, we propose and develop a stochastic similarity measure, based on Hidden Markov Model (HMM) analysis, that compares dynamic, stochastic control trajectories. We derive important properties for this similarity measure and then apply it to validating the learned models of human control strategy by quantifying the similarity between model-generated control trajectories and the corresponding human control data. The degree of similarity (or dissimilarity) between a model and its training data indicates how appropriate a specific modeling approach is within a specific context. Throughout, the learning and validation methods proposed herein are tested on human control data collected through a dynamic, graphic driving simulator that we have developed for this purpose. In addition, we analyze actual driving data collected through the Navlab project at Carnegie Mellon University.
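The flavor of an HMM-based trajectory comparison can be sketched as below. This is only an assumed illustration of the general idea, not the thesis's exact measure: it computes sequence log-likelihoods with the scaled forward algorithm for a discrete-output HMM, then forms a symmetric ratio of cross-likelihoods to self-likelihoods, normalized per observation so sequence length cancels.

```python
import math

def forward_loglik(obs, pi, A, B):
    """log P(O | lambda) for a discrete observation sequence `obs` under
    an HMM lambda = (pi, A, B), via the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    scale = sum(alpha)
    log_l = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
        scale = sum(alpha)
        log_l += math.log(scale)
        alpha = [a / scale for a in alpha]
    return log_l

def similarity(obs1, lam1, obs2, lam2):
    """One plausible similarity between two trajectories, each paired with
    a trained HMM: geometric mean of per-symbol cross-likelihood ratios.
    Equals 1 when the two models explain each other's data as well as
    their own (a sketch; the thesis's normalization may differ)."""
    l11 = forward_loglik(obs1, *lam1) / len(obs1)  # self-likelihoods
    l22 = forward_loglik(obs2, *lam2) / len(obs2)
    l12 = forward_loglik(obs2, *lam1) / len(obs2)  # cross-likelihoods
    l21 = forward_loglik(obs1, *lam2) / len(obs1)
    return math.exp(0.5 * ((l12 + l21) - (l11 + l22)))
```

Comparing a model's generated trajectory against the human training data with such a measure yields a scalar that can be thresholded or ranked across competing modeling approaches.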
Michael Nechyba, "Learning and Validation of Human Control Strategies," doctoral dissertation, tech. report CMU-RI-TR-98-06, Robotics Institute, Carnegie Mellon University, May 1998.
@phdthesis{Nechyba1998,
  author  = "Michael Nechyba",
  title   = "Learning and Validation of Human Control Strategies",
  school  = "Robotics Institute, Carnegie Mellon University",
  number  = "CMU-RI-TR-98-06",
  month   = "May",
  year    = "1998",
  address = "Pittsburgh, PA",
}
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.