Memory-based Stochastic Optimization

Andrew Moore and Jeff Schneider
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 1066-1072, November 1995

Abstract

In this paper we introduce new algorithms for optimizing noisy plants in which each experiment is very expensive. The algorithms build a global non-linear model of the expected output while simultaneously performing Bayesian linear regression analysis of locally weighted polynomial models. The local model answers queries about confidence, noise, gradients, and Hessians, and uses them to make automated decisions similar to those made by a practitioner of Response Surface Methodology. The global and local models are combined naturally as a locally weighted regression. We examine the question of whether the global model can really help optimization, and we extend it to the case of time-varying functions. We compare the new algorithms with a highly tuned higher-order stochastic optimization algorithm on randomly generated functions and a simulated manufacturing task. We note significant improvements in total regret, time to converge, and final solution quality.
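The core local operation the abstract describes, a Bayesian fit of a locally weighted polynomial that can report a prediction, a gradient estimate, and a confidence at a query point, can be sketched as follows. This is a minimal illustration in Python, assuming a Gaussian distance kernel and a degree-1 local polynomial; the function name, bandwidth, noise, and prior parameters are illustrative choices, not details taken from the paper.

import numpy as np

def local_fit(X, y, x_query, bandwidth=0.3, noise_var=1.0, prior_var=10.0):
    """Bayesian locally weighted linear regression at one query point.

    X: (n, d) array of past experiment inputs; y: (n,) noisy observed outputs.
    Returns the posterior mean prediction, a gradient estimate (the local
    slope coefficients), and the predictive variance at x_query.
    Bandwidth, noise, and prior values here are illustrative assumptions.
    """
    # Gaussian kernel weights: nearby past experiments dominate the local fit.
    dists = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-dists / (2.0 * bandwidth ** 2))

    # Design matrix centered at the query, with an intercept column, so the
    # intercept coefficient is the predicted value at x_query itself.
    Phi = np.hstack([np.ones((X.shape[0], 1)), X - x_query])

    # Weighted Bayesian linear regression posterior with prior N(0, prior_var * I).
    W = np.diag(w)
    A = Phi.T @ W @ Phi / noise_var + np.eye(Phi.shape[1]) / prior_var
    A_inv = np.linalg.inv(A)
    beta = (A_inv @ Phi.T @ W @ y) / noise_var

    pred = beta[0]                       # local polynomial value at the query
    grad = beta[1:]                      # slope coefficients = gradient estimate
    pred_var = A_inv[0, 0] + noise_var   # confidence: predictive variance there
    return pred, grad, pred_var

# Toy usage on an invented noisy plant, showing how the gradient query could
# drive an RSM-style decision about where to run the next experiment.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))                                   # past experiments
y = -np.sum((X - 0.3) ** 2, axis=1) + 0.05 * rng.standard_normal(30)   # noisy outputs
x = np.zeros(2)
pred, grad, var = local_fit(X, y, x)
x_next = x + 0.1 * grad   # hill-climbing step along the locally estimated gradient

Because the fit is recomputed from all stored experiments at each query, the same memory of data supports both this local analysis and a global model; the paper's contribution is in how those two views are combined and used, which this sketch does not attempt to reproduce.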

BibTeX

@conference{Moore-1995-16129,
author = {Andrew Moore and Jeff Schneider},
title = {Memory-based Stochastic Optimization},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {1995},
month = {November},
pages = {1066-1072},
}