Boosted Backpropagation Learning for Training Deep Modular Networks

Alexander Grubb and J. Andrew (Drew) Bagnell
Proceedings of the 27th International Conference on Machine Learning, May, 2010.


Abstract
Divide-and-conquer is key to building sophisticated learning machines: hard problems are solved by composing a network of modules that solve simpler problems (LeCun et al., 1998; Rohde, 2002; Bradley, 2009). Many such existing systems rely on learning algorithms which are based on simple parametric gradient descent where the parametrization must be predetermined, or more specialized per-application algorithms which are usually ad-hoc and complicated. We present a novel approach for training generic modular networks that uses two existing techniques: the error propagation strategy of backpropagation and more recent research on descent in spaces of functions (Mason et al., 1999; Schölkopf & Smola, 2001). Combining these two methods of optimization gives a simple algorithm for training heterogeneous networks of functional modules using simple gradient propagation mechanics and established learning algorithms. The resulting separation of concerns between learning individual modules and error propagation mechanics eases implementation, enables a larger class of modular learning strategies, and allows per-module control of complexity/regularization. We derive and demonstrate this functional backpropagation and contrast it with traditional gradient descent in parameter space, observing that in our example domain the method is significantly more robust to local optima.
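
To make the idea concrete, the sketch below pairs backpropagated errors with a functional gradient (boosting) step. It is a minimal, hypothetical illustration rather than the authors' code: it assumes a two-module chain (one boosted module followed by a fixed tanh module), a squared loss, and scikit-learn regression trees as the weak learners; the toy target, step size, and tree depth are arbitrary choices.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 1))
    target = np.tanh(np.sin(3 * X[:, 0]))   # toy regression target

    learners, step = [], 0.25               # boosted module F: an ensemble of weak learners

    def F(X):
        """Evaluate the boosted module: a step-size-weighted sum of weak learners."""
        out = np.zeros(len(X))
        for h in learners:
            out += step * h.predict(X)
        return out

    for _ in range(50):
        y1 = F(X)                 # forward pass through the boosted module
        y2 = np.tanh(y1)          # forward pass through the fixed differentiable module
        # Backpropagate the squared-loss error through tanh by the chain rule:
        # dL/dy1 = (y2 - target) * (1 - tanh(y1)^2)
        grad_y1 = (y2 - target) * (1.0 - y2 ** 2)
        # Functional gradient step: project the negative pointwise gradient
        # onto the weak learner class and add the result to the ensemble.
        h = DecisionTreeRegressor(max_depth=3).fit(X, -grad_y1)
        learners.append(h)

    print("final squared loss:", 0.5 * np.mean((np.tanh(F(X)) - target) ** 2))

Note that the boosted module only ever sees the pointwise negative gradients delivered by backpropagation, so the weak learner could be swapped for any off-the-shelf regressor; this reflects the separation of concerns between module learning and error propagation described in the abstract.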

Keywords
machine learning, boosting, optimization, backpropagation, deep learning, modular learning

Notes
Sponsor: Office of Naval Research MURI
Associated Project(s): Multidisciplinary University Research Initiative

Text Reference
Alexander Grubb and J. Andrew (Drew) Bagnell, "Boosted Backpropagation Learning for Training Deep Modular Networks," Proceedings of the 27th International Conference on Machine Learning, May, 2010.

BibTeX Reference
@inproceedings{Grubb_2010_6844,
   author = "Alexander Grubb and J. Andrew (Drew) Bagnell",
   title = "Boosted Backpropagation Learning for Training Deep Modular Networks",
   booktitle = "Proceedings of the 27th International Conference on Machine Learning",
   month = "May",
   year = "2010",
}