Model-Based Vision by Cooperative Processing of Evidence and Hypotheses Using Configuration Spaces

Yoshinori Kuno, Katsushi Ikeuchi, and Takeo Kanade
Proceedings of SPIE Vol. 938 Digital and Optical Shape Representation and Pattern Recognition, April 1988, pp. 444-453.



Abstract
This paper presents a model-based object recognition method which combines a bottom-up evidence accumulation process and a top-down hypothesis verification process. The hypothesize-and-test paradigm is fundamental in model-based vision. However, research issues remain on how the bottom-up process gathers pieces of evidence and when the top-down process should take the lead. To accumulate pieces of evidence, we use a configuration space whose points represent a configuration of an object (i.e., the position and orientation of an object in an image). If a feature is found which matches a part of an object model, the configuration space is updated to reflect the possible configurations of the object. A region in the configuration space where multiple pieces of evidence from such feature-part matches overlap suggests a high probability that the object exists in the image with a configuration in that region. The cost of the bottom-up process to further accumulate evidence for localization, and that of the top-down process to recognize the object by verification, are compared by considering the size of the search region and the probability of success of verification. If the cost of the top-down process becomes lower, hypotheses are generated and their verification processes are started. The first version of the recognition program has been written and applied to the recognition of a jet airplane in synthetic aperture radar (SAR) images. In creating a model of an object, we have used a SAR simulator as a sensor model, so that we can predict those object features which are reliably detectable by the sensor. The program is being tested with simulated SAR images, and shows promising performance.
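The evidence-accumulation scheme described above can be sketched as voting in a discretized configuration space, with a simple cost comparison deciding when to switch to top-down verification. The grid resolution, vote coordinates, and cost model below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch: accumulate feature-part match evidence in a discretized
# configuration space (x, y, theta) and decide when to hand off to a
# top-down verification process. All parameters are assumed for illustration.
from collections import defaultdict

def vote(accumulator, configs):
    """Each feature-part match constrains the object's configuration;
    record one vote per consistent (x, y, theta) cell."""
    for cell in configs:
        accumulator[cell] += 1

def peak_region(accumulator, min_votes=2):
    """Cells where multiple pieces of evidence overlap suggest a high
    probability that the object lies at that configuration."""
    return [c for c, v in accumulator.items() if v >= min_votes]

def should_verify(region_size, p_success, bottom_up_cost=1.0, verify_cost=5.0):
    """Toy cost comparison (assumed form): start top-down verification
    once its expected cost, discounted by the probability that
    verification succeeds, falls below continued bottom-up search
    over the same region."""
    expected_top_down = verify_cost * region_size * (1.0 - p_success)
    return expected_top_down < bottom_up_cost * region_size

acc = defaultdict(int)
# Two simulated feature-part matches whose consistent configuration
# sets overlap at the cell (5, 5, 2).
vote(acc, [(5, 5, 2), (5, 6, 2), (6, 5, 3)])
vote(acc, [(5, 5, 2), (4, 5, 1)])

region = peak_region(acc)
print(region)
print(should_verify(len(region), p_success=0.9))
```

As the overlap region shrinks and the estimated success probability rises, the expected top-down cost drops, which is the trigger for hypothesis generation in the scheme described above.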

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center

Text Reference
Yoshinori Kuno, Katsushi Ikeuchi, and Takeo Kanade, "Model-Based Vision by Cooperative Processing of Evidence and Hypotheses Using Configuration Spaces," Proceedings of SPIE Vol. 938 Digital and Optical Shape Representation and Pattern Recognition, April 1988, pp. 444-453.

BibTeX Reference
@inproceedings{Ikeuchi_1988_4230,
   author = "Yoshinori Kuno and Katsushi Ikeuchi and Takeo Kanade",
   title = "Model-Based Vision by Cooperative Processing of Evidence and Hypotheses Using Configuration Spaces",
   booktitle = "Proceedings of SPIE Vol. 938 Digital and Optical Shape Representation and Pattern Recognition",
   pages = "444 - 453",
   month = "April",
   year = "1988",
}