Creating Benchmarking Problems in Machine Vision: Scientific Challenge Problems

O. Firschein, M. Fischler, and Takeo Kanade
Workshop Paper, DARPA Image Understanding Workshop (IUW '93), April, 1993

Abstract

We discuss the need for a new series of benchmarks in the vision field, to provide a direct quantitative measure of progress understandable to sponsors of research as well as a guide to practitioners in the field. A first set of benchmarks in two categories is proposed: (1) static scenes containing man-made objects, and (2) static natural/outdoor scenes. The tests are "end-to-end" and involve determining how well a system can identify instances (an item or condition is present or absent) in selected regions of an image. The scoring would be set up so that the automatic setting of adjustable parameters is rewarded and manual tuning is penalized. To show how far machine vision has yet to go, a Benchmark 2000 problem is also suggested, using children's "what is wrong" puzzles in which defective objects in a line drawing of a scene must be found.
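As an illustration only, the sketch below shows one way such a present/absent benchmark might be scored; the per-region credit, the penalty factor for manually tuned parameters, and all function and variable names are assumptions for this example and are not taken from the paper.

```python
# Hypothetical scoring sketch for a present/absent benchmark.
# Assumption: each test case is one selected image region with a ground-truth
# label, the system reports a prediction, and a fixed penalty factor discounts
# results obtained with manually tuned parameters (automatic operation is
# rewarded by receiving full credit).

def benchmark_score(cases, penalty_for_manual_tuning=0.5):
    """Score a list of (predicted, actual, manually_tuned) tuples.

    predicted / actual: booleans, True if the item or condition is present.
    manually_tuned: True if the system's parameters were hand-adjusted
    for this case rather than set automatically.
    """
    total = 0.0
    for predicted, actual, manually_tuned in cases:
        credit = 1.0 if predicted == actual else 0.0
        if manually_tuned:
            credit *= penalty_for_manual_tuning  # discount hand-tuned results
        total += credit
    return total / len(cases) if cases else 0.0


if __name__ == "__main__":
    # Three regions: two answered correctly with automatic parameters,
    # one answered correctly only after manual tuning.
    results = [(True, True, False), (False, False, False), (True, True, True)]
    print(f"score = {benchmark_score(results):.2f}")  # score = 0.83
```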

BibTeX

@workshop{Firschein-1993-13470,
author = {O. Firschein and M. Fischler and Takeo Kanade},
title = {Creating Benchmarking Problems in Machine Vision: Scientific Challenge Problems},
booktitle = {Proceedings of DARPA Image Understanding Workshop (IUW '93)},
year = {1993},
month = {April},
}