Hand Shape Estimation under Complex Backgrounds for Sign Language Recognition

Y. Hamada, Nobutaka Shimada, and Y. Shirai
Conference Paper, Proceedings of 6th IEEE International Conference on Automatic Face and Gesture Recognition (FG '04), pp. 589–594, May 2004

Abstract

This work presents a method of hand shape estimation under complex backgrounds, which may include a face. We reduce the number of matching candidate models by using a shape transition network. When the hand moves fast, the hand image is blurred and the hand contour is not available; in that case, no candidate matches the input image. By adding models that carry only the position and velocity of the hand, matched models are correctly traced through the transition network. For each matching candidate, the best-matched position is determined. To select the best-matched model, conventional methods assume that prominent edges are extracted only from the true hand contour. However, prominent edges may also be extracted from the background, and some parts of the true hand contour may fail to be extracted. We propose a matching criterion defined as the length of the part of the model contour covering the true hand contour, taking into account the probability that edges occur in the background. We show experimental results that support the effectiveness of the proposed criterion.
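To make the proposed criterion concrete, the following is a minimal toy sketch of a contour-matching score that discounts edge support by a background edge probability. The function name, data representation, and the exact scoring form are illustrative assumptions for exposition, not the paper's formulation:

```python
def matching_score(model_contour, edge_map, p_bg_edge):
    """Score a candidate model contour against an observed edge map.

    model_contour: list of (x, y) points along the candidate contour.
    edge_map: dict mapping (x, y) -> True where a prominent edge was
        extracted in the input image.
    p_bg_edge: probability that a prominent edge appears at a random
        background pixel (0 <= p_bg_edge < 1).

    A contour point lying on an extracted edge supports the match only
    to the extent that the edge is unlikely to be background clutter,
    so each supporting point contributes (1 - p_bg_edge) to the score.
    """
    support = sum(1 for pt in model_contour if edge_map.get(pt, False))
    return support * (1.0 - p_bg_edge)


# Example: a 4-point candidate contour, two of whose points coincide
# with extracted edges; one spurious edge lies off the contour.
contour = [(0, 0), (0, 1), (0, 2), (0, 3)]
edges = {(0, 0): True, (0, 1): True, (5, 5): True}
score = matching_score(contour, edges, p_bg_edge=0.2)  # 2 * 0.8 = 1.6
```

Under this scheme the best-matched model is simply the candidate (at its best-matched position) with the maximum score; a higher background edge probability shrinks the credit given to accidental edge coverage.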

BibTeX

@conference{Hamada-2004-16957,
author = {Y. Hamada and Nobutaka Shimada and Y. Shirai},
title = {Hand Shape Estimation under Complex Backgrounds for Sign Language Recognition},
booktitle = {Proceedings of 6th IEEE International Conference on Automatic Face and Gesture Recognition (FG '04)},
year = {2004},
month = {May},
pages = {589--594},
}