
Activity Recognition from Sensor Fusion on Fireman’s Helmet

Sean Hackett, Yang Cai, and Mel Siegel
Conference Paper, Proceedings of the 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI '19), October 2019

Abstract

Recognizing human activities in emergency situations is critical for ensuring the safety and well-being of first responders. In many cases, the thick smoke in a burning building impairs computer vision algorithms for activity recognition. Here we present a helmet-based sensor fusion method using an IMU and a time-of-flight laser distance sensor. We use a Decision Tree as a classifier and to select the most significant features. Our tests show that the method can recognize seven activities: walking, running, crawling, duck walking, standing, and walking upstairs and downstairs, with an accuracy between 81.7% and 93.6%. With limited training data and a lightweight requirement for implementation on the fireman's helmet, the Decision Tree provided accurate and reliable results. The use of the 1-D Lidar, which is not feasible in typical activity recognition applications but is essential for the helmet, combined with the 10-DOF IMU sensors, improved the robustness of the classifier. We found that this sensor fusion approach needs much less training data than methods such as Deep Learning. Once implemented on the helmet, activity recognition is executed in real time at a 50 Hz sampling rate within a 2-second window.
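The paper itself does not publish code; the following is a minimal sketch of the kind of pipeline the abstract describes: per-window features computed over a 2-second window at 50 Hz, fed to a decision tree classifier. The specific feature set, channel layout, and tree depth below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes 50 Hz sampling and a 2-second window (100 samples), per the paper;
# the feature set and channel layout below are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FS = 50            # sampling rate (Hz)
WINDOW = 2 * FS    # 2-second window -> 100 samples per channel

ACTIVITIES = ["walking", "running", "crawling", "duck walking",
              "standing", "upstairs", "downstairs"]

def window_features(window):
    """Reduce one (WINDOW, n_channels) block of sensor samples to a
    feature vector: mean, standard deviation, and peak-to-peak range
    per channel (a hypothetical feature set)."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.max(axis=0) - window.min(axis=0),
    ])

def make_dataset(windows, labels):
    """windows: list of (WINDOW, n_channels) arrays, one per labeled segment.
    Channels might be, e.g., 3-axis accel, gyro, and magnetometer plus
    barometric altitude from the 10-DOF IMU, and the 1-D Lidar distance."""
    X = np.stack([window_features(w) for w in windows])
    y = np.asarray(labels)
    return X, y

# A shallow tree keeps evaluation cheap enough for embedded helmet hardware.
clf = DecisionTreeClassifier(max_depth=6, random_state=0)

# Training, given labeled recordings:
#   X_train, y_train = make_dataset(train_windows, train_labels)
#   clf.fit(X_train, y_train)
# Real-time use: classify each incoming 2-second window w.
#   activity = ACTIVITIES[clf.predict(window_features(w)[None, :])[0]]
```

In this sketch the tree's feature importances would also indicate which window statistics are most significant, which is one way to realize the feature selection the abstract mentions.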

BibTeX

@conference{Hackett-2019-122264,
author = {Sean Hackett and Yang Cai and Mel Siegel},
title = {Activity Recognition from Sensor Fusion on Fireman's Helmet},
booktitle = {Proceedings of the 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI '19)},
year = {2019},
month = {October},
}