- Title
Fusion of Video and Inertial Sensing for Deep Learning–Based Human Action Recognition.
- Authors
Wei, Haoran; Jafari, Roozbeh; Kehtarnavaz, Nasser
- Abstract
This paper presents a fusion framework that simultaneously uses video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, to achieve more robust human action recognition than either sensing modality used individually. The data captured by these sensors are converted into 3D video images and 2D inertial images, which are then fed into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for action recognition. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted on the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results indicate that both the decision-level and feature-level fusion approaches achieve higher recognition accuracies than either sensing modality used individually. The highest accuracy, 95.6%, is obtained with the decision-level fusion approach.
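To illustrate the two fusion strategies named in the abstract, here is a minimal sketch, assuming decision-level fusion averages the per-class score vectors produced by the two networks and feature-level fusion concatenates their feature vectors before a joint classifier. The function names, vector sizes, and the averaging rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def decision_level_fusion(video_scores, inertial_scores):
    """Fuse at the decision level: average the per-class probability
    vectors from the two networks, then pick the top class.
    (Simple averaging is an assumed combination rule.)"""
    fused = (np.asarray(video_scores) + np.asarray(inertial_scores)) / 2.0
    return int(np.argmax(fused))

def feature_level_fusion(video_features, inertial_features):
    """Fuse at the feature level: concatenate the two modality feature
    vectors into one vector to be fed to a joint classifier."""
    return np.concatenate([np.asarray(video_features),
                           np.asarray(inertial_features)])

# Toy example with 27 action classes, as in UTD-MHAD.
rng = np.random.default_rng(0)
video_probs = rng.random(27); video_probs /= video_probs.sum()
inertial_probs = rng.random(27); inertial_probs /= inertial_probs.sum()
predicted_class = decision_level_fusion(video_probs, inertial_probs)

# Hypothetical feature sizes for the two CNN branches.
joint_features = feature_level_fusion(rng.random(128), rng.random(64))
```

In practice the averaging step in decision-level fusion could be replaced by weighted voting or a learned combiner, while the concatenated vector from feature-level fusion would feed additional trainable layers.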
- Publication
Sensors, 2019, Vol. 19, Issue 17, p. 3680
- ISSN
1424-8220
- Publication type
Article
- DOI
10.3390/s19173680