- Title
Robust Human Activity Recognition using Multimodal Feature-Level Fusion.
- Authors
C. J., Anil Kumar; Abraham, Christo; M. C., Darshan; Dominic, Freddy; P. S., Anandakrishnan
- Abstract
Automated recognition of human activities or actions has great significance, as it underpins wide-ranging applications including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectories, motion encoding, key-pose extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, such camera-based approaches are affected by background clutter and illumination changes and are limited to a restricted field of view. Wearable inertial sensors provide a viable solution to these challenges but are subject to several limitations, such as location and orientation sensitivity. Owing to the complementary nature of the data obtained from cameras and inertial sensors, the use of multiple sensing modalities for accurate recognition of human actions is steadily increasing. This project presents a viable multimodal feature-level fusion approach for robust human action recognition, which utilizes data from multiple sensors, including an RGB camera, a depth sensor, and wearable inertial sensors. We extracted computationally efficient features from the data obtained from the RGB-D video camera and inertial body sensors. These features include densely extracted histogram of oriented gradients (HOG) features from RGB/depth videos and statistical signal attributes from wearable sensor data. The proposed human action recognition (HAR) framework is tested on a publicly available multimodal human action dataset, UTD-MHAD, consisting of 10 different human actions. Support Vector Machine and K-Nearest Neighbor classifiers are used for training and testing the proposed fusion model for HAR. The experimental results indicate that the proposed scheme achieves better recognition results compared to the state of the art.
The feature-level fusion of RGB and inertial sensors provides the overall best performance for the proposed system, with an accuracy rate of 97.6%.
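The fusion pipeline the abstract describes can be illustrated with a short sketch: per-modality features (a HOG-style orientation histogram from a frame, statistical attributes from an inertial window) are concatenated into a single vector and fed to an SVM. This is a minimal illustration on synthetic data, not the paper's implementation; the simplified gradient histogram, the mock data shapes, and the 3-class setup are all assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def hog_like_features(frame, bins=9):
    # Simplified gradient-orientation histogram, a stand-in for dense HOG.
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def inertial_features(signal):
    # Statistical signal attributes per axis: mean, std, min, max.
    return np.concatenate([signal.mean(0), signal.std(0),
                           signal.min(0), signal.max(0)])

def fuse(frame, signal):
    # Feature-level fusion: concatenate modality features into one vector.
    return np.concatenate([hog_like_features(frame), inertial_features(signal)])

# Synthetic stand-in data: 3 mock action classes (the paper uses 10 from UTD-MHAD).
rng = np.random.default_rng(0)
X, y = [], []
for label in range(3):
    for _ in range(30):
        frame = rng.normal(label, 1.0, (32, 32))    # mock RGB/depth frame
        signal = rng.normal(label, 1.0, (50, 6))    # mock 6-axis inertial window
        X.append(fuse(frame, signal))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("fused feature length:", X.shape[1])
print("test accuracy:", accuracy_score(yte, clf.predict(Xte)))
```

A K-Nearest Neighbor classifier (`sklearn.neighbors.KNeighborsClassifier`) can be dropped in for `SVC` unchanged, since the fusion step is independent of the classifier.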
- Subjects
HUMAN activity recognition; WEARABLE technology; CAMCORDERS; SUPPORT vector machines; K-nearest neighbor classification; HUMAN behavior; INERTIAL confinement fusion
- Publication
Grenze International Journal of Engineering & Technology (GIJET), 2023, Vol 9, Issue 1, p856
- ISSN
2395-5287
- Publication type
Article