- Title
Combining multiple deep cues for action recognition.
- Authors
Wang, Ruiqi; Wu, Xinxiao
- Abstract
In this paper, we propose a novel deep learning based framework to fuse multiple cues (action motions, objects, and scenes) for complex action recognition. Since deep features achieve promising results, three deep representations are extracted to capture both the temporal and the contextual information of actions. In particular, for the action cue, we first adopt a deep detection model to detect persons frame by frame and then feed the deep representations of the detected persons into a Gated Recurrent Unit (GRU) model to generate the action features. Unlike existing deep action features, our feature is capable of modeling the global dynamics of long human motion. The scene and object cues are represented by deep features pooled over all frames of a video. Moreover, we introduce an lp-norm multiple kernel learning method that effectively combines the multiple deep representations of a video to learn robust action classifiers by capturing the contextual relationships among action, object, and scene. Extensive experiments on two real-world action datasets (UCF101 and HMDB51) clearly demonstrate the effectiveness of our method.
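The lp-norm multiple-cue fusion described in the abstract can be sketched in a few lines: build one kernel matrix per cue (action, object, scene) and combine them with nonnegative mixing weights normalized to unit lp-norm. This is only a minimal NumPy illustration with random, hypothetical per-video features; the paper's actual MKL optimization, which learns the weights jointly with the classifier, is not reproduced here.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared Euclidean distances -> RBF Gram matrix for one cue.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, betas, p=2.0):
    # Clip the mixing weights to be nonnegative, rescale them to unit
    # lp-norm, and form the weighted sum of the per-cue kernels.
    betas = np.maximum(np.asarray(betas, dtype=float), 0.0)
    betas = betas / np.linalg.norm(betas, ord=p)
    return sum(b * K for b, K in zip(betas, kernels))

rng = np.random.default_rng(0)
n = 6  # number of videos (toy example)
# Hypothetical per-video features for the three cues.
action_feat = rng.normal(size=(n, 16))  # e.g., GRU output over person tracks
object_feat = rng.normal(size=(n, 8))   # e.g., pooled object features
scene_feat = rng.normal(size=(n, 8))    # e.g., pooled scene features

Ks = [rbf_kernel(f) for f in (action_feat, object_feat, scene_feat)]
K = combine_kernels(Ks, betas=[1.0, 1.0, 1.0], p=2.0)
print(K.shape)  # combined Gram matrix, one row/column per video
```

A nonnegative combination of positive semidefinite kernels is itself positive semidefinite, so `K` can be passed directly to any kernel classifier (e.g., an SVM with a precomputed kernel).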
- Subjects
HUMAN activity recognition; DEEP learning; VIDEOS; ARTIFICIAL neural networks; HUMAN kinematics
- Publication
Multimedia Tools & Applications, 2019, Vol 78, Issue 8, p9933
- ISSN
1380-7501
- Publication type
Article
- DOI
10.1007/s11042-018-6509-0