- Title
Hybrid convolutional neural networks and optical flow for video visual attention prediction.
- Authors
Sun, Meijun; Zhou, Ziqi; Zhang, Dong; Wang, Zheng
- Abstract
In this paper, a method based on convolutional neural networks (CNN) and optical flow is proposed for predicting visual attention in videos. First, a deep-learning framework is employed to extract spatial features from frames, replacing commonly used handcrafted features. Optical flow is then calculated to obtain temporal features of the moving objects in video frames, which tend to draw viewers’ attention. By integrating these two groups of features, a hybrid spatial-temporal feature set is obtained and used as the input to a support vector machine (SVM) to predict the degree of visual attention. Finally, two publicly available video datasets were used to evaluate the proposed model, and the results demonstrate the efficacy of the approach.
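The pipeline described in the abstract — per-frame spatial features, a motion-based temporal feature, concatenation, then a learned predictor — can be sketched as follows. This is a minimal illustration, not the authors' implementation: random convolution filters stand in for the deep CNN, mean absolute frame differencing stands in for optical flow, and a least-squares regressor stands in for the SVM. All function names and parameters here are hypothetical.

```python
import numpy as np

def spatial_features(frame, n_filters=4):
    # Stand-in for deep-CNN spatial features: average-pooled responses of
    # fixed random 3x3 convolution filters. (The paper uses a learned deep
    # network; random filters are a placeholder.)
    rng = np.random.default_rng(0)           # fixed seed -> same filters per call
    filters = rng.standard_normal((n_filters, 3, 3))
    h, w = frame.shape
    feats = []
    for f in filters:
        resp = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                resp[i, j] = np.sum(frame[i:i + 3, j:j + 3] * f)
        feats.append(resp.mean())            # global average pooling
    return np.array(feats)

def temporal_features(prev_frame, frame):
    # Crude proxy for optical-flow magnitude: statistics of the absolute
    # frame difference, capturing how much motion occurred.
    diff = np.abs(frame - prev_frame)
    return np.array([diff.mean(), diff.max()])

def hybrid_features(prev_frame, frame):
    # Concatenate spatial and temporal features into one hybrid vector.
    return np.concatenate([spatial_features(frame),
                           temporal_features(prev_frame, frame)])

# Toy end-to-end run on synthetic frames: build hybrid features for each
# frame pair, then fit a linear predictor of per-frame attention scores
# (least squares stands in for the paper's SVM).
rng = np.random.default_rng(1)
frames = rng.random((10, 16, 16))            # 10 synthetic grayscale frames
scores = rng.random(9)                       # synthetic attention targets
X = np.stack([hybrid_features(frames[t], frames[t + 1]) for t in range(9)])
w = np.linalg.lstsq(X, scores, rcond=None)[0]
pred = X @ w                                 # predicted attention per frame
```

The key design point the abstract emphasizes is the concatenation step in `hybrid_features`: appearance and motion cues are fused into a single vector before any prediction, so the downstream learner sees both at once.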
- Subjects
VISUAL perception; ARTIFICIAL neural networks; OPTICAL flow; VIDEO processing; SUPPORT vector machines
- Publication
Multimedia Tools & Applications, 2018, Vol 77, Issue 22, p29231
- ISSN
1380-7501
- Publication type
Article
- DOI
10.1007/s11042-018-5793-z